
‘I Don’t Really Want to Work for Facebook.’ So Say Some Computer Science Students.

Surprisingly, a number of students and Generation Y digital natives are turning against the social media giants.


The Cal Hacks 5.0 competition drew students to the University of California, Berkeley, including, from left, Haitao Zhang, Ingrid Wu and Emily Hu, all students at Berkeley. Some students at the hackathon expressed a reluctance to work for big tech firms. Credit: Max Whittaker for The New York Times

BERKELEY, Calif. — A job at Facebook sounds pretty plum. The interns make around $8,000 a month, and an entry-level software engineer makes about $140,000 a year. The food is free. There’s a walking trail with indigenous plants and a juice bar.

But the tone among highly sought-after computer scientists about the social network is changing. On a recent night at the University of California, Berkeley, as a group of young engineers gathered to show off their tech skills, many said they would avoid taking jobs at the social network.

“I’ve heard a lot of employees who work there don’t even use it,” said Niky Arora, 19, an engineering student, who was recently invited to a Facebook recruiting event at the company’s headquarters in Menlo Park, Calif. “I just don’t believe in the product because like, Facebook, the baseline of everything they do is desire to show people more ads.”

Emily Zhong, 20, a computer science major, piped up. “Surprisingly, a lot of my friends now are like, ‘I don’t really want to work for Facebook,’” she said, citing “privacy stuff, fake news, personal data, all of it.”

“Before it was this glorious, magical thing to work there,” said Jazz Singh, 18, also studying computer science. “Now it’s like, just because it does what you want doesn’t mean it’s doing good.”

As Facebook has been rocked by scandal after scandal, some young engineers are souring on the company. Many are still taking jobs there, but those who do are doing it a little more quietly, telling their friends that they will work to change it from within or that they have carved out more ethical work at a company whose reputation has turned toxic.

Facebook, which employs more than 30,000 full-time workers around the world, said, “In 2018, we’ve hired more engineers than ever before.” The company added, “We continue to see strong engagement and excitement within the engineering community at the prospect of joining our company.”

Niky Arora, 19, a student at Berkeley, said she was skeptical about working for Facebook, which invited her to a recruiting event recently. “I’ve heard a lot of employees who work there don’t even use it,” she said. Credit: Max Whittaker for The New York Times

The changing attitudes are happening beyond Facebook. Across Silicon Valley, tech recruiters said job applicants in general were asking more hard questions during interviews, wanting to know specifically what they would be asked to do at the company. Career coaches said they had tech employees reaching out to get tips on handling moral quandaries. The questions include “How do I avoid a project I disagree with?” and “How do I remind my bosses of the company mission statement?”

“Employees are wising up to the fact that you can have a mission statement on your website, but when you’re looking at how the company creates new products or makes decisions, the correlation between the two is not so tightly aligned,” said David Chie, the head of Palo Alto Staffing, a tech job placement service in Silicon Valley. “Everyone’s having this conversation.”

When engineers apply for jobs, they are also doing it differently.

“They do a lot more due diligence,” said Heather Johnston, Bay Area district president for the tech job staffing agency Robert Half. “Before, candidates were like: ‘Oh, I don’t want to do team interviews. I want a one-and-done.’” Now, she added, job candidates “want to meet the team.”

“They’re not just going to blindly take a company because of the name anymore,” she said.

Yet while many of the big tech companies have been hit by a change in public perception, Facebook seems uniquely tarred among young workers.

“I’ve had a couple of clients recently say they’re not as enthusiastic about Facebook because they’re frustrated with what they see happening politically or socially,” said Paul Freiberger, president of Shimmering Careers, a career counseling group based in San Mateo, Calif. “It’s privacy and political news, and concern that it’s going to be hard to correct these things from inside.”

Chad Herst, a leadership and career coach based in San Francisco since 2008, said that now, for the first time, he had clients who wanted to avoid working for big social media companies like Facebook or Twitter.

“They’re concerned about where democracy is going, that social media polarizes us, and they don’t want to be building it,” Mr. Herst said. “People really have been thinking about the mission of the company and what the companies are trying to achieve a little more.”

He said one client, a midlevel executive at Facebook, wanted advice on how to shift her group’s work to encourage users to connect offline as well. But her efforts met with internal resistance.

“She was trying to figure out: ‘How do I politic this? How do I language this?’” Mr. Herst said. “And I was telling her to bring up some of Mark Zuckerberg’s past statements about connecting people.”

On the recent evening at the University of California, Berkeley, around 2,200 engineering students from around the country gathered for Cal Hacks 5.0 — a competition to build the best apps. The event spanned a weekend, so teenage competitors dragged pillows around with them. The hosts handed out 2,000 burritos as students registered.

It was also a hiring event. Recruiters from Facebook and Alphabet set up booths (free sunglasses from Facebook; $200 in credit to the Google Cloud platform from Alphabet).

In the auditorium, the head of Y Combinator, a start-up incubator and investment firm, gave opening remarks, recommending that young people avoid jobs in big tech.

“You get to program your life on a totally different scale,” said Michael Seibel, who leads Y Combinator. “The worst thing that can happen to you is you get a job at Google.” He called those jobs “$100,000-a-year welfare” — meaning, he said, that workers can get tethered to the paycheck and avoid taking risks.

The event then segued to a word from the sponsor, Microsoft. Justin Garrett, a Microsoft recruiter who on his LinkedIn profile calls himself a senior technical evangelist, stepped onstage, laughing a little.

“So, Michael’s a tough guy to follow, especially when you work for one of those big companies,” Mr. Garrett said. “He called it welfare. I like to call it tremendous opportunity.”

Then students flooded into the stadium, which was filled with long tables of computers where they would stay and compete. In the middle of the scrum, three friends joked around. Caleb Thomas, 21, was gently made fun of because he had accepted an internship at Facebook.

“Come on, guys,” Mr. Thomas said.

“These are the realities of how the business works,” said Samuel Resendez, 20, a computer science student at the University of Southern California.

It turned out Mr. Resendez had interned at Facebook in the summer. Olivia Brown, 20, head of Stanford’s Computer Science and Social Good club and an iOS intern at Mozilla, called him out on it. “But you still worked at Facebook, too,” she said.

“Well, at least I signed before Cambridge Analytica,” Mr. Resendez said, a little bashful about the data privacy and election manipulation scandal that rocked the company this year. “Ninety-five percent of what Facebook is doing is delivering memes.”

Ms. Brown said a lot of students criticize Facebook and talk about how they would not work there, but ultimately join. “Everyone cares about ethics in tech before they get a contract,” she said.

Ms. Brown said she thought that could change soon, though, as the social stigma of working for Facebook began outweighing the financial benefits.

“Defense companies have had this reputation for a long time,” she said. “Social networks are just getting that.”

Source: https://www.nytimes.com/2018/11/15/technology/jobs-facebook-computer-science-students.html


The Pentagon’s Push to Program Soldiers’ Brains

The military wants future super-soldiers to control robots with their thoughts.

I. Who Could Object?

“Tonight I would like to share with you an idea that I am extremely passionate about,” the young man said. His long black hair was swept back like a rock star’s, or a gangster’s. “Think about this,” he continued. “Throughout all human history, the way that we have expressed our intent, the way we have expressed our goals, the way we have expressed our desires, has been limited by our bodies.” When he inhaled, his rib cage expanded and filled out the fabric of his shirt. Gesturing toward his body, he said, “We are born into this world with this. Whatever nature or luck has given us.”

His speech then took a turn: “Now, we’ve had a lot of interesting tools over the years, but fundamentally the way that we work with those tools is through our bodies.” Then a further turn: “Here’s a situation that I know all of you know very well—your frustration with your smartphones, right? This is another tool, right? And we are still communicating with these tools through our bodies.”

And then it made a leap: “I would claim to you that these tools are not so smart. And maybe one of the reasons why they’re not so smart is because they’re not connected to our brains. Maybe if we could hook those devices into our brains, they could have some idea of what our goals are, what our intent is, and what our frustration is.”

So began “Beyond Bionics,” a talk by Justin C. Sanchez, then an associate professor of biomedical engineering and neuroscience at the University of Miami, and a faculty member of the Miami Project to Cure Paralysis. He was speaking at a TEDx conference in Florida in 2012. What lies beyond bionics? Sanchez described his work as trying to “understand the neural code,” which would involve putting “very fine microwire electrodes”—the diameter of a human hair—“into the brain.” When we do that, he said, we would be able to “listen in to the music of the brain” and “listen in to what somebody’s motor intent might be” and get a glimpse of “your goals and your rewards” and then “start to understand how the brain encodes behavior.”

He explained, “With all of this knowledge, what we’re trying to do is build new medical devices, new implantable chips for the body that can be encoded or programmed with all of these different aspects. Now, you may be wondering, what are we going to do with those chips? Well, the first recipients of these kinds of technologies will be the paralyzed. It would make me so happy by the end of my career if I could help get somebody out of their wheelchair.”

Sanchez went on, “The people that we are trying to help should never be imprisoned by their bodies. And today we can design technologies that can help liberate them from that. I’m truly inspired by that. It drives me every day when I wake up and get out of bed. Thank you so much.” He blew a kiss to the audience.

A year later, Justin Sanchez went to work for the Defense Advanced Research Projects Agency, the Pentagon’s R&D department. At darpa, he now oversees all research on the healing and enhancement of the human mind and body. And his ambition involves more than helping get disabled people out of their wheelchairs—much more.

DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object? But even if this claim were true, such changes would have extensive ethical, social, and metaphysical implications. Within decades, neurotechnology could cause social disruption on a scale that would make smartphones and the internet look like gentle ripples on the pond of history.

Most unsettling, neurotechnology confounds age-old answers to this question: What is a human being?

II. High Risk, High Reward

In his 1958 State of the Union address, President Dwight Eisenhower declared that the United States of America “must be forward-looking in our research and development to anticipate the unimagined weapons of the future.” A few weeks later, his administration created the Advanced Research Projects Agency, a bureaucratically independent body that reported to the secretary of defense. This move had been prompted by the Soviet launch of the Sputnik satellite. The agency’s original remit was to hasten America’s entry into space.

During the next few years, arpa’s mission grew to encompass research into “man-computer symbiosis” and a classified program of experiments in mind control that was code-named Project Pandora. There were bizarre efforts that involved trying to move objects at a distance by means of thought alone. In 1972, with an increment of candor, the word Defense was added to the name, and the agency became darpa. Pursuing its mission, darpa funded researchers who helped invent technologies that changed the nature of battle (stealth aircraft, drones) and shaped daily life for billions (voice-recognition technology, GPS devices). Its best-known creation is the internet.

The agency’s penchant for what it calls “high-risk, high-reward” research ensured that it would also fund a cavalcade of folly. Project Seesaw, a quintessential Cold War boondoggle, envisioned a “particle-beam weapon” that could be deployed in the event of a Soviet attack. The idea was to set off a series of nuclear explosions beneath the Great Lakes, creating a giant underground chamber. Then the lakes would be drained, in a period of 15 minutes, to generate the electricity needed to set off a particle beam. The beam would accelerate through tunnels hundreds of miles long (also carved out by underground nuclear explosions) in order to muster enough force to shoot up into the atmosphere and knock incoming Soviet missiles out of the sky. During the Vietnam War, darpa tried to build a Cybernetic Anthropomorphous Machine, a jungle vehicle that officials called a “mechanical elephant.”

The diverse and sometimes even opposing goals of darpa scientists and their Defense Department overlords merged into a murky, symbiotic research culture—“unencumbered by the typical bureaucratic oversight and uninhibited by the restraints of scientific peer review,” Sharon Weinberger wrote in a recent book, The Imagineers of War. In Weinberger’s account, darpa’s institutional history involves many episodes of introducing a new technology in the context of one appealing application, while hiding other genuine but more troubling motives. At darpa, the left hand knows, and doesn’t know, what the right hand is doing.

The agency is deceptively compact. A mere 220 employees, supported by about 1,000 contractors, report for work each day at darpa’s headquarters, a nondescript glass-and-steel building in Arlington, Virginia, across the street from the practice rink for the Washington Capitals. About 100 of these employees are program managers—scientists and engineers, part of whose job is to oversee about 2,000 outsourcing arrangements with corporations, universities, and government labs. The effective workforce of darpa actually runs into the range of tens of thousands. The budget is officially said to be about $3 billion, and has stood at roughly that level for an implausibly long time—the past 14 years.

The Biological Technologies Office, created in 2014, is the newest of darpa’s six main divisions. This is the office headed by Justin Sanchez. One purpose of the office is to “restore and maintain warfighter abilities” by various means, including many that emphasize neurotechnology—applying engineering principles to the biology of the nervous system. For instance, the Restoring Active Memory program develops neuroprosthetics—tiny electronic components implanted in brain tissue—that aim to alter memory formation so as to counteract traumatic brain injury. Does darpa also run secret biological programs? In the past, the Department of Defense has done such things. It has conducted tests on human subjects that were questionable, unethical, or, many have argued, illegal. The Big Boy protocol, for example, compared radiation exposure of sailors who worked above and below deck on a battleship, never informing the sailors that they were part of an experiment.

Last year I asked Sanchez directly whether any of darpa’s neurotechnology work, specifically, was classified. He broke eye contact and said, “I can’t—We’ll have to get off that topic, because I can’t answer one way or another.” When I framed the question personally—“Are you involved with any classified neuroscience project?”—he looked me in the eye and said, “I’m not doing any classified work on the neurotechnology end.”

If his speech is careful, it is not spare. Sanchez has appeared at public events with some frequency (videos are posted on darpa’s YouTube channel), to articulate joyful streams of good news about darpa’s proven applications—for instance, brain-controlled prosthetic arms for soldiers who have lost limbs. Occasionally he also mentions some of his more distant aspirations. One of them is the ability, via computer, to transfer knowledge and thoughts from one person’s mind to another’s.

III. “We Try to Find Ways to Say Yes”

Medicine and biology were of minor interest to darpa until the 1990s, when biological weapons became a threat to U.S. national security. The agency made a significant investment in biology in 1997, when darpa created the Controlled Biological Systems program. The zoologist Alan S. Rudolph managed this sprawling effort to integrate the built world with the natural world. As he explained it to me, the aim was “to increase, if you will, the baud rate, or the cross-communication, between living and nonliving systems.” He spent his days working through questions such as “Could we unlock the signals in the brain associated with movement in order to allow you to control something outside your body, like a prosthetic leg or an arm, a robot, a smart home—or to send the signal to somebody else and have them receive it?”

Human enhancement became an agency priority. “Soldiers having no physical, physiological, or cognitive limitation will be key to survival and operational dominance in the future,” predicted Michael Goldblatt, who had been the science and technology officer at McDonald’s before joining darpa in 1999. To enlarge humanity’s capacity to “control evolution,” he assembled a portfolio of programs with names that sounded like they’d been taken from video games or sci-fi movies: Metabolic Dominance, Persistence in Combat, Continuous Assisted Performance, Augmented Cognition, Peak Soldier Performance, Brain-Machine Interface.

The programs of this era, as described by Annie Jacobsen in her 2015 book, The Pentagon’s Brain, often shaded into mad-scientist territory. The Continuous Assisted Performance project attempted to create a “24/7 soldier” who could go without sleep for up to a week. (“My measure of success,” one darpa official said of these programs, “is that the International Olympic Committee bans everything we do.”)

Dick Cheney relished this kind of research. In the summer of 2001, an array of “super-soldier” programs was presented to the vice president. His enthusiasm contributed to the latitude that President George W. Bush’s administration gave darpa—at a time when the agency’s foundation was shifting. Academic science gave way to tech-industry “innovation.” Tony Tether, who had spent his career working alternately for Big Tech, defense contractors, and the Pentagon, became darpa’s director. After the 9/11 attacks, the agency announced plans for a surveillance program called Total Information Awareness, whose logo included an all-seeing eye emitting rays of light that scanned the globe. The pushback was intense, and Congress took darpa to task for Orwellian overreach. The head of the program—Admiral John Poindexter, who had been tainted by scandal back in the Reagan years—later resigned, in 2003. The controversy also drew unwanted attention to darpa’s research on super-soldiers and the melding of mind and machine. That research made people nervous, and Alan Rudolph, too, found himself on the way out.

In this time of crisis, darpa invited Geoff Ling, a neurology‑ICU physician and, at the time, an active-duty Army officer, to join the Defense Sciences Office. (Ling went on to work in the Biological Technologies Office when it spun out from Defense Sciences, in 2014.) When Ling was interviewed for his first job at darpa, in 2002, he was preparing for deployment to Afghanistan and thinking about very specific combat needs. One was a “pharmacy on demand” that would eliminate the bulk of powdery fillers from drugs in pill or capsule form and instead would formulate active ingredients for ingestion via a lighter, more compact, dissolving substance—like Listerine breath strips. This eventually became a darpa program. The agency’s brazen sense of possibility buoyed Ling, who recalls with pleasure how colleagues told him, “We try to find ways to say yes, not ways to say no.” With Rudolph gone, Ling picked up the torch.

Ling talks fast. He has a tough-guy voice. The faster he talks, the tougher he sounds, and when I met him, his voice hit top speed as he described a first principle of Defense Sciences. He said he had learned this “particularly” from Alan Rudolph: “Your brain tells your hands what to do. Your hands basically are its tools, okay? And that was a revelation to me.” He continued, “We are tool users—that’s what humans are. A human wants to fly, he builds an airplane and flies. A human wants to have recorded history, and he creates a pen. Everything we do is because we use tools, right? And the ultimate tools are our hands and feet. Our hands allow us to work with the environment to do stuff, and our feet take us where our brain wants to go. The brain is the most important thing.”

Ling connected this idea of the brain’s primacy with his own clinical experience of the battlefield. He asked himself, “How can I liberate mankind from the limitations of the body?” The program for which Ling became best known is called Revolutionizing Prosthetics. Since the Civil War, as Ling has said, the prosthetic arm given to most amputees has been barely more sophisticated than “a hook,” and not without risks: “Try taking care of your morning ablutions with that bad boy, and you’re going to need a proctologist every goddamn day.” With help from darpa colleagues and academic and corporate researchers, Ling and his team built something that was once all but unimaginable: a brain-controlled prosthetic arm.

No invention since the internet has been such a reliable source of good publicity for darpa. Milestones in its development were hailed with wonder. In 2012, 60 Minutes showed a paralyzed woman named Jan Scheuermann feeding herself a bar of chocolate using a robotic arm that she manipulated by means of a brain implant.

Yet darpa’s work to repair damaged bodies was merely a marker on a road to somewhere else. The agency has always had a larger mission, and in a 2015 presentation, one program manager—a Silicon Valley recruit—described that mission: to “free the mind from the limitations of even healthy bodies.” What the agency learns from healing makes way for enhancement. The mission is to make human beings something other than what we are, with powers beyond the ones we’re born with and beyond the ones we can organically attain.

The internal workings of darpa are complicated. The goals and values of its research shift and evolve in the manner of a strange, half-conscious shell game. The line between healing and enhancement blurs. And no one should lose sight of the fact that D is the first letter in darpa’s name. A year and a half after the video of Jan Scheuermann feeding herself chocolate was shown on television, darpa made another video of her, in which her brain-computer interface was connected to an F-35 flight simulator, and she was flying the airplane. darpa later disclosed this at a conference called Future of War.

Geoff Ling’s efforts have been carried on by Justin Sanchez. In 2016, Sanchez appeared at darpa’s “Demo Day” with a man named Johnny Matheny, whom agency officials describe as the first “osseointegrated” upper-limb amputee—the first man with a prosthetic arm attached directly to bone. Matheny demonstrated what was, at the time, darpa’s most advanced prosthetic arm. He told the attendees, “I can sit here and curl a 45-pound dumbbell all day long, till the battery runs dead.” The next day, Gizmodo ran this headline above its report from the event: “darpa’s Mind-Controlled Arm Will Make You Wish You Were a Cyborg.”

Since then, darpa’s work in neurotechnology has avowedly widened in scope, to embrace “the broader aspects of life,” Sanchez told me, “beyond the person in the hospital who is using it to heal.” The logical progression of all this research is the creation of human beings who are ever more perfect, by certain technological standards. New and improved soldiers are necessary and desirable for darpa, but they are just the window-display version of the life that lies ahead.

IV. “Over the Horizon”

Consider memory, Sanchez told me: “Everybody thinks about what it would be like to give memory a boost by 20, 30, 40 percent—pick your favorite number—and how that would be transformative.” He spoke of memory enhancement through neural interface as an alternative form of education. “School in its most fundamental form is a technology that we have developed as a society to help our brains to do more,” he said. “In a different way, neurotechnology uses other tools and techniques to help our brains be the best that they can be.” One technique was described in a 2013 paper, a study involving researchers at Wake Forest University, the University of Southern California, and the University of Kentucky. Researchers performed surgery on 11 rats. Into each rat’s brain, an electronic array—featuring 16 stainless-steel wires—was implanted. After the rats recovered from surgery, they were separated into two groups, and they spent a period of weeks getting educated, though one group was educated more than the other.

The less educated group learned a simple task, involving how to procure a droplet of water. The more educated group learned a complex version of that same task—to procure the water, these rats had to persistently poke levers with their nose despite confounding delays in the delivery of the water droplet. When the more educated group of rats attained mastery of this task, the researchers exported the neural-firing patterns recorded in the rats’ brains—the memory of how to perform the complex task—to a computer.

“What we did then was we took those signals and we gave it to an animal that was stupid,” Geoff Ling said at a darpa event in 2015—meaning that researchers took the neural-firing patterns encoding the memory of how to perform the more complex task, recorded from the brains of the more educated rats, and transferred those patterns into the brains of the less educated rats—“and that stupid animal got it. They were able to execute that full thing.” Ling summarized: “For this rat, we reduced the learning period from eight weeks down to seconds.”

“They could inject memory using the precise neural codes for certain skills,” Sanchez told me. He believes that the Wake Forest experiment amounts to a foundational step toward “memory prosthesis.” This is the stuff of The Matrix. Though many researchers question the findings—cautioning that, really, it can’t be this simple—Sanchez is confident: “If I know the neural codes in one individual, could I give that neural code to another person? I think you could.” Under Sanchez, darpa has funded human experiments at Wake Forest, the University of Southern California, and the University of Pennsylvania, using similar mechanisms in analogous parts of the brain. These experiments did not transfer memory from one person to another, but instead gave individuals a memory “boost.” Implanted electrodes recorded neuronal activity associated with recognizing patterns (at Wake Forest and USC) and memorizing word lists (at Penn) in certain brain circuits. Then electrodes fed back those recordings of neuronal activity into the same circuits as a form of reinforcement. The result, in both cases, was significantly improved memory recall.

Doug Weber, a neural engineer at the University of Pittsburgh who recently finished a four-year term as a darpa program manager, working with Sanchez, is a memory-transfer skeptic. Born in Wisconsin, he has the demeanor of a sitcom dad: not too polished, not too rumpled. “I don’t believe in the infinite limits of technology evolution,” he told me. “I do believe there are going to be some technical challenges which are impossible to achieve.” For instance, when scientists put electrodes in the brain, those devices eventually fail—after a few months or a few years. The most intractable problem is blood leakage. When foreign material is put into the brain, Weber said, “you undergo this process of wounding, bleeding, healing, wounding, bleeding, healing, and whenever blood leaks into the brain compartment, the activity in the cells goes way down, so they become sick, essentially.” More effectively than any fortress, the brain rejects invasion.

Even if the interface problems that limit us now didn’t exist, Weber went on to say, he still would not believe that neuroscientists could enable the memory-prosthesis scenario. Some people like to think about the brain as if it were a computer, Weber explained, “where information goes from A to B to C, like everything is very modular. And certainly there is clear modular organization in the brain. But it’s not nearly as sharp as it is in a computer. All information is everywhere all the time, right? It’s so widely distributed that achieving that level of integration with the brain is far out of reach right now.”

Peripheral nerves, by contrast, conduct signals in a more modular fashion. The biggest, longest peripheral nerve is the vagus. It connects the brain with the heart, the lungs, the digestive tract, and more. Neuroscientists understand the brain’s relationship with the vagus nerve more clearly than they understand the intricacies of memory formation and recall among neurons within the brain. Weber believes that it may be possible to stimulate the vagus nerve in ways that enhance the process of learning—not by transferring experiential memories, but by sharpening the facility for certain skills.

To test this hypothesis, Weber directed the creation of a new program in the Biological Technologies Office, called Targeted Neuroplasticity Training (TNT). Teams of researchers at seven universities are investigating whether vagal-nerve stimulation can enhance learning in three areas: marksmanship, surveillance and reconnaissance, and language. The team at Arizona State has an ethicist on staff whose job, according to Weber, “is to be looking over the horizon to anticipate potential challenges and conflicts that may arise” regarding the ethical dimensions of the program’s technology, “before we let the genie out of the bottle.” At a TNT kickoff meeting, the research teams spent 90 minutes discussing the ethical questions involved in their work—the start of a fraught conversation that will broaden to include many others, and last for a very long time.

DARPA officials refer to the potential consequences of neurotechnology by invoking the acronym ELSI, a term of art devised for the Human Genome Project. It stands for "ethical, legal, social implications." The man who led the discussion on ethics among the research teams was Steven Hyman, a neuroscientist and neuroethicist at MIT and Harvard's Broad Institute. Hyman is also a former head of the National Institute of Mental Health. When I spoke with him about his work on DARPA programs, he noted that one issue needing attention is "cross talk." A man-machine interface that does not just "read" someone's brain but also "writes into" someone's brain would almost certainly create "cross talk between those circuits which we are targeting and the circuits which are engaged in what we might call social and moral emotions," he said. It is impossible to predict the effects of such cross talk on "the conduct of war" (the example he gave), much less, of course, on ordinary life.

Weber and a DARPA spokesperson related some of the questions the researchers asked in their ethics discussion: Who will decide how this technology gets used? Would a superior be able to force subordinates to use it? Will genetic tests be able to determine how responsive someone would be to targeted neuroplasticity training? Would such tests be voluntary or mandatory? Could the results of such tests lead to discrimination in school admissions or employment? What if the technology affects moral or emotional cognition—our ability to tell right from wrong or to control our own behavior?

Recalling the ethics discussion, Weber told me, “The main thing I remember is that we ran out of time.”

V. “You Can Weaponize Anything”

In The Pentagon’s Brain, Annie Jacobsen suggested that DARPA’s neurotechnology research, including upper-limb prosthetics and the brain-machine interface, is not what it seems: “It is likely that DARPA’s primary goal in advancing prosthetics is to give robots, not men, better arms and hands.” Geoff Ling rejected the gist of her conclusion when I summarized it for him (he hadn’t read the book). He told me, “When we talk about stuff like this, and people are looking for nefarious things, I always say to them, ‘Do you honestly believe that the military that your grandfather served in, your uncle served in, has changed into being Nazis or the Russian army?’ Everything we did in the Revolutionizing Prosthetics program—everything we did—is published. If we were really building an autonomous-weapons system, why would we publish it in the open literature for our adversaries to read? We hid nothing. We hid not a thing. And you know what? That meant that we didn’t just do it for America. We did it for the world.”

I started to say that publishing this research would not prevent its being misused. But the terms use and misuse overlook a bigger issue at the core of any meaningful neurotechnology-ethics discussion. Will an enhanced human being—a human being possessing a neural interface with a computer—still be human, as people have experienced humanity through all of time? Or will such a person be a different sort of creature?

The U.S. government has put limits on DARPA’s power to experiment with enhancing human capabilities. Ling says colleagues told him of a “directive”: “Congress was very specific,” he said. “They don’t want us to build a superperson.” This can’t be the announced goal, Congress seems to be saying, but if we get there by accident—well, that’s another story. Ling’s imagination remains at large. He told me, “If I gave you a third eye, and the eye can see in the ultraviolet, that would be incorporated into everything that you do. If I gave you a third ear that could hear at a very high frequency, like a bat or like a snake, then you would incorporate all those senses into your experience and you would use that to your advantage. If you can see at night, you’re better than the person who can’t see at night.”

Enhancing the senses to gain superior advantage—this language suggests weaponry. Such capacities could certainly have military applications, Ling acknowledged—“You can weaponize anything, right?”—before he dismissed the idea and returned to the party line: “No, actually, this has to do with increasing a human’s capability” in a way that he compared to military training and civilian education, and justified in economic terms.

“Let’s say I gave you a third arm,” and then a fourth arm—so, two additional hands, he said. “You would be more capable; you would do more things, right?” And if you could control four hands as seamlessly as you’re controlling your current two hands, he continued, “you would actually be doing double the amount of work that you would normally do. It’s as simple as that. You’re increasing your productivity to do whatever you want to do.” I started to picture his vision—working with four arms, four hands—and asked, “Where does it end?”

“It won’t ever end,” Ling said. “I mean, it will constantly get better and better—” His cellphone rang. He took the call, then resumed where he had left off: “What DARPA does is we provide a fundamental tool so that other people can take those tools and do great things with them that we’re not even thinking about.”

Judging by what he said next, however, the number of things that DARPA is thinking about far exceeds what it typically talks about in public. “If a brain can control a robot that looks like a hand,” Ling said, “why can’t it control a robot that looks like a snake? Why can’t that brain control a robot that looks like a big mass of Jell-O, able to get around corners and up and down and through things? I mean, somebody will find an application for that. They couldn’t do it now, because they can’t become that glob, right? But in my world, with their brain now having a direct interface with that glob, that glob is the embodiment of them. So now they’re basically the glob, and they can go do everything a glob can do.”

VI. Gold Rush

DARPA’s developing capabilities still hover at or near a proof-of-concept stage. But that’s close enough to have drawn investment from some of the world’s richest corporations. In 1990, during the administration of President George H. W. Bush, DARPA Director Craig I. Fields lost his job because, according to contemporary news accounts, he intentionally fostered business development with some Silicon Valley companies, and White House officials deemed that inappropriate. Since the administration of the second President Bush, however, such sensitivities have faded.

Over time, DARPA has become something of a farm team for Silicon Valley. Regina Dugan, who was appointed DARPA director by President Barack Obama, went on to head Google’s Advanced Technology and Projects group, and other former DARPA officials went to work for her there. She then led R&D for the analogous group at Facebook, called Building 8. (She has since left Facebook.)

DARPA’s neurotechnology research has been affected in recent years by corporate poaching. Doug Weber told me that some DARPA researchers have been “scooped up” by companies including Verily, the life-sciences division of Alphabet (the parent company of Google), which, in partnership with the British pharmaceutical conglomerate GlaxoSmithKline, created a company called Galvani Bioelectronics, to bring neuro-modulation devices to market. Galvani calls its business “bioelectric medicine,” which conveys an aura of warmth and trustworthiness. Ted Berger, a University of Southern California biomedical engineer who collaborated with the Wake Forest researchers on their studies of memory transfer in rats, worked as the chief science officer at the neurotechnology company Kernel, which plans to build “advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition.” Elon Musk has courted DARPA researchers to join his company Neuralink, which is said to be developing an interface known as “neural lace.” Facebook’s Building 8 is working on a neural interface too. In 2017, Regina Dugan said that 60 engineers were at work on a system with the goal of allowing users to type 100 words a minute “directly from your brain.” Geoff Ling is on Building 8’s advisory board.

Talking with Justin Sanchez, I speculated that if he realizes his ambitions, he could change daily life in even more fundamental and lasting ways than Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey have. Sanchez blushes easily, and he breaks eye contact when he is uncomfortable, but he did not look away when he heard his name mentioned in such company. Remembering a remark that he had once made about his hope for neurotechnology’s wide adoption, but with “appropriate checks to make sure that it’s done in the right way,” I asked him to talk about what the right way might look like. Did any member of Congress strike him as having good ideas about legal or regulatory structures that might shape an emerging neural-interface industry? He demurred (“DARPA’s mission isn’t to define or even direct those things”) and suggested that, in reality, market forces would do more to shape the evolution of neurotechnology than laws or regulations or deliberate policy choices. What will happen, he said, is that scientists at universities will sell their discoveries or create start-ups. The marketplace will take it from there: “As they develop their companies, and as they develop their products, they’re going to be subject to convincing people that whatever they’re developing makes sense, that it helps people to be a better version of themselves. And that process—that day-to-day development—will ultimately guide where these technologies go. I mean, I think that’s the frank reality of how it ultimately will unfold.”

He seemed entirely untroubled by what may be the most troubling aspect of DARPA’s work: not that it discovers what it discovers, but that the world has, so far, always been ready to buy it.


This article appears in the November 2018 print edition with the headline “The Pentagon Wants to Weaponize the Brain. What Could Go Wrong?”

https://www.theatlantic.com/magazine/archive/2018/11/the-pentagon-wants-to-weaponize-the-brain-what-could-go-wrong/570841/

Google needs to apologize for violating the trust of its users once again

Google senior vice president of product Sundar Pichai delivers the keynote address during the 2015 Google I/O conference on May 28, 2015, in San Francisco. (Photo by Justin Sullivan/Getty Images)

  • An Associated Press investigation recently discovered that Google still collects its users' location data even if they have their Location History turned off.
  • After the report was published, Google quietly updated its help page to describe how location settings work.
  • Previously, the page said "with Location History off, the places you go are no longer stored."
  • Now, the page says, "This setting does not affect other location services on your device," adding that "some location data may be saved as part of your activity on other services, like Search and Maps."
  • The quiet changing of false information is a major violation of users' trust.
  • Google needs to do better.

Google this week acknowledged that it quietly tracks its users' locations, even if those people turn off their Location History — a clarification that came in the wake of an Associated Press investigation.

It's a major violation of users' trust.

And yet, nothing is going to happen as a result of this episode.

It’s happened before

Google has a history of bending the rules:

  • In 2010, Google’s Street View cars were caught eavesdropping on people’s Wi-Fi connections.
  • In 2011, Google agreed to forfeit $500 million after a criminal investigation by the Justice Department found that Google illegally allowed advertisements from online Canadian pharmacies to sell their products in the US.
  • In 2012, Google circumvented the no-cookies policy on Apple’s Safari web browser and paid a $22.5 million fine to the Federal Trade Commission as a result.

Ultimately, Google came out of all of these incidents just fine. It paid some money here and there, and sat in a few courtrooms, but nothing really happened to the company’s bottom line. People continued using Google’s services.

Other companies have done it too

Remember Cambridge Analytica?

Five months ago, in March, a 28-year-old named Christopher Wylie blew the whistle on his former employer, the data-analytics company Cambridge Analytica, where he had served as director of research.

It was later revealed that Cambridge Analytica had collected the data of over 87 million Facebook users in an attempt to influence the 2016 presidential election in favor of the Republican candidate, Donald Trump.

One month later, Facebook CEO Mark Zuckerberg was summoned in front of Congress to answer questions related to the Cambridge Analytica scandal over a two-day span.

Facebook CEO Mark Zuckerberg takes a drink of water while testifying before a joint hearing of the Commerce and Judiciary Committees on Capitol Hill in Washington on April 10, 2018, about the use of Facebook data to target American voters in the 2016 election. (AP Photo/Andrew Harnik)

Many users felt like their trust was violated. A hashtag movement called "#DeleteFacebook" was born.

And yet, nothing has really changed at Facebook since that scandal, which similarly involved the improper collection of user data, and the violation of users' trust.

Facebook seems to be doing just fine. During its Q2 earnings report in late July, Facebook reported over $13 billion in revenue — a 42% jump year-over-year — and an 11% increase in both daily and monthly active users.

In short, Facebook is not going anywhere. And neither is Google.

Too big — and too good — to fail

Just like Facebook has no equal among the hundreds of other social networks out there, the same goes for Google and competing search engines.

According to StatCounter, Google has a whopping 90% share of the global search engine market.

The next biggest search engine in the world is Microsoft’s Bing, which has a paltry 3% market share.

In other words, a cataclysmic event would have to occur for people to switch search engines. Or, another search engine would have to come along and completely unseat Google.

But that’s probably not going to happen.


For almost 20 years now, Google has dominated the search engine game. Its other services have become similarly prevalent: Gmail and Google Docs have become integral parts of people's personal and work lives. Of course, there are similar mail and productivity services out there, but using Google is far more convenient, since most people use more than one Google product, and having all of your applications talk to one another and share information saves real effort.

This isn’t meant to cry foul: Google is one of the top software makers in the world, but it has earned that status by constantly improving and iterating on its products, and even itself, over the past two decades. But one does wonder what event, if any, could possibly make people quit a service as big and convenient and powerful as Google once and for all.

The fact is: That probably won't happen. People likely won't quit Google's services unless there's some major degradation of quality. But Google, as a leader in Silicon Valley, should strive to do better for its customers. Intentional or not, misleading customers about location data is a bad thing. Google failed its customers: It let users think they had more control than they did, and it corrected its language about location data only after a third-party investigation. There was no public acknowledgement of an error, and no mea culpa.

Google owes its users a true apology. Quietly updating an online help page isn’t good enough.

 

http://uk.businessinsider.com/google-location-data-violates-user-trust-nothing-will-happen-2018-8?r=US&IR=T

The WhatsApp cofounder is "resting and vesting": showing up to Facebook and barely working to collect a $450 million payday

The WhatsApp cofounder Jan Koum. (Reuters)

  • Back in April, the WhatsApp cofounder Jan Koum announced plans to leave Facebook.
  • But he’s still showing up to the office once a month so he can continue to collect $450 million in Facebook stock he’s contractually due from when Facebook bought his company.
  • It's a high-dollar example of "rest and vest," in which big tech companies pay senior employees who don't do much work.
  • Koum has already sold over $7 billion in Facebook stock.

The WhatsApp cofounder Jan Koum said in April that he planned to leave Facebook, which bought his company for $19 billion in 2014. He’s already sold $7.1 billion worth of Facebook shares.

But he’s still showing up to the office, The Wall Street Journal reports, to collect one last payday: $450 million in stock.

Koum is resting and vesting, in Silicon Valley lingo: a state in which wealthy entrepreneurs and engineers with one foot out the door at big tech companies are allowed to remain officially employed until they can collect stock and options in quarterly or annual increments.

Usually, stock awards after a merger are distributed on a four-year vesting schedule: if you stay all four years, you get your entire stock grant. Koum's last vesting date is in November. He showed up at Facebook's offices in mid-July, fulfilling a requirement of his employment contract, according to The Wall Street Journal.
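The mechanics of such a schedule are simple to model. As a rough sketch (the dates and share counts below are invented for illustration, not the actual terms of Koum's grant): shares vest in dated tranches, and an employee collects only the tranches whose dates fall within their employment.

```python
# Toy model of an incremental vesting schedule. The numbers here are
# hypothetical, not Koum's actual contract terms.
from datetime import date

def vested_shares(schedule, last_day_employed):
    """Sum the tranches whose vesting date falls on or before the
    employee's last day of employment."""
    return sum(shares for when, shares in schedule if when <= last_day_employed)

# A 20-million-share grant split into four equal annual tranches.
schedule = [
    (date(2015, 11, 1), 5_000_000),
    (date(2016, 11, 1), 5_000_000),
    (date(2017, 11, 1), 5_000_000),
    (date(2018, 11, 1), 5_000_000),
]

# Leaving before the final date forfeits the last tranche...
print(vested_shares(schedule, date(2018, 8, 1)))   # 15000000
# ...which is why showing up until November is worth the trouble.
print(vested_shares(schedule, date(2018, 11, 1)))  # 20000000
```

With millions of shares riding on each remaining date, even a once-a-month office appearance is rational behavior.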

"Resting and vesting" is an open secret in Silicon Valley, Business Insider has reported. At some companies, the employees are called "coasters." The HBO show "Silicon Valley" even spoofed it in an episode in which engineers hang out on a roof and don't do any work.

"I've actually had a number of people, including today at Google X … send me pictures of themselves on a roof, kicking back doing nothing, with the hashtag 'unassigned' or 'rest and vest.' It's something that really happens, and apparently, somewhat often," Josh Brener, the actor who plays the lucky character who got to rest and vest in HBO's "Silicon Valley," told Business Insider last year.

From Business Insider’s report on the phenomenon:

"Facebook, for instance, has a fairly hush bonus program called 'discretionary equity,' a former Facebook engineer who received it said.

"DE is when the company hands an engineer a massive, extra chunk of restricted stock units, worth tens to hundreds of thousands of dollars. It's a thank-you for a job well done. It also helps keep the person from jumping ship because DE vests over time. These are bonus grants that are signed by top executives, sometimes even CEO Mark Zuckerberg."

Koum’s payday isn’t related to discretionary equity; it’s instead a result of the over 20 million restricted shares of Facebook he received when he sold WhatsApp. He has one more vesting day in August and one in November, according to filings with the Securities and Exchange Commission.

Koum reportedly decided to leave Facebook in the middle of a spat over how to integrate advertising into WhatsApp. A WhatsApp representative declined to comment, but The Journal reports that Koum is still employed at the social-networking giant.

When Koum left, he wrote that he was taking time off to collect "rare air-cooled Porsches" and play ultimate Frisbee.

How many Porsches can one buy with $450 million?

 

http://uk.businessinsider.com/whatsapp-founder-jan-koum-rest-and-vest-for-450-million-facebook-stock-2018-8?r=US&IR=T

Google’s DeepMind AI can accurately detect 50 types of eye disease just by looking at scans

DeepMind cofounder Mustafa Suleyman. (DeepMind)
  • Google's artificial intelligence company DeepMind has published "really significant" research showing its algorithm can identify around 50 eye diseases by looking at retinal eye scans.
  • DeepMind said its AI was as good as expert clinicians, and that it could help prevent people from losing their sight.
  • DeepMind has been criticised for its practices around medical data, but cofounder Mustafa Suleyman said all the information in this research project was anonymised.
  • The company plans to hand the technology over for free to NHS hospitals for five years, provided it passes the next phase of research.

Google’s artificial intelligence company, DeepMind, has developed an AI which can successfully detect more than 50 types of eye disease just by looking at 3D retinal scans.

DeepMind published on Monday the results of joint research with Moorfields Eye Hospital, a renowned centre for treating eye conditions in London, in Nature Medicine.

The company said its AI was as accurate as expert clinicians when it came to detecting diseases, such as diabetic eye disease and macular degeneration. It could also recommend the best course of action for patients and suggest which needed urgent care.

A technician examines an OCT scan. (DeepMind)

What is especially significant about the research, according to DeepMind cofounder Mustafa Suleyman, is that the AI has a level of "explainability" that could boost doctors' trust in its recommendations.

"It's possible for the clinician to interpret what the algorithm is thinking," he told Business Insider. "[They can] look at the underlying segmentation."

In other words, the AI looks less like a mysterious black box that's spitting out results. It labels the pixels on the eye scan that correspond to signs of a particular disease, Suleyman explained, and can calculate its confidence in its own findings with a percentage score. "That's really significant," he said.
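The basic idea can be sketched in a few lines. This is a generic illustration of pixel-wise segmentation with a confidence score, not DeepMind's published system; the class names and probabilities are invented:

```python
import numpy as np

# Toy per-pixel class probabilities for a 2x2 "scan" and three tissue
# classes (0 = healthy, 1 = drusen, 2 = fluid). A real model would
# produce these from the OCT volume; here they are hard-coded.
probs = np.array([
    [[0.9, 0.05, 0.05], [0.2, 0.7, 0.1]],
    [[0.1, 0.1, 0.8],   [0.6, 0.3, 0.1]],
])

labels = probs.argmax(axis=-1)    # per-pixel label map the clinician can inspect
confidence = probs.max(axis=-1)   # how sure the model is, pixel by pixel

# One simple way to turn that into a single percentage for the scan:
overall = float(confidence.mean()) * 100

print(labels)             # [[0 1]
                          #  [2 0]]
print(f"{overall:.0f}%")  # 75%
```

Because the label map and the confidence values are exposed, a clinician can see which regions drove the recommendation rather than trusting an opaque verdict.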

DeepMind's AI analysing an OCT scan. (DeepMind)

Suleyman described the findings as a "research breakthrough" and said the next step was to prove the AI works in a clinical setting. That, he said, would take a number of years. Once DeepMind is in a position to deploy its AI across NHS hospitals in the UK, it will provide the service for free for five years.

Patients are at risk of losing their sight because doctors can’t look at their eye scans in time

British eye specialists have been warning for years that patients are at risk of losing their sight because the NHS is overstretched, and because the UK has an ageing population.

Part of the reason DeepMind and Moorfields took up the research project was because clinicians are "overwhelmed" by the demand for eye scans, Suleyman said.

"If you have a sight-threatening disease, you want treatment as soon as possible," he explained. "And unlike in A&E, where a staff nurse will talk to you and make an evaluation of how serious your condition is, then use that evaluation to decide how quickly you are seen. When an [eye] scan is submitted, there isn't a triage of your scan according to its severity."

A patient having an OCT scan. (DeepMind)

Putting eye scans through the AI could speed the entire process up.

"In the future, I could envisage a person going into their local high street optician, and have an OCT scan done and this algorithm would identify those patients with sight-threatening disease at the very early stage of the condition," said Dr Pearse Keane, consultant ophthalmologist at Moorfields Eye Hospital.

DeepMind's AI was trained on a database of almost 15,000 eye scans, stripped of any identifying information. DeepMind worked with clinicians to label areas of disease, then ran those labelled images through its system. Suleyman said the two-and-a-half-year project required "huge investment" from DeepMind and involved 25 staffers, as well as the researchers from Moorfields.

People are still worried about a Google-linked company having access to medical data

Google acquired DeepMind in 2014 for £400 million ($509 million), and the British AI company is probably most famous for AlphaGo, its algorithm that beat the world champion at the strategy game Go.

While DeepMind has remained UK-based and independent from Google, the relationship has attracted scrutiny. The main question is whether Google, a private US company, should have access to the sensitive medical data required for DeepMind’s health arm.

DeepMind was criticised in 2016 for failing to disclose its access to historical medical data during a project with Royal Free Hospital. Suleyman said the eye scans processed by DeepMind were "completely anonymised."

"You can't identify whose scans it was. We're in quite a different regime, this is very much research, and we're a number of years from being able to deploy in practice," he said.

Suleyman added: "How this has the potential to transform the NHS is very clear. We've been very conscious that this will be a model that's published, and available to others to implement.

"The labelled dataset is available to other researchers. So this is very much an open and collaborative relationship between equals that we've worked hard to foster. I'm proud of that work."

 

https://www.businessinsider.de/google-deepmind-ai-detects-eye-disease-2018-8?r=US&IR=T

Microsoft wants regulation of facial recognition technology to limit ‚abuse‘


Microsoft has helped innovate facial recognition software. Now it’s urging the US government to enact regulation to control the use of the technology.

In a blog post, Microsoft (MSFT) President Brad Smith said new laws are necessary given the technology's "broad societal ramifications and potential for abuse."

He urged lawmakers to form "a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission."

Facial recognition — a computer’s ability to identify or verify people’s faces from a photo or through a camera — has been developing rapidly. Apple (AAPL), Google (GOOG), Amazon and Microsoft are among the big tech companies developing and selling such systems. The technology is being used across a range of industries, from private businesses like hotels and casinos, to social media and law enforcement.

Supporters say facial recognition software improves safety for companies and customers and can help police track down criminals or find missing children. Civil rights groups warn it can infringe on privacy and allow for illegal surveillance and monitoring. There is also room for error, they argue, since the still-emerging technology can result in false identifications.

The accuracy of facial recognition technologies varies, with women and people of color being identified with less accuracy, according to MIT research.
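Disparities like that are found by breaking accuracy out per demographic group instead of reporting one overall number. A minimal sketch of that measurement (the sample data below is invented, not the MIT figures):

```python
# Per-group accuracy breakdown for a classifier's results.
# Each record is (demographic group, was the identification correct?).
# The data here is made up purely to show the calculation.
from collections import defaultdict

results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    totals[group][0] += int(correct)
    totals[group][1] += 1

for group, (correct, total) in totals.items():
    print(f"{group}: {correct / total:.0%}")
# lighter-skinned men: 100%
# darker-skinned women: 25%
```

An overall accuracy figure for this toy data would be 62.5%, which hides the gap entirely; that is why the per-group breakdown matters in audits of these systems.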

„Facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?“ Smith wrote on Friday.

Smith’s call for a regulatory framework to control the technology comes as tech companies face criticism over how they’ve handled and shared customer data, as well as their cooperation with government agencies.

Last month, Microsoft was scrutinized for its working relationship with US Immigration and Customs Enforcement. ICE had been enforcing the Trump administration's "zero tolerance" immigration policy that separated children from their parents when they crossed the US border illegally. The administration has since abandoned the policy.


Microsoft wrote a blog post in January about ICE's use of its cloud technology Azure, saying it could help it "accelerate facial recognition and identification."

After questions arose about whether Microsoft's technology had been used by ICE agents to carry out the controversial border separations, the company released a statement calling the policy "cruel" and "abusive."

In his post, Smith reiterated Microsoft’s opposition to the policy and said he had confirmed its contract with ICE does not include facial recognition technology.

Amazon (AMZN) has also come under fire from its own shareholders and civil rights groups over local police forces using its face-identifying software Rekognition, which can identify up to 100 people in a single photo.

Some Amazon shareholders coauthored a letter pressuring Amazon to stop selling the technology to the government, saying it was aiding in mass surveillance and posed a threat to privacy rights.


And Facebook (FB) is embroiled in a class-action lawsuit that alleges the social media giant used facial recognition on photos without user permission. Its facial recognition tool scans your photos and suggests you tag friends.

Neither Amazon nor Facebook immediately responded to a request for comment about Smith’s call for new regulations on face ID technology.

Smith said companies have a responsibility to police their own innovations, control how they are deployed and ensure that they are used in "a manner consistent with broadly held societal values."

"It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike," he said.

https://money.cnn.com/2018/07/14/technology/microsoft-facial-recognition-letter-government/index.html

June 2018 Tech News & Trends to Watch

1. Companies Worldwide Strive for GDPR Compliance

By now, everyone with an email address has seen a slew of emails announcing privacy policy updates. You have Europe’s GDPR legislation to thank for your overcrowded inbox. GDPR creates rules around how much data companies are allowed to collect, how they’re able to use that data, and how clear they have to be with consumers about it all.

Companies around the world are scrambling to get their business and its practices into compliance – a significant task for many of them. While technically the deadline to get everything in order passed on May 25, for many companies the process will continue well into June and possibly beyond. Some companies are even shutting down in Europe, either for good or for as long as it takes them to get into compliance.

Even with the deadline behind us, the GDPR continues to be a top story for the tech world and may remain so for some time to come.

 

2. Amazon Provides Facial Recognition Tech to Law Enforcement

Amazon can’t seem to go a whole month without showing up in a tech news roundup. This month it’s for a controversial story: selling use of Rekognition, their facial recognition software, to law enforcement agencies on the cheap.

Civil rights groups have called for the company to stop allowing law enforcement access to the tech out of concerns that increased government surveillance can pose a threat to vulnerable communities in the country. In spite of the public criticism, Amazon hasn’t backed off on providing the tech to authorities, at least as of this time.

 

3. Apple Looks Into Self-Driving Employee Shuttles

Of the many problems facing our world, the frustrating work commute is one that many of the brightest minds in tech deal with just like the rest of us. Which makes it a problem the biggest tech companies have a strong incentive to try to solve.

Apple is one of many companies that’s invested in developing self-driving cars as a possible solution, but while that goal is still (probably) years away, they’ve narrowed their focus to teaming up with VW to create self-driving shuttles just for their employees.  Even that project is moving slower than the company had hoped, but they’re aiming to have some shuttles ready by the end of the year.

 

4. Court Weighs in on President’s Tendency to Block Critics on Twitter

Three years ago no one would have imagined that Twitter would be a president’s go-to source for making announcements, but today it’s used to that effect more frequently than official press conferences or briefings.

In a court battle that may sound surreal to many of us, a judge just found that the president may not legally block other users on Twitter. The court held that blocking users from a public forum like Twitter because of their views violates their First Amendment rights. The judgment does still allow the president and other public officials to mute users they don’t agree with, though.


5. YouTube Launches Music Streaming Service

YouTube joined the ranks of Spotify, Pandora, and Amazon this past month with their own streaming music service. Consumers can use a free version of the service that includes ads, or can pay $9.99 for the ad-free version.


With so many similar services already on the market, people weren’t exactly clamoring for another music streaming option. But since YouTube is likely to remain the reigning source for videos, it doesn’t necessarily need to unseat Spotify to still be okay. And with access to Google’s extensive user data, it may be able to provide more useful recommendations than its main competitors in the space, which is one way the service could differentiate itself.


6. Facebook Institutes Political Ad Rules

Facebook hasn’t yet left behind the controversies of the last election. The company is still working to proactively respond to criticism of its role in the spread of political propaganda many believe influenced election results. One of the solutions they’re trying is a new set of rules for any political ads run on the platform.

Any campaign that intends to run Facebook ads is now required to verify its identity by entering a code from a card Facebook mails to its physical address. While Facebook has been promoting these new rules to politicians active on the platform for a few weeks, some felt blindsided when they realized – right before their primaries, no less – that they could no longer place ads without waiting 12 to 15 days for a verification code to arrive in the mail. Politicians in this position blame the company for making a change that could affect their chances in the upcoming election.

Even in their efforts to avoid swaying elections, Facebook has found themselves criticized for doing just that. They’re probably feeling at this point like they just can’t win.


7. Another Big Month for Tech IPOs

This year has seen one tech IPO after another and this month is no different. Chinese smartphone company Xiaomi has a particularly large IPO in the works. The company seeks to join the Hong Kong stock exchange on June 7 with an initial public offering that experts anticipate could reach $10 billion.

The online lending platform GreenSky started trading on the New York Stock Exchange on May 23 and sold 38 million shares in its first day, 4 million more than expected. This month continues 2018’s trend of tech companies going public, largely to great success.


8. StumbleUpon Shuts Down

In the internet’s ongoing evolution, there will always be tech companies that win and those that fall by the wayside. StumbleUpon, a content discovery platform that had its heyday in the early aughts, is officially shutting down on June 30.

Since its 2002 launch, the service has helped over 40 million users “stumble upon” 60 billion new websites and pieces of content. The company behind StumbleUpon plans to replace it with Mix, a new platform that serves a similar purpose and may prove more useful to former StumbleUpon users.


9. Uber and Lyft Invest in Driver Benefits

In spite of their ongoing success, the popular ridesharing platforms Uber and Lyft have faced their share of criticism since they came onto the scene. One of the common complaints critics have made is that the companies don’t provide proper benefits to their drivers. And in fact, the companies have fought to keep drivers classified legally as contractors so they’re off the hook for covering the cost of employee taxes and benefits.

Recently both companies have taken steps to make driving for them a little more attractive. Uber has begun offering Partner Protection to its drivers in Europe, which includes health insurance, sick pay, and parental leave – so far there’s nothing similar in the U.S., though. For its part, Lyft is investing $100 million in building driver support centers where its drivers can stop to get discounted car maintenance, tax help, and in-person customer support from Lyft staff. It’s not the same as getting full employee benefits (in the U.S. at least), but it’s something.

Source: https://www.hostgator.com/blog/june-tech-trends-to-watch/