Presidential hopefuls are arguing about it. Officials like FBI Director James Comey have publicly criticized tech companies for their encryption practices. Facebook-owned WhatsApp was temporarily banned in Brazil last week for failing to hand over user info it claims it didn’t have.
Almost all messaging companies encrypt messages en route between a user’s device and company servers, where the company could then read them if needed. The problem arises, though, when messages are end-to-end encrypted, meaning they are readable only on the sender’s and receiver’s devices. That means the messaging companies can’t read them. Companies like Apple offer this level of security to satisfy users looking for total privacy. Law enforcement officials hate it because it puts those messages beyond their reach, even with a warrant.
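The distinction can be sketched in a few lines of toy code. This is not real cryptography (real messengers use vetted protocols such as the Signal Protocol); it only shows who holds the key in each model:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: hash key||counter into a keystream, XOR with data.
    # Symmetric, so the same call both encrypts and decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

msg = b"meet at noon"

# Transport encryption: the SERVER shares the key, so it can decrypt.
server_key = secrets.token_bytes(32)
wire = keystream_xor(server_key, msg)
assert keystream_xor(server_key, wire) == msg

# End-to-end encryption: only the two devices share the key; the server
# merely relays ciphertext it cannot decrypt.
device_key = secrets.token_bytes(32)
e2e_wire = keystream_xor(device_key, msg)
assert e2e_wire != msg                         # all the server ever sees
assert keystream_xor(device_key, e2e_wire) == msg
```

In the first case the company can comply with a warrant; in the second, it has nothing readable to hand over.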
Who can read your private messages? We checked in with some of the most popular messaging companies out there, and here’s what we found.
These Companies Can’t Always Read Your Messages
Apple: Apple’s iMessages are end-to-end encrypted, which means they can only be read on users’ phones and the company can’t read them. There’s a caveat here, though. If you back up your messages in iCloud, then Apple can read them and could be forced to hand them over to authorities if provided with an appropriate warrant.
WhatsApp*: WhatsApp gets an asterisk here because while it’s almost done rolling out end-to-end encryption to all of its users, it’s not officially there yet. Either way, the company claims that it does not store messages on its servers, which means it can’t hand over messages if approached by law enforcement officials. (This is what got WhatsApp into trouble in Brazil.)
Telegram**: Telegram messages can be totally private if you want them to be. The company offers end-to-end encryption if users turn on the app’s “secret chat” feature and thus can’t read those user messages. Regular messages are stored on Telegram’s servers. The app benefited immensely from Brazil’s temporary WhatsApp ban. Telegram claims that it added 5.7 million new users on the day WhatsApp was blocked.
Signal: Developed by Open Whisper Systems, Signal is also end-to-end encrypted. The company explicitly states on its website that it “does not have access to the contents of any messages sent by Signal users.”
Line*: Line offers end-to-end encryption, but only if both the sender and recipient of a message turn on a feature called “Letter Sealing.” This will encrypt your messages so the company can’t read them, but regular messages without the feature are not end-to-end encrypted and Line may have to hand them over if required by Japanese law.
These Companies Can Read Your Messages
Kik*: Kik also gets an asterisk here. Messages are not end-to-end encrypted, so the company can theoretically read them. But Kik claims it deletes user messages from its servers as soon as they’re delivered to a user’s device. That means it wouldn’t be able to share your messages with authorities if requested, and the length of time during which it could read your messages is extremely short.
Facebook (Messenger and Instagram): Both Facebook Messenger and Facebook-owned Instagram encrypt messages only when they are en route between a user’s device and company servers, where they are stored. This means Facebook might have to hand over private messages if required by law.
Google: Messages sent via Google Hangouts are also encrypted en route and even on the company’s servers, but Google can still read them if needed. Encrypting the messages while on Google servers is intended to keep others from jacking in and reading them, but Google itself has the encryption key. This means Google might have to hand over private messages if required by law.
Snapchat: As with Google, Snapchat messages are encrypted while at rest on Snapchat’s servers (though the company holds the encryption key). Snaps are deleted from the servers as soon as they’re opened by the intended recipients, and Snapchat claims these delivered messages “typically cannot be retrieved from Snapchat’s servers by anyone, for any reason.” But unopened Snaps are kept on the servers for 30 days before being deleted. That means Snapchat might have to hand over unopened, private messages if required by law.
Twitter: Direct messages on Twitter are not end-to-end encrypted. The company might have to hand over private messages if required by law.
Skype: Microsoft-owned Skype does not offer end-to-end encryption for instant messages. They are stored on Skype’s servers for a “limited time,” which means Skype might have to hand over private messages if required by law.
**The Telegram section was updated to include the distinction that end-to-end encryption is only available for the app’s “secret chats.”
Tesla CEO Elon Musk has made a bold prediction: Tesla Motors will have a self-driving car within two years.
“I think we have all the pieces,” Musk told Fortune, “and it’s just about refining those pieces, putting them in place, and making sure they work across a huge number of environments — and then we’re done. It’s a much easier problem than people think it is.”
Although Musk’s comments to Fortune came Monday, The Street pegged a rise in Tesla’s shares to the comments on Tuesday. The ambitious timeframe appeared to be offering support to the stock again today, with shares trading up $1.47, or 0.64 percent, at $231.42 around 7:18 a.m. PST.
This is the most aggressive timeline Musk has mentioned. While Musk claims the problem is easier than people think it is, he doesn’t think the tech is so accessible that any hacker could create a self-driving car. Musk took the opportunity to call out hacker George Hotz, who claimed via a Bloomberg article last week that he had developed self-driving car technology that could compete with Tesla’s. Musk said he wasn’t buying it.
“But it’s not like George Hotz, a one-guy-and-three-months problem,” Musk said to Fortune. “You know, it’s more like, thousands of people for two years.”
The company went so far as to post a statement last week about Hotz’s achievement.
“We think it is extremely unlikely that a single person or even a small company that lacks extensive engineering validation capability will be able to produce an autonomous driving system that can be deployed to production vehicles,” the company stated. “It may work as a limited demo on a known stretch of road — Tesla had such a system two years ago — but then requires enormous resources to debug over millions of miles of widely differing roads.”
While Tesla is unconcerned about Hotz, the company’s new timeline may have other autonomous car developers hitting the accelerator. Tech companies like Google and Apple, in addition to automakers such as Volvo and General Motors, are all competing to be among the first to offer some form of self-driving tech. Many consider the early 2020s a realistic timeframe for the public to start using self-driving cars.
Just yesterday, it was reported that Google and Ford will enter into a joint venture to build self-driving vehicles with Google’s technology, according to Yahoo Autos, citing sources familiar with the plans. The official announcement is expected to come during the Consumer Electronics Show in January, but there is no manufacturing timeline.
But even if Tesla moves quickly on self-driving cars, are consumers ready for them? The Palo Alto-based carmaker’s recent Firmware 7.1 Autopilot update includes restrictions on self-driving features. The update only allows its Autosteer feature to engage when the Model S is traveling below the posted speed limit. The update came shortly after it was reported that drivers were involved in dangerous activities while the Autopilot features were engaged.
Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World
Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI and then share it with anyone who wants it.

At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”
Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, if they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”
It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “This idea has a lot of intuitive appeal,” says Miles Brundage, a PhD student at Arizona State University who studies the human and social dimensions of science and technology. “But it’s not yet an open-and-shut argument. At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”
But in the creation of OpenAI, there are more forces at work than just the possibility of super-human intelligence achieving world domination. In the shorter term, OpenAI can directly benefit Musk and Altman and their companies (Y Combinator backed such unicorns as Airbnb, Dropbox, and Stripe). After luring top AI researchers from companies like Google and setting them up at OpenAI, the two entrepreneurs can access ideas they couldn’t get their hands on before. And in pooling online data from their respective companies as they’ve promised to, they’ll have the means to realize those ideas. Nowadays, one key to advancing AI is engineering talent, and the other is data.
If OpenAI stays true to its mission of giving everyone access to new ideas, it will at least serve as a check on powerful companies like Google and Facebook. With Musk, Altman, and others pumping more than a billion dollars into the venture, OpenAI is showing how the very notion of competition has changed in recent years. Increasingly, companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive.
Yes, such sharing is a way of competing. If a company like Google or Facebook openly shares software or hardware designs, it can accelerate the progress of AI as a whole. And that, ultimately, advances their own interests as well. For one, as the larger community improves these open source technologies, Google and Facebook can push the improvements back into their own businesses. But open sourcing is also a way of recruiting and retaining talent. In the field of deep learning in particular, researchers—many of whom come from academia—are very much attracted to the idea of openly sharing their work, of benefiting as many people as possible. “It is certainly a competitive advantage when it comes to hiring researchers,” Altman tells WIRED. “The people we hired … love the fact that [OpenAI is] open and they can share their work.”
This competition may be more direct than it might seem. We can’t help but think that Google open sourced its AI engine, TensorFlow, because it knew OpenAI was on the way—and that Facebook shared its Big Sur server design as an answer to both Google and OpenAI. Facebook says this was not the case. Google didn’t immediately respond to a request for comment. And Altman declines to speculate. But he does say that Google knew OpenAI was coming. How could it not? The project nabbed Ilya Sutskever, one of its top AI researchers.
That doesn’t diminish the value of Google’s open source project. Whatever the company’s motives, the code is available to everyone to use as they see fit. But it’s worth remembering that, in today’s world, giving away tech is about more than magnanimity. The deep learning community is relatively small, and all of these companies are vying for the talent that can help them take advantage of this extremely powerful technology. They want to share, but they also want to win. They may release some of their secret sauce, but not all. Open source will accelerate the progress of AI, but as this happens, it’s important that no one company or technology becomes too powerful. That’s why OpenAI is such a meaningful idea.
His Own Apollo Program
You can also bet that, on some level, Musk too sees sharing as a way of winning. “As you know, I’ve had some concerns about AI for some time,” he told Backchannel. And certainly, his public fretting over the threat of an AI apocalypse is well known. But he also runs Tesla, which stands to benefit from the sort of technology OpenAI will develop. Like Google, Tesla is building self-driving cars, which can benefit from deep learning in enormous ways.
Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.
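The feed-examples-in, adjust-weights loop at the heart of this can be sketched with a single perceptron, the simplest ancestor of these networks. This is a toy, not a deep net, and the logical-AND “dataset” is purely illustrative:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # One artificial neuron: weighted sum, threshold, and an update rule
    # that nudges the weights toward each mislabeled example.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND as a stand-in "dataset" of labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
assert all(predict(w, b, *x) == y for x, y in data)
```

Deep learning stacks many layers of such units and trains them on vastly more data, but the learn-from-examples principle is the same.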
Yes, Musk could just hire AI researchers to work at Tesla. And he is. But with OpenAI, he can hire better researchers (because it’s open, and because it’s not constrained by any one company’s business model or short-term interest). He can even lure researchers away from Google. Plus, he can create a far more powerful pool of data that can help feed the work of these researchers. Altman says that Y Combinator companies will share their data with OpenAI, and that’s no small thing. Pair their data with Tesla’s, and you start to rival Google—at least in some ways.
“It’s probably better in some dimensions and worse in others,” says Chris Nicholson, the CEO of Skymind, a deep learning startup that was recently accepted into the Y Combinator program. “I’m sure Airbnb has great housing data that Google can’t touch.”
Musk was an early investor in a company called DeepMind—a UK-based outfit that describes itself as “an Apollo program for AI.” And this investment gave him a window into how this remarkable technology was developing. But then Google bought DeepMind, and that window closed. Now, Musk has started his own Apollo program. He once again has the inside track. And OpenAI’s other investors are in a similar position, including Amazon, an Internet giant that trails Google and Facebook in the race to AI.
Pessimistic Optimists
But, no, this doesn’t diminish the value of Musk’s open source project. He may have selfish as well as altruistic motives. But the end result is still enormously beneficial to the wider world of AI. In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do so as well—if it hasn’t already. That’s good for Tesla and all those Y Combinator companies. But it’s also good for everyone who’s interested in using AI.
Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook. And Dr. Evil, wherever he may lurk. He can feed anything OpenAI builds back into his own systems. But the biggest concern isn’t necessarily that Dr. Evil will turn this tech loose on the world. It’s that the tech will turn itself loose on the world. Deep learning won’t stop at self-driving cars and natural language understanding. Top researchers believe that, given the right mix of data and algorithms, its understanding can extend to what humans call common sense. It could even extend to super-human intelligence.
“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”
This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.
How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.
Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”
Today, Porsche announced it’s investing more than a billion dollars to bring the Mission E to production. As in, you’ll be able to buy one. We’re light on details—like the size of the battery, or when we’ll actually see one on the road—but we’ve got the most important numbers. The motor (or motors, Porsche hasn’t said) will produce more than 600 horsepower. The four-seater Mission E will go from 0 to 62 mph in under 3.5 seconds. And it will go 310 miles on a charge.
Porsche, which faces increasingly strict fuel emission standards from US and European authorities, has been working with batteries for a few years now, with top-notch results. It already offers plug-in hybrid versions of the Panamera and Cayenne, and it has successfully raced a 911 hybrid. Then there’s the flat-out amazing gas-electric 918 Spyder supercar and the 919 Hybrid that won at Le Mans this year. So it makes sense for the next step to be a full electric.
Compared to Tesla’s current range-topper, the excellent Model S P90D, the Mission E will offer a bit less power and a slower acceleration time. But Porsche wins on range—the longest-legged Tesla goes roughly 286 miles on a charge. Here, the Germans have a second advantage: They’re working on an 800-volt charger that will power the car up to 80 percent in just 15 minutes, half the time it takes the Tesla.
Porsche plans to build the battery into the floor of the car, like Tesla does, so you can expect a very low center of gravity, great news for performance. But really, the Mission E wins on looks. The Model S and Model X SUV are lovely designs, but the Porsche is simply gorgeous, in the way only a Porsche can be. We’ve only seen the concept version, but hopefully Porsche will be smart enough to change as little as possible on the way to production.
Apple is known for being one of the most challenging and exciting places to work, so it’s not surprising to learn that getting a job there is no easy task.
Like Google and other big tech companies, Apple asks both technical questions based on your past work experience and some mind-boggling puzzles.
We combed through recent posts on Glassdoor to find some of the toughest interview questions candidates have been asked.
Some require solving tricky math problems, while others are simple but vague enough to keep you on your toes.
“If you have 2 eggs, and you want to figure out what’s the highest floor from which you can drop the egg without breaking it, how would you do it? What’s the optimal solution?” — Software Engineer candidate
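For the curious, the standard answer: drop the first egg at shrinking intervals (floor 14, then 14 + 13, then 14 + 13 + 12, …) so the worst case stays flat, and scan the remaining gap one floor at a time with the second egg. The minimum worst-case drop count d for n floors is the smallest d with d(d+1)/2 ≥ n:

```python
def min_drops_two_eggs(floors: int) -> int:
    # Smallest d with d*(d+1)/2 >= floors: first-egg drops at shrinking
    # intervals d, d-1, d-2, ... keep the worst-case total constant.
    d = 0
    while d * (d + 1) // 2 < floors:
        d += 1
    return d

print(min_drops_two_eggs(100))  # 14 drops in the worst case
```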
“You have 100 coins lying flat on a table, each with a head side and a tail side. 10 of them are heads up, 90 are tails up. You can’t feel, see, or in any other way find out which side is up. Split the coins into two piles such that there are the same number of heads in each pile.” — Software Engineer candidate
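The classic solution: take any 10 coins as the second pile and flip them all. If k of the taken coins happened to be heads, the big pile holds 10 − k heads, and after flipping, so does the small one. A quick simulation:

```python
import random

def split_coins(coins):
    # Take any 10 coins and flip them; if k of them were heads, the
    # remaining 90 coins hold 10 - k heads, and the flipped pile of 10
    # now holds 10 - k heads as well.
    return coins[10:], [not c for c in coins[:10]]

random.seed(1)
coins = [True] * 10 + [False] * 90   # True = heads up
random.shuffle(coins)
pile_a, pile_b = split_coins(coins)
assert sum(pile_a) == sum(pile_b)    # equal heads in both piles
```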
“Describe yourself. What excites you?” — Software Engineer candidate
“There are three boxes, one contains only apples, one contains only oranges, and one contains both apples and oranges. The boxes have been incorrectly labeled such that no label identifies the actual contents of the box it labels. Opening just one box, and without looking in the box, you take out one piece of fruit. By looking at the fruit, how can you immediately label all of the boxes correctly?” — Software QA Engineer candidate
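The trick is to draw from the box labeled “apples and oranges”: since every label is wrong, that box must be pure, and the other two follow by elimination. In code:

```python
def relabel(drawn_fruit: str) -> dict:
    # Draw one fruit from the box labeled "both". Every label is wrong,
    # so that box is pure: it holds only what you drew. The box labeled
    # with the other pure fruit can't hold its own label and can't hold
    # the drawn fruit alone (already assigned), so it holds both; the
    # last box takes what's left.
    other = "oranges" if drawn_fruit == "apples" else "apples"
    return {"both": drawn_fruit, other: "both", drawn_fruit: other}

# Keys are the (wrong) labels, values the deduced actual contents.
assert relabel("apples") == {"both": "apples", "oranges": "both", "apples": "oranges"}
```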
“Scenario: You’re dealing with an angry customer who has been waiting for help for the past 20 minutes and is causing a commotion. She claims that she’ll just walk over to Best Buy or the Microsoft Store to get the computer she wants. Resolve this issue.” — Specialist candidate
“Have you ever disagreed with a manager’s decision, and how did you approach the disagreement? Give a specific example and explain how you rectified this disagreement, what the final outcome was, and how that individual would describe you today.” — Software Engineer candidate
“You put a glass of water on a record turntable and begin slowly increasing the speed. What happens first — does the glass slide off, tip over, or does the water splash out?” — Mechanical Engineer candidate
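One way to reason about it is to compare the spin rate at which friction fails (the glass slides) with the rate at which the inertial torque would tip it. Every number below is an assumption for a squat water glass, not a given of the question, and the water-splash case (which depends on the tilt of the water’s surface) is left out:

```python
import math

g = 9.81           # m/s^2
r = 0.10           # m, distance of the glass from the spindle (assumed)
mu = 0.4           # glass-on-platter friction coefficient (assumed)
half_base = 0.035  # m, half the glass's base width (assumed)
cg_height = 0.05   # m, height of the center of gravity (assumed)

# Sliding: needed centripetal force beats friction when w^2 * r > mu * g.
w_slide = math.sqrt(mu * g / r)
# Tipping: inertial torque beats gravity when w^2 * r > g * half_base / cg_height.
w_tip = math.sqrt(g * half_base / (cg_height * r))

# With half_base / cg_height (0.7) > mu (0.4), the slide threshold is lower,
# so under these assumptions the glass slides off first.
print(f"slides at {w_slide:.1f} rad/s, would tip at {w_tip:.1f} rad/s")
```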
“Tell me something that you have done in your life which you are particularly proud of.” — Software Engineering Manager candidate
“Given an iTunes type of app that pulls down lots of images that get stale over time, what strategy would you use to flush disused images over time?” — Software Engineer candidate
“If you’re given a jar with a mix of fair and unfair coins, and you pull one out and flip it 3 times, and get the specific sequence heads heads tails, what are the chances that you pulled out a fair or an unfair coin?” — Lead Analyst candidate
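The question underspecifies the jar, so any answer must state its assumptions. Below, a Bayes-rule sketch assuming the jar is half fair coins and half double-headed ones (both assumptions, not givens): seeing even one tail then rules the double-headed coin out entirely.

```python
def p_fair(seq: str, p_unfair_heads: float = 1.0, prior_fair: float = 0.5) -> float:
    # Posterior probability the drawn coin is fair, given a flip sequence
    # like "HHT". Defaults assume a half-and-half jar and a double-headed
    # "unfair" coin -- both assumptions, not givens of the question.
    like_fair = 0.5 ** len(seq)
    like_unfair = 1.0
    for flip in seq:
        like_unfair *= p_unfair_heads if flip == "H" else 1.0 - p_unfair_heads
    evidence = like_fair * prior_fair + like_unfair * (1.0 - prior_fair)
    return like_fair * prior_fair / evidence

print(p_fair("HHT"))                                  # 1.0: a tail rules out two heads
print(round(p_fair("HHT", p_unfair_heads=0.75), 3))   # biased coin instead: 0.471
```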
“What was your best day in the last 4 years? What was your worst?” — Engineering Project Manager candidate
Creative thinking, tricky logic problems: depending on the role they apply for, candidates are asked questions that test their technical understanding. Some must demonstrate empathy, solve logic puzzles, or show creative thinking.
“Explain to an 8-year-old how a modem/router works.” — Phone support candidate
“How many children are born every day?” — Global sales candidate
“You come across as very positive. What puts you in a bad mood?” — Family Room Specialist candidate
“Why did Apple change its name from Apple Computer Incorporated to Apple Inc.?” — Apple Specialist candidate
“How would you test a toaster?” — Software Engineer candidate
“How would you calculate the cost of a ballpoint pen?” — Global sales candidate
“A man calls in and his computer is basically junk at this point. What do you do?” — AppleCare phone support candidate
Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.
Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)
The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”
There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.
Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.
That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.
“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.
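His chain of estimates is simple enough to check, using only the figures he gives in the article:

```python
baseline = 0.50                          # his own 50-50 reading of the LRRK2 odds
after_lifestyle = baseline * 0.5         # diet and exercise: "down by half"
after_research = after_lifestyle * 0.5   # neuroscience progress halves it again
print(after_research)                    # 0.125, rounded in the article to "about 13 percent"
```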
Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries an LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place.
His approach is notable for another reason. This isn’t just another variation on venture philanthropy—the voguish application of business school practices to scientific research. Brin is after a different kind of science altogether. Most Parkinson’s research, like much of medical research, relies on the classic scientific method: hypothesis, analysis, peer review, publication. Brin proposes a different approach, one driven by computational muscle and staggeringly large data sets. It’s a method that draws on his algorithmic sensibility—and Google’s storied faith in computing power—with the aim of accelerating the pace and increasing the potential of scientific research. “Generally the pace of medical research is glacial compared to what I’m used to in the Internet,” Brin says. “We could be looking lots of places and collecting lots of information. And if we see a pattern, that could lead somewhere.”
In other words, Brin is proposing to bypass centuries of scientific epistemology in favor of a more Googley kind of science. He wants to collect data first, then hypothesize, and then find the patterns that lead to answers. And he has the money and the algorithms to do it.
Given what seems like very bad news, most of us would actually do what Brin did: go over our options, get some advice, and move on with life.
Brin’s faith in the power of numbers—and the power of knowledge, more generally—is likely something he inherited from his parents, both scientists. His father, Michael, is a second-generation mathematician; his mother, Eugenia, is trained in applied mathematics and spent years doing meteorology research at NASA. The family emigrated from Russia when Brin was 6. At 17, he took up mathematics himself at the University of Maryland, later adding a second major in computer science. When he reached Stanford for his PhD—a degree he still hasn’t earned, much to his parents’ chagrin—he focused on data mining. That’s when he began thinking about the power of large data sets and what might come of analyzing them for unexpected patterns and insights.
Around the same time, in 1996, Brin’s mother started to feel some numbness in her hands. The initial diagnosis was repetitive stress injury, brought on by years of working at a computer. When tests couldn’t confirm that diagnosis, her doctors were stumped. Soon, though, Eugenia’s left leg started to drag. “It was just the same as my aunt, who had Parkinson’s years ago,” she recalls. “The symptoms started in the same way, at the same age. To me, at least, it was obvious there was a connection.”
At the time, scientific opinion held that Parkinson’s was not hereditary, so Brin didn’t understand his mother’s concern. “I thought it was crazy and completely irrational,” he says. After further tests at Johns Hopkins and the Mayo Clinic, though, she was diagnosed with Parkinson’s in 1999.
Even after the LRRK2 connection was made in 2004, Brin still didn’t connect his mother’s Parkinson’s to his own health. Then, in 2006, his wife-to-be, Anne Wojcicki, started the personal genetics company 23andMe (Google is an investor). As an alpha tester, Brin had the chance to get an early look at his genome. He didn’t find much of concern. But then Wojcicki suggested he look up a spot known as G2019S—the notch on the LRRK2 gene where an adenine nucleotide, the A in the ACTG code of DNA, sometimes substitutes for a guanine nucleotide, the G. And there it was: He had the mutation. His mother’s 23andMe readout showed that she had it, too.
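The lookup Brin describes amounts to a simple check against raw genotype data. A minimal sketch, assuming a 23andMe-style dictionary of SNP calls keyed by rsID (rs34637584 is the marker commonly used to tag LRRK2 G2019S; the sample data here is invented, not anyone’s real readout):

```python
# Hypothetical sketch of checking raw genotype data for the LRRK2 G2019S variant.
G2019S_RSID = "rs34637584"  # SNP commonly used to tag G2019S; risk allele is A

def carries_g2019s(raw_genotypes: dict) -> bool:
    """Return True if either allele at the G2019S site is the risk allele (A)."""
    genotype = raw_genotypes.get(G2019S_RSID, "")
    return "A" in genotype  # GG = typical; AG or AA = carrier

# Example: an AG call at this site means one copy of the mutation.
sample = {G2019S_RSID: "AG"}
print(carries_g2019s(sample))  # True for a carrier
```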
Brin didn’t panic; for one thing, his mother’s experience with the disease has been reassuring. “She still goes skiing,” he says. “She’s not in a wheelchair.” Instead, he spent several months mulling over the results. He began to consult experts, starting with scientists at the Michael J. Fox Foundation and at the Parkinson’s Institute, which is not far from Google’s headquarters. He quickly realized it was going to be impractical to keep his risk from the public. “I can’t talk to 1,000 people in secret,” he says. “So I might as well put it out there to the world. It seemed like information that was worthy of sharing and might even be interesting.”
So one day in September 2008, Brin started a blog. His first post was called simply “LRRK2.”
“I know early in my life something I am substantially predisposed to,” Brin wrote. “I now have the opportunity to adjust my life to reduce those odds (e.g., there is evidence that exercise may be protective against Parkinson’s). I also have the opportunity to perform and support research into this disease long before it may affect me. And, regardless of my own health, it can help my family members as well as others.”
Brin continued: “I feel fortunate to be in this position. Until the fountain of youth is discovered, all of us will have some conditions in our old age, only we don’t know what they will be. I have a better guess than almost anyone else for what ills may be mine—and I have decades to prepare for it.”
In a sense, we’ve been using genetics to foretell disease risk forever. When we talk about “family history,” we’re largely talking about DNA, about how our parents’ health might hint at our own. A genetic scan is just a more modern way to link our familial past with our potential future. But there’s something about the precision of a DNA test that can make people believe that chemistry is destiny—that it holds dark, implacable secrets. This is why genetic information is sometimes described as “toxic knowledge”: Giving people direct access to their genetic information, in the words of Stanford bioethicist Hank Greely, is out and out “reckless.”
It’s true that in the early days of the science, genetic testing meant learning about a dreaded degenerative disease like Huntington’s or cystic fibrosis. But these diseases, although easy to identify, are extremely rare. Newer research has shown that when it comes to getting sick, a genetic predisposition is usually just one factor. The vast majority of conditions are also influenced by environment and day-to-day habits, areas where we can actually take some action.
But, surprisingly, the concept of genetic information as toxic has persisted, possibly because it presumes that people aren’t equipped to learn about themselves. But research shows this presumption to be unfounded. In 2009, The New England Journal of Medicine published results of the Risk Evaluation and Education for Alzheimer’s Disease study, an 11-year project that sought to examine how people react to finding out that they have a genetic risk for Alzheimer’s. Like Parkinson’s, Alzheimer’s is a neurodegenerative condition centering on the brain. But unlike Parkinson’s, Alzheimer’s has no known treatment. So learning you have a genetic predisposition should be especially toxic.
In the study, a team of researchers led by Robert Green, a neurologist and geneticist at Boston University, contacted adults who had a parent with Alzheimer’s and asked them to be tested for a variation in a gene known as ApoE. Depending on the variation, an ApoE mutation can increase a person’s risk for Alzheimer’s from three to 15 times the average. One hundred sixty-two adults agreed; 53 were told they had the mutation.
The results were delivered to the participants with great care: A genetic counselor walked each individual through the data, and all the subjects had follow-up appointments with the counselor. Therapists were also on call. “People were predicting catastrophic reactions,” Green recalls. “Depression, suicide, quitting their jobs, abandoning their families. They were anticipating the worst.”
But that isn’t what happened. People told that they were at dramatically higher risk for developing Alzheimer’s later in life seemed to process the information and integrate it into their lives, often choosing to lead more healthy lifestyles. “People are handling it,” Green says. “It doesn’t seem to be producing any clinically apparent distress.”
In other experiments, Green has further challenged the conventional wisdom about the toxicity of genetic information: He has begun questioning the need for counselors and therapists. “We’re looking at what happens if you don’t do this elaborate thing. What if you do it like a lab test in your doctor’s office? We’re treating it more like cholesterol and less like Huntington’s disease.”
In other words, given what seems like very bad news, most of us would do what Sergey Brin did: Go over our options, get some advice, and move on with life. “Everyone’s got their challenges; everyone’s got something to deal with,” Brin says. “This is mine. To me, it’s just one of any number of things that I could get in old age. And the most important factor is that I can do something about it.”
High-Speed Science
Can a model fueled by data sets and computational power compete with the gold standard of research? Maybe: Here are two timelines—one from an esteemed traditional research project run by the NIH, the other from the 23andMe Parkinson’s Genetics Initiative. They reached almost the same conclusion about a possible association between Gaucher’s disease and Parkinson’s disease, but the 23andMe project took a fraction of the time. —Rachel Swaby
Traditional Model
1. Hypothesis: An early study suggests that patients with Gaucher’s disease (caused by a mutation to the GBA gene) might be at increased risk of Parkinson’s.
2. Studies: Researchers conduct further studies, with varying statistical significance.
3. Data aggregation: Sixteen centers pool information on more than 5,500 Parkinson’s patients.
4. Analysis: A statistician crunches the numbers.
5. Writing: A paper is drafted and approved by 64 authors.
6. Submission: The paper is submitted to The New England Journal of Medicine. Peer review ensues.
7. Acceptance: NEJM accepts the paper.
8. Publication: The paper notes that people with Parkinson’s are 5.4 times more likely to carry the GBA mutation.
Total time elapsed: 6 years
Parkinson’s Genetics Initiative
1. Tool Construction: Survey designers build the questionnaire that patients will use to report symptoms.
2. Recruitment: The community is announced, with a goal of recruiting 10,000 subjects with Parkinson’s.
3. Data aggregation: Community members get their DNA analyzed. They also fill out surveys.
4. Analysis: Reacting to the NEJM paper, 23andMe researchers run a database query based on 3,200 subjects. The results are returned in 20 minutes.
5. Presentation: The results are reported at a Royal Society of Medicine meeting in London: People with GBA are 5 times more likely to have Parkinson’s, which is squarely in line with the NEJM paper. The finding will possibly be published at a later date.
Total time elapsed: 8 months
If Brin’s blog post betrayed little fear about his risk for Parkinson’s, it did show a hint of disappointment with the state of knowledge on the disease. (His critique was characteristically precise: “Studies tend to have small samples with various selection biases.”)
His frustration is well founded. For decades, Parkinson’s research has been a poor cousin to the study of Alzheimer’s, which affects 10 times as many Americans and is therefore much more in the public eye. What is known about Parkinson’s has tended to emerge from observing patients in clinical practice, rather than from any sustained research. Nearly all cases are classified as idiopathic, meaning there’s no known cause. Technically, the disease is a result of the loss of brain cells that produce the neurotransmitter dopamine, but what causes those cells to die is unclear. The classic symptoms of the condition—tremors, rigidity, balance problems—come on gradually and typically don’t develop until dopamine production has declined by around 80 percent, meaning that a person can have the disease for years before experiencing the first symptom.
As far as treatments go, the drug levodopa, which converts to dopamine in the brain, remains the most effective. But the drug, developed in 1967, has significant side effects, including involuntary movements and confusion. Other interventions, like deep-brain stimulation, are invasive and expensive. Stem cell treatments, which generated great attention and promise a decade ago, “didn’t really work,” says William Langston, director of the Parkinson’s Institute. “Transferring nerve cells into the brain and repairing the brain has been harder than anybody thought.”
There are, however, some areas of promise—including the 2004 discovery of the LRRK2 connection. It’s especially common among people of Ashkenazi descent, like Brin, and appears in just about 1 percent of Parkinson’s patients. Rare as the mutation is, however, LRRK2 cases of Parkinson’s appear indistinguishable from other cases, making LRRK2 a potential window onto the disease in general.
LRRK2 stands for leucine-rich repeat kinase 2. Kinases are enzymes that activate proteins in cells, making them critical to cell growth and death. In cancer, aberrant kinases are known to contribute to tumor growth. That makes them a promising target for research. Drug companies have already developed kinase inhibitors for cancer, and there’s a huge opportunity for Parkinson’s treatment as well: If overactive kinases interfere with dopamine-producing cells in all Parkinson’s cases, then a kinase inhibitor may be able to help not just LRRK2 carriers but all people with the disease.
Another promising area for research is that delay between the loss of dopamine-producing cells and the onset of symptoms. As it stands, this lag makes treatment a much more difficult problem. “By the time somebody has full-blown Parkinson’s, it’s way too late,” Langston says. “Any number of promising drugs have failed, perhaps because we’re getting in there so late.” But doctors can’t tell who should get drugs earlier, because patients are asymptomatic. If researchers could find biomarkers—telltale proteins or enzymes detected by, say, a blood or urine test—that were produced before symptoms emerged, a drug regimen could be started early enough to work.
And indeed, Brin has given money to both these areas of research, predominantly through gifts to the Parkinson’s Institute and to the Michael J. Fox Foundation, which is committed to what’s called translational research—getting therapies from researchers to the clinic as quickly as possible. Last February the Fox Foundation launched an international consortium of scientists working on LRRK2, with a mandate for collaboration, openness, and speed. “The goal is to get people to change their behavior and share information much more quickly and openly,” says Todd Sherer, head of the Fox Foundation’s research team. “We need to change the thinking.”
As Brin’s understanding of Parkinson’s grew, though, and as he talked with Wojcicki about research models, he realized that there was an even bolder experiment in the offing.
In 1899, Bayer unveiled aspirin, a drug the company offered as an effective remedy for colds, lumbago, and toothaches, among other ills. How aspirin—or acetylsalicylic acid—actually worked was a mystery. All people knew was that it did (though a discouraging side effect, gastric bleeding, emerged in some people).
It wasn’t until the 1960s and ’70s that scientists started to understand the mechanism: Aspirin inhibits the production of chemicals in the body called prostaglandins, fatty acids that can cause inflammation and pain. That insight proved essential to understanding the later discovery, in 1988, that people who took aspirin every other day had remarkably reduced rates of heart attack—cases in men dropped by 44 percent. When the drug inhibits prostaglandins, it seems, it inhibits the formation of blood clots, as well—reducing the risk of heart attack or stroke.
The second coming of aspirin is considered one of the triumphs of contemporary medical research. But to Brin, who spoke of the drug in a talk at the Parkinson’s Institute last August, the story offers a different sort of lesson—one drawn from that period after the drug was introduced but before the link to heart disease was established. During those decades, Brin notes, surely “many millions or hundreds of millions of people who took aspirin had a variety of subsequent health benefits.” But the association with aspirin was overlooked, because nobody was watching the patients. “All that data was lost,” Brin said.
In Brin’s way of thinking, each of our lives is a potential contribution to scientific insight. We all go about our days, making choices, eating things, taking medications, doing things—generating what is inelegantly called data exhaust. A century ago, of course, it would have been impossible to actually capture this information, particularly without a specific hypothesis to guide a researcher in what to look for. Not so today. With contemporary computing power, that data can be tracked and analyzed. “Any experience that we have or drug that we may take, all those things are individual pieces of information,” Brin says. “Individually, they’re worthless, they’re anecdotal. But taken together they can be very powerful.”
In computer science, the process of mining such large data sets for useful associations is known as a market-basket analysis. Conventionally, it has been used to divine patterns in retail purchases. It’s how Amazon.com can tell you that “customers who bought X also bought Y.”
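The underlying computation can be sketched in a few lines: count how often items appear alone and together, then compute the confidence of a rule like “bought X, also bought Y.” A toy-data version (item names invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Toy purchase baskets; item names are invented for illustration.
baskets = [
    {"levodopa", "aspirin"},
    {"aspirin", "vitamin_d"},
    {"levodopa", "aspirin", "vitamin_d"},
    {"levodopa", "vitamin_d"},
]

# Count how often each item and each pair of items appears across baskets.
item_counts = Counter()
pair_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

def confidence(x, y):
    """Confidence of the rule 'customers who bought x also bought y'."""
    pair = tuple(sorted((x, y)))
    return pair_counts[pair] / item_counts[x]

print(confidence("levodopa", "aspirin"))  # 2 of the 3 levodopa baskets also contain aspirin
```

The same counting machinery works whether the “items” are retail purchases, words in documents, or, conceivably, symptoms and drugs in patient histories.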
But a problem emerges as the data in a basket become less uniform. This was the focus of much of Brin’s work at Stanford, where he published several papers on the subject. One, from 1997, argued that given the right algorithms, meaningful associations can be drawn from all sorts of unconventional baskets—”student enrollment in classes, word occurrence in text documents, users’ visits of Web pages, and many more.” It’s not a stretch to say that our experiences as patients might conceivably be the next item on the list.
This is especially true given the advances in computational power since 1997, when Brin and his fellow Stanford comp-sci student Larry Page were starting Google. “When Larry and I started the company,” Brin says, “we had to get some hard drives to, you know, store the entire Web. We ended up in a back alley in San Jose, dealing with some shady guy. We spent $10,000 or $20,000, all our life savings. We got these giant stacks of hard drives that we had to fit in our cars and get home. Just last week I happened to go to Fry’s and I picked up a hard drive that was 1 terabyte and cost like $100. And it was bigger than all those hard drives put together.”
This computing power can be put to work to answer questions about health. As an example, Brin cites a project developed at his company’s nonprofit research arm, Google.org. Called Google Flu Trends, the idea is elegantly simple: Monitor the search terms people enter on Google, and pull out those words and phrases that might be related to symptoms or signs of influenza, particularly swine flu.
In epidemiology, this is known as syndromic surveillance, and it usually involves checking drugstores for purchases of cold medicines, doctor’s offices for diagnoses, and so forth. But because acquiring timely data can be difficult, syndromic surveillance has always worked better in theory than in practice. By looking at search queries, though, Google researchers were able to analyze data in near real time. Indeed, Flu Trends can point to a potential flu outbreak two weeks faster than the CDC’s conventional methods, with comparable accuracy. “It’s amazing that you can get that kind of signal out of very noisy data,” Brin says. “It just goes to show that when you apply our newfound computational power to large amounts of data—and sometimes it’s not perfect data—it can be very powerful.” The same, Brin argues, would hold with patient histories. “Even if any given individual’s information is not of that great quality, the quantity can make a big difference. Patterns can emerge.”
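The “signal out of noisy data” claim boils down to ordinary correlation computed at scale. A toy illustration of the idea (all numbers invented, not actual Google or CDC figures):

```python
# Illustrative only: correlate weekly flu-related search volume with official
# case counts. The figures below are invented toy data.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

query_volume = [120, 150, 310, 480, 450, 300, 180, 140]  # e.g. "flu symptoms" searches
reported_cases = [3, 5, 14, 22, 20, 13, 7, 4]            # official counts, arriving later

print(round(pearson(query_volume, reported_cases), 3))   # close to 1.0
```

A correlation near 1.0 in a sketch like this is what lets the noisy, instantly available search signal stand in for the slow, authoritative surveillance numbers.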
Brin’s tolerance for “noisy data” is especially telling, since medical science tends to consider it poisonous. Biomedical researchers often limit their experiments to narrow questions that can be rigorously measured. But the emphasis on purity can mean fewer patients to study, which results in small data sets. That limits the research’s “power”—a statistical term for the probability that a study will detect an effect that actually exists. And by design it means the data almost never turn up insights beyond what the study set out to examine.
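Power can be made concrete with a quick simulation: hold a real effect fixed, and see how often studies of different sizes manage to detect it. A sketch under arbitrary assumed parameters (the effect size, threshold, and trial count here are illustrative, not drawn from any real study):

```python
import random

def detects_effect(n, true_shift=0.5, threshold=1.96):
    """One simulated study: z-like test of whether the sample mean differs from 0.

    The population truly is shifted by true_shift, with known sigma = 1.
    """
    sample = [random.gauss(true_shift, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    se = 1.0 / n ** 0.5  # standard error with sigma = 1
    return abs(mean / se) > threshold

def power(n, trials=2000):
    """Fraction of simulated studies of size n that detect the (real) effect."""
    random.seed(0)  # fixed seed so the estimate is reproducible
    return sum(detects_effect(n) for _ in range(trials)) / trials

print(power(10))   # a small study misses the real effect most of the time
print(power(100))  # a large study detects it almost every time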
Increasingly, though, scientists—especially those with a background in computing and information theory—are starting to wonder if that model could be inverted. Why not start with tons of data, a deluge of information, and then wade in, searching for patterns and correlations?
This is what Jim Gray, the late Microsoft researcher and computer scientist, called the fourth paradigm of science, the inevitable evolution away from hypothesis and toward patterns. Gray predicted that an “exaflood” of data would overwhelm scientists in all disciplines, unless they reconceived their notion of the scientific process and applied massive computing tools to engage with the data. “The world of science has changed,” Gray said in a 2007 speech—from now on, the data would come first.
Gray’s longtime employer, Bill Gates, recently made a small wager on the fourth paradigm when he invested $10 million in Schrödinger, a Portland, Oregon-based firm that’s using massive computation to rapidly simulate the trial and error of traditional pharmaceutical research.
And Andy Grove, former chair and CEO of Intel, has likewise called for a “cultural revolution” in science, one modeled on the tech industry’s penchant for speedy research and development. Grove, who was diagnosed with Parkinson’s in 2000 and has since made the disease his casus belli, shakes his fist at the pace of traditional science: “After 10 years in the Parkinson’s field, we may finally have three drugs in Phase I and Phase II trials next year—that’s more than ever before. But let’s get real. We’ll get the results in 2012, then they’ll argue about it for a year, then Phase III results in 2015, then argue about that for a year—if I’m around when they’re done …” He doesn’t finish his thought. “The whole field is not pragmatic enough. They’re too nice to themselves.”
Grove disagrees somewhat with Brin’s emphasis on patterns over hypothesis. “You have to be looking for something,” he says. But the two compare notes on the disease from time to time; both are enthusiastic and active investors in the Michael J. Fox Foundation. (Grove is even known to show up on the online discussion forums.)
In the world of traditional drug research, however, there’s more than a little skepticism about swapping out established biomedical approaches for technological models. Derek Lowe, a longtime medicinal chemist and author of a widely read drug industry blog, grants that big hardware and big data can be helpful. But for a disease as opaque as Parkinson’s, he argues, the challenge of drug development will always come down to basic chemistry and biology. “I don’t have a problem with data,” Lowe says. “The problem is that the data is tremendously noisy stuff. We just don’t know enough biology. If Brin’s efforts will help us understand that, I’m all for it. But I doubt they will.”
To be sure, biomedicine, and pharmaceutical research in particular, is not the same as software or computer chips. It’s a much more complicated process, and Brin acknowledges as much: “I’m not an expert in biological research. I write a bunch of computer code and it crashes, no big deal. But if you create a drug and it kills people, that’s a different story.” Brin knows that his method will require follow-up research to get through the traditional hoops of drug discovery and approvals. But, he adds, “in my profession you really make progress based on how quick your development cycle is.”
So, with the cooperation of the Parkinson’s Institute, the Fox Foundation, and 23andMe, he has proposed a new development cycle. Brin has contributed $4 million to fund an online Parkinson’s Disease Genetics Initiative at 23andMe: 10,000 people who’ve been diagnosed with the disease and are willing to pour all sorts of personal information into a database. (They’ve tapped about 4,000 so far.) Volunteers spit into a 23andMe test tube to have their DNA extracted and analyzed. That information is then matched up with surveys that extract hundreds of data points about the volunteers’ environmental exposures, their family history, disease progression, and treatment response. The questions range from the mundane (“Are you nearsighted?”) to the perplexing (“Have you had trouble staying awake?”). It is, in short, an attempt to create the always-on data-gathering project that Brin believes could aid all medical research—and, potentially, himself. “We have no grand unified theory,” says Nicholas Eriksson, a 23andMe scientist. “We have a lot of data.”
Why not do science differently? Gather tons of data, then start searching for correlations. Steven Wilson
It’s hard to overstate the difference between this approach and conventional research. “Traditionally, an experiment with 10 or 20 subjects was big,” says the Parkinson’s Institute’s Langston. “Then it went up to the hundreds. Now 1,000 subjects would be a lot—so with 10,000, suddenly we’ve reached a scale never seen before. This could dramatically advance our understanding.”
Langston offers a case in point. Last October, the New England Journal of Medicine published the results of a massive worldwide study that explored a possible association between people with Gaucher’s disease—a genetic condition in which fatty substances build up in the internal organs—and a risk for Parkinson’s. The study, run under the auspices of the National Institutes of Health, hewed to the highest standards and involved considerable resources and time. After years of work, it concluded that people with Parkinson’s were five times more likely to carry a Gaucher mutation.
Langston decided to see whether the 23andMe Research Initiative might be able to shed some insight on the correlation, so he rang up 23andMe’s Eriksson, and asked him to run a search. In a few minutes, Eriksson was able to identify 350 people who had the mutation responsible for Gaucher’s. A few clicks more and he was able to calculate that they were five times more likely to have Parkinson’s disease, a result practically identical to the NEJM study. All told, it took about 20 minutes. “It would’ve taken years to learn that in traditional epidemiology,” Langston says. “Even though we’re in the Wright brothers early days with this stuff, to get a result so strongly and so quickly is remarkable.”
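Once the database query returns the cohort counts, the arithmetic behind a “five times more likely” result is itself trivial: compare the disease rate among mutation carriers with the rate among non-carriers. A sketch with invented numbers (not actual 23andMe data):

```python
# Relative risk from cohort counts. All figures below are invented for illustration.

def relative_risk(carriers_with_pd, carriers, noncarriers_with_pd, noncarriers):
    """Ratio of disease risk in mutation carriers vs. non-carriers."""
    risk_carriers = carriers_with_pd / carriers
    risk_noncarriers = noncarriers_with_pd / noncarriers
    return risk_carriers / risk_noncarriers

# Toy cohort: 350 Gaucher-mutation carriers, 10,000 non-carriers.
rr = relative_risk(carriers_with_pd=35, carriers=350,
                   noncarriers_with_pd=200, noncarriers=10000)
print(round(rr, 1))  # 5.0: carriers are five times more likely to have the disease
```

The hard part, in other words, isn’t the statistics; it’s having 350 genotyped, surveyed carriers sitting in a queryable database in the first place.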
Mark Hallett, chief of the Human Motor Control section at the National Institute of Neurological Disorders and Stroke, saw Langston present his results at a recent conference and came away very impressed. “The quality of the data is probably not as good as it could be, since it’s provided by the patient,” he says. “But it’s an impressive research tool. It sounds like it’d be useful to generate new hypotheses as opposed to prove anything.”
But hypotheses are what Parkinson’s research needs more of, especially now that we can study people who, like Brin, have an LRRK2 mutation. Since some of these carriers don’t get the disease, we should try to discern why. “This is an information-rich opportunity,” Brin says. “It’s not just the genes—it could be environment or behaviors, it could be that they take aspirin. We don’t know.”
This approach—huge data sets and open questions—isn’t unknown in traditional epidemiology. Some of the greatest insights in medicine have emerged from enormous prospective projects like the Framingham Heart Study, which has followed 15,000 citizens of one Massachusetts town for more than 60 years, learning about everything from smoking risks to cholesterol to happiness. Since 1976, the Nurses’ Health Study has tracked more than 120,000 women, uncovering risks for cancer and heart disease. These studies were—and remain—rigorous, productive, fascinating, even lifesaving. They also take decades and demand hundreds of millions of dollars and hundreds of researchers. The 23andMe Parkinson’s community, by contrast, requires fewer resources and demands far less manpower. Yet it has the potential to yield just as much insight as a Framingham or a Nurses’ Health. It automates science, making it something that just … happens. To that end, later this month 23andMe will publish several new associations that arose out of its main database, which now includes 50,000 individuals, that hint at the power of this new scientific method.
“The exciting thing about this sort of research is the breadth of possibilities that it tests,” Brin says. “Ultimately many medical discoveries owe a lot to just some anecdotal thing that happened to have happened, that people happened to have noticed. It could have been the dime they saw under the streetlight. And if you light up the whole street, it might be covered in dimes. You have no idea. This is trying to light up the whole street.”
Sergey Brin is different. Few people have the resources to bend the curve of science; fewer still have spouses who run genetics companies. Given these circumstances and his data-driven mindset, Brin is likely more comfortable with genetic knowledge than most of us. And few people are going to see their own predicament as an opportunity to forge a new sort of science. So yeah, he’s different.
Ask Brin whether he’s a rare breed, and you won’t get much; on-the-record self-reflection doesn’t come easily to him. “Obviously I’m somewhat unusual in the resources that I can bring to bear,” he allows. “But all the other things that I do—the lifestyle, the self-education, many people can do that. So I’m not really that unique. I’m just early. It’s more that I’m on the leading edge of something.”
A decade ago, scientists spent $3 billion to sequence one human genome. Today, at least 20 people have had their whole genomes sequenced, and anyone with $48,000 can add their name to the list. That cost is expected to plummet still further in the next few years. (Brin is in line to have his whole genome sequenced, and 23andMe is considering offering whole-genome tests, though the company hasn’t determined a price.)
As the cost of sequencing drops and research into possible associations increases, whole genome sequencing will become a routine part of medical treatment, just as targeted genetic tests are a routine part of pregnancy today. The issue won’t be whether to look; it will be what to do with what’s found.
Today, the possibility of a rudimentary genetic test appearing on the shelves of Walgreens is headline news—delivered, inevitably, with the subtext that ordinary people will come undone upon learning about their genetic propensities. But other tests have gone from incendiary to innocuous. (Walgreens already stocks at-home paternity tests and HIV tests.) And other disclosures have gone from radical to routine. (In 1961, 90 percent of physicians said they wouldn’t tell their patients if they had cancer.) And other data points have gone from baffling to banal. (Blood pressure, LDL cholesterol, and blood sugar are now the stuff of watercooler chats.)
So, too, will it go with DNA. We’ll all find out about our propensities for disease in great detail and be compelled to work our own algorithms to address that risk. In many cases, this will be straightforward. There will be things we can do today and treatments we can undergo tomorrow.
But in some cases, undoubtedly, we may find ourselves in a circumstance like Brin’s, with an elevated risk for a disease with no cure. So we’ll exercise more, start eating differently, and do whatever else we can think of while we wait for science to catch up. In that way, Brin’s story isn’t just a billionaire’s tale. It’s everyone’s.
This historical chart compiled by Statista shows how quickly and utterly Apple has dominated the smartphone market. Samsung is now the only other major handset company earning significant profits from smartphones.
Five years ago, the iPhone was still the top profit-maker, but a lot of other companies were in the game. Since then, the platform battle has become a two-player race between Apple’s iOS and Google’s Android, driving third-way competitors like BlackBerry and Microsoft/Nokia down into the loss zone. The fierce competition between Android handset makers, particularly with the rise of inexpensive Chinese Android phones, has also sucked a lot of profit out of the market.
Well, that took no time at all. In the immediate wake of the latest co-ordinated terror attacks in the French capital on Friday, intelligence agencies rolled right into the horror and fury to launch their latest co-ordinated assault on strong encryption, and on the tech companies creating secure comms services, seeking to scapegoat end-to-end encryption as the enabling layer for extremists to perpetrate mass murder.
There’s no doubt they were waiting for just such an ‘opportune moment’ to redouble their attacks on encryption after recent attempts to lobby for encryption-perforating legislation foundered. (A strategy confirmed by a leaked email sent by the intelligence community’s top lawyer, Robert S. Litt, this August — and subsequently obtained by the Washington Post — in which he anticipated that a “very hostile legislative environment… could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement”. Et voila Paris… )
Speaking to CBS News over the weekend, in the immediate aftermath of the Paris attacks, former CIA deputy director Michael Morell said: “I think this is going to open an entire new debate about security versus privacy.”
“We, in many respects, have gone blind as a result of the commercialization and the selling of these devices that cannot be accessed either by the manufacturer or, more importantly, by us in law enforcement, even equipped with search warrants and judicial authority,” added New York City police commissioner, William J. Bratton, quoted by the NYT in a lengthy article probing the “possible” role of encrypted messaging apps in the Paris attacks.
Elsewhere the fast-flowing attacks on encrypted tech services have come without a byline—from unnamed European and American officials who say they are “not authorized to speak publicly”. Yet they are happy to speak publicly, anonymously.
The NYT published an article on Sunday alleging that attackers had used “encryption technology” to communicate — citing “European officials who had been briefed on the investigation but were not authorized to speak publicly”. (The paper subsequently pulled the article from its website, as noted by InsideSources, although it can still be read via the Internet Archive.)
The irony of government/intelligence agency sources briefing against encryption on condition of anonymity as they seek to undermine the public’s right to privacy would be darkly comic if it weren’t quite so brazen.
Here’s what one such unidentified British intelligence source told Politico: “As members of the general public get preoccupied that the government is spying on them, they have adopted these applications and terrorists have found them tailor-made for their own use.”
It’s a pretty incredible claim when you examine it. This unknown spook mouthpiece is saying terrorists are able to organize acts of mass murder as a direct consequence of the public’s dislike of government mass surveillance. Take even a cursory glance at the history of terrorism and that claim folds in on itself immediately. The highly co-ordinated 9/11 attacks of 2001 required no backdrop of public privacy fears in order to be carried out — and with horrifying, orchestrated effectiveness.
In the same Politico article, an identified source — J.M. Berger, the co-author of a book about ISIS — makes a far more credible claim: “Terrorists use technology improvisationally.”
Of course they do. The co-founder of secure messaging app Telegram, Pavel Durov, made much the same point earlier this fall when asked directly by TechCrunch about ISIS using his app to communicate. “Ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one,” Durov argued. “I still think we’re doing the right thing — protecting our users privacy.”
Bottom line: banning encryption or forcing tech companies to backdoor communications services has zero chance of being effective at stopping terrorists finding ways to communicate securely. They can and will route around such attempts to infiltrate their comms, as others have detailed at length.
Here’s a recap: terrorists can use encryption tools that are freely distributed from countries where your anti-encryption laws have no jurisdiction. Terrorists can (and do) build their own securely encrypted communication tools. Terrorists can switch to newer (or older) technologies to circumvent enforcement laws or enforced perforations. They can use plain old obfuscation to code their communications within noisy digital platforms like the PlayStation 4 network, folding their chatter into general background digital noise (of which there is no shortage). And terrorists can meet in person, using a network of trusted couriers to facilitate these meetings, as Al Qaeda — the terrorist group that perpetrated the highly sophisticated 9/11 attacks at a time when smartphones were far less common and there was no ready supply of easy-to-use end-to-end encrypted messaging apps — is known to have done.
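The point about freely distributed tools can be made concrete. The public-key math that underpins end-to-end encrypted messaging is openly published and fits in a few lines of code. Below is an illustrative toy sketch (not any company's actual implementation) of textbook Diffie-Hellman key agreement using only the Python standard library; the parameters are deliberately tiny and insecure, but the procedure is identical to what real systems do with 2048-bit-plus primes or elliptic curves:

```python
# Toy Diffie-Hellman key agreement -- the kind of public, well-documented
# math that no national ban on encryption products can take off the internet.
# WARNING: toy parameters, far too small for any real security.
import secrets

p = 0xFFFFFFFB  # a small prime modulus (2**32 - 5); real systems use huge primes
g = 5           # public generator

# Each party picks a private exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Each side combines its own private key with the other's public value;
# both arrive at the same shared secret without ever transmitting it.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)

assert a_shared == b_shared
print("shared secret:", hex(a_shared))
```

Scaled up to proper key sizes, this is roughly the key-agreement step inside any end-to-end encrypted messenger, which is why "routing around" a legal backdoor mandate is trivial for a motivated adversary.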
Point is, technology is not a two-lane highway that can be regulated with a couple of neat roadblocks — whatever many politicians appear to think. All such roadblocks will do is catch the law-abiding citizens who rely on digital highways to conduct more and more aspects of their daily lives. And make those law-abiding citizens less safe in multiple ways.
There’s little doubt that the lack of technological expertise in the upper echelons of governments is snowballing into a very ugly problem indeed as technology becomes increasingly sophisticated yet political rhetoric remains grounded in age-old kneejerkery. Of course we can all agree it would be beneficial if we were able to stop terrorists from communicating. But the hard political truth of the digital era is that’s never going to be possible. It really is putting the proverbial finger in the dam. (There are even startups working on encryption that’s futureproofed against quantum computers — and we don’t even have quantum computers yet.)
Another hard political truth is that effective counter terrorism policy requires spending money on physical, on-the-ground resources — putting more agents on the ground, within local communities, where they can gain trust and gather intelligence. (Not to mention having a foreign policy that seeks to promote global stability, rather than generating the kind of regional instability that feeds extremism by waging illegal wars, for instance, or selling arms to regimes known to support the spread of extremist religious ideologies.)
The draft Investigatory Powers Bill also has some distinctly ambiguous wording when it comes to encryption — suggesting the U.K. government is still seeking to legislate a general ability that companies be able to decrypt communications. Ergo, to outlaw end-to-end encryption. Yes, we’re back here again. You’d be forgiven for thinking politicians lacked a long-term memory.
Effective encryption might be a politically convenient scapegoat to kick around in the wake of a terror attack — given it can be used to detract attention from big picture geopolitical failures of governments. And from immediate on the ground intelligence failures — whether those are due to poor political direction, or a lack of resources, or bad decision-making/prioritization by overstretched intelligence agency staff. Pointing the finger of blame at technology companies’ use of encryption is a trivial diversion tactic to detract from wider political and intelligence failures with much more complex origins.
But seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.
Mandating vulnerabilities be built into digital communications opens up an even worse prospect: new avenues for terrorists and criminals to exploit. As officials are busy spinning the notion that terrorism is all-but only possible because of the rise of robust encryption, consider this: if the public is deprived of its digital privacy — with terrorism applied as the justification to strip out the robust safeguard of strong encryption — then individuals become more vulnerable to acts of terrorism, given their communications cannot be safeguarded from terrorists. Or criminals. Or fraudsters. Or anyone incentivized by malevolent intent.
If you want to speculate on fearful possibilities, think about terrorists being able to target individuals at will via legally-required-to-be insecure digital services. If you think terror tactics are scary right now, think about terrorists having the potential to single out, track and terminate anyone at will based on whatever twisted justification fits their warped ideology — perhaps after that person expressed views they do not approve of in an online forum.
In a world of guaranteed insecure digital services it’s a far more straightforward matter for a terrorist to hack into communications to obtain the identity of a person they deem a target, and to use other similarly perforated technology services to triangulate and track someone’s location to a place where they can be made the latest victim of a new type of hyper-targeted, mass surveillance-enabled terrorism. Inherently insecure services could also be more easily compromised by terrorists to broadcast their own propaganda, or send out phishing scams, or otherwise divert attention en masse.
The only way to protect against these scenarios is to expand the reach of properly encrypted services. To champion the cause of safeguarding the public’s personal data and privacy, rather than working to undermine it — and undermining the individual freedoms the West claims to be so keen to defend in the process.
Meanwhile, when it comes to counter terrorism strategy, what’s needed is more intelligent targeting, not more mass measures that treat everyone as a potential suspect and deluge security agencies in an endless churn of irrelevant noise. Even the robust end-to-end encryption that’s now being briefed against as a ‘terrorist-enabling evil’ by shadowy officials on both sides of the Atlantic can be compromised at the level of an individual device. There’s no guaranteed shortcut to achieve that. Nor should there be — that’s the point. It takes sophisticated, targeted work.
But blanket measures to compromise the security of the many in the hopes of catching out the savvy few are even less likely to succeed on the intelligence front. We have mass surveillance already, and we also have blood on the streets of Paris once again. Encryption is just a convenient scapegoat for wider policy failures of an industrial surveillance complex.
So let’s not be taken in by false flags flown by anonymous officials trying to mask bad political decision-making. And let’s redouble our efforts to fight bad policy which seeks to entrench a failed ideology of mass surveillance — instead of focusing intelligence resources where they are really needed; homing in on signals, not drowned out by noise.
In the US, 28% of cars are leased. While it is uncommon to lease inexpensive vehicles and family cars, close to half of all luxury cars are leased. That percentage is higher in only one other car segment: electric vehicles (EVs). In the first three quarters of 2015, 75% of new EVs were leased!
The most common explanation is that EVs are still too expensive to buy. Another popular reason is that customers do not trust the durability of electric powertrains and lithium-ion battery technology. Finally, customers claim that driving range might be an issue and thus prefer leasing over buying (more on my thoughts on driving range anxiety).
All three reasons play a major role, and all of them were researched by J.D. Power back in 2010. However, they don’t sufficiently explain the high lease rates among EV customers today. Here are three insights into why car leases are roughly three times more common in the EV segment and why car ownership is becoming rare among young customers.
GenY (Millennials) Adopts New Purchasing Habits
[Chart: Average Earnings for Young Adults, in 2013 dollars]
[Chart: Cars Sold in Millions, per Generation]
Car leases are already the most popular way of “purchasing” a luxury or electric vehicle (EV). First, I documented why millennials/younger customers are more likely to lease. Second, I described why technology changes can lead to reduced interest in buying. Finally, I tried to prove that smartphones have given users the ability to experience freedom without owning a car.
These three points lead to a hypothesis: GenY, as the second-largest car-buying generation, is leading the ownership disruption in the car segment. They buy fewer cars per 1,000 citizens, have the highest percentage of leases, and have different expectations for cars (in terms of technologies and features). How can car manufacturers attract GenY and make driving appealing again?
Let’s take a look outside the car industry. How are technology firms attracting young customers? The smartphone market, like the car market, has taken a hit in the last few years. The handset replacement cycle has slowed significantly; it is now the slowest it has been since the introduction of the iPhone in 2007. In 2014, 143 million mobile phones were sold in the United States (down 15%), of which ~90% were smartphones. In 2007, users upgraded their phones every ~19 months; today they upgrade every 26+ months.