Don’t wanna Cry? Use Linux. Life is too short to reboot.
So far, over 213,000 computers across 99 countries have been infected, and the infection count is still rising even hours after the kill switch was triggered by the 22-year-old British security researcher behind the Twitter handle “MalwareTech.”
For those unaware, WannaCry is an insanely fast-spreading piece of ransomware that leverages a Windows SMB exploit to remotely target computers running unpatched or unsupported versions of Windows.
Once it infects a machine, WannaCry also scans for other vulnerable computers on the same network, as well as random hosts on the wider Internet, in order to spread quickly.
The SMB exploit currently being used by WannaCry has been identified as EternalBlue, part of a collection of hacking tools allegedly created by the NSA and subsequently dumped by a hacking group calling itself “The Shadow Brokers” over a month ago.
“If NSA had privately disclosed the flaw used to attack hospitals when they *found* it, not when they lost it, this may not have happened,” NSA whistleblower Edward Snowden says.
MEET MOXIE MARLINSPIKE, THE ANARCHIST BRINGING ENCRYPTION TO ALL OF US
ON THE FIRST DAY of the sprawling RSA security industry conference in San Francisco, a giant screen covering the wall of the Moscone Center’s cavernous lobby cycles through the names and headshots of keynote speakers: steely-eyed National Security Agency director Michael Rogers in a crisp military uniform; bearded and besuited Whitfield Diffie and Ron Rivest, legendary inventors of seminal encryption protocols that made the Internet safe for communication and commerce. And then there’s Moxie Marlinspike, peering somberly into the distance wearing a bicycle jersey and an 18-inch-tall helmet shaped like a giant spear of asparagus. “It was the only picture I could find,” Marlinspike deadpans as we walk into the building.
Even without the vegetable headwear, Marlinspike’s wire-thin 6’2″ frame and topknot of blond dreadlocks don’t fit the usual profile of the crypto world’s spooks and academics, nor RSA’s corporate types. Walking toward the ballroom where he’s set to speak on the annual Cryptographers’ Panel, however, he tells me it’s not his first time at the conference.
In fact, when Marlinspike made his debut visit to RSA 20 years ago, as a teenager, he wasn’t invited. Lured by the promise of seeing his cryptographer heroes in person, he snuck in, somehow snagging a conference badge without paying the $1,000 registration fee. Later, he made the mistake of handing it off to friends who were more interested in scoring lunch than in hearing about pseudo-random-number generators. They were spotted and kicked out. RSA organizers must have gone so far as to report Marlinspike’s mischief to law enforcement, he says; years later he requested his FBI file and discovered a reference to the incident.
A middle-aged man in a sports coat and jeans approaches us, carrying a Wall Street Journal. He shakes Marlinspike’s hand and thanks him for creating the encrypted messaging app Signal, which the man says was recommended to him by a friend, a former FBI agent. Marlinspike looks back at me with raised eyebrows.
Signal, widely considered the most secure and easiest-to-use free encrypted messaging and voice-calling app, is the reason he’s been invited to speak as part of the very same crypto Jedi Council he had worshipped as a teenager. Marlinspike designed Signal to bring uncrackable encryption to regular people. And though he hadn’t yet revealed it at the time of the conference in March, Signal’s encryption protocol had been integrated into WhatsApp, the world’s most popular messaging app, with over a billion users.
“I think law enforcement should be difficult. And it should actually be possible to break the law.”
For any cypherpunk with an FBI file, it’s already an interesting morning. At the very moment the Cryptographers’ Panel takes the stage, Apple and the FBI are at the height of a six-week battle, arguing in front of the House Judiciary Committee over the FBI’s demand that Apple help it access an encrypted iPhone 5c owned by San Bernardino killer Syed Rizwan Farook. Before that hearing ends, Apple’s general counsel will argue that doing so would set a dangerous legal precedent, inviting foreign governments to make similar demands, and that the crypto-cracking software could be co-opted by criminals or spies.
The standoff quickly becomes the topic of the RSA panel, and Marlinspike waits politely for his turn to speak. Then he makes a far simpler and more radical argument than any advanced by Apple: Perhaps law enforcement shouldn’t be omniscient. “They already have a tremendous amount of information,” he tells the packed ballroom. He points out that the FBI had accessed Farook’s call logs as well as an older phone backup. “What the FBI seems to be saying is that we need this because we might be missing something. Obliquely, they’re asking us to take steps toward a world where that isn’t possible. And I don’t know if that’s the world we want to live in.”
Marlinspike follows this remark with a statement that practically no one else in the privacy community is willing to make in public: that yes, people will use encryption to do illegal things. And that may just be the whole point. “I actually think that law enforcement should be difficult,” Marlinspike says, looking calmly out at the crowd. “And I think it should actually be possible to break the law.”
OVER THE PAST several years, Marlinspike has quietly positioned himself at the front lines of a quarter-century-long war between advocates of encryption and law enforcement. Since the first strong encryption tools became publicly available in the early ’90s, the government has warned of the threat posed by “going dark”—that such software would cripple American police departments and intelligence agencies, allowing terrorists and organized criminals to operate with impunity. In 1993 it unsuccessfully tried to implement a backdoor system called the Clipper Chip to get around encryption. In 2013, Edward Snowden’s leaks revealed that the NSA had secretly sabotaged a widely used crypto standard in the mid-2000s and that since 2007 the agency had been ingesting a smorgasbord of tech firms’ data with and without their cooperation. Apple’s battle with the FBI over Farook’s iPhone destroyed any pretense of a truce.
As the crypto war once again intensifies, Signal and its core protocol have emerged as darlings of the privacy community. Johns Hopkins computer science professor Matthew Green recalls that the first time he audited Marlinspike’s code, he was so impressed that, as he puts it, “I literally discovered a line of drool running down my face.”
Marlinspike has enabled the largest end-to-end encrypted communications network in history.
While Marlinspike may present himself as an eccentric outsider, his ability to write freakishly secure software has aligned him with some of the tech industry’s biggest companies. For a time he led Twitter’s security team. His deal with WhatsApp means that the Facebook-owned company now uses his tools to encrypt every message, image, video, and voice call that travels over its global network; in effect Marlinspike has enabled the largest end-to-end encrypted communications network in history, transmitting more texts than every phone company in the world combined. In May, Google revealed that it too would integrate Signal—into the incognito mode of its messaging app Allo. And last month, Facebook Messenger began its own rollout of the protocol in an encryption feature called “secret conversations,” which promises to bring Signal to hundreds of millions more users. “The entire world is making this the standard for encrypted messaging,” Green says.
So far, governments aren’t having much luck pushing back. In March, Brazilian police briefly jailed a Facebook exec after WhatsApp failed to comply with a surveillance order in a drug investigation. The same month, The New York Times revealed that WhatsApp had received a wiretap order from the US Justice Department. The company couldn’t have complied in either case, even if it wanted to. Marlinspike’s crypto is designed to scramble communications in such a way that no one but the people on either end of the conversation can decrypt them (see sidebar). “Moxie has brought us a world-class, state-of-the-art, end-to-end encryption system,” WhatsApp cofounder Brian Acton says. “I want to emphasize: world-class.”
For Marlinspike, a failed wiretap can mean a small victory. A few days after Snowden’s first leaks, Marlinspike posted an essay to his blog titled “We Should All Have Something to Hide,” emphasizing that privacy allows people to experiment with lawbreaking as a precursor for social progress. “Imagine if there were an alternate dystopian reality where law enforcement was 100 percent effective, such that any potential offenders knew they would be immediately identified, apprehended, and jailed,” he wrote. “How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same-sex marriage should be permitted?”
To some, Marlinspike’s logic isn’t quite as airtight as his code. Not all criminals are tech masterminds.
He admits that dangerous criminals and terrorists may use apps like Signal and WhatsApp. (ISIS has even circulated a manual recommending Signal.) But he argues that those elements have always had the incentive and ability to encrypt their communications with tougher-to-use tools like the encryption software PGP. His work, he says, is to make those protections possible for the average person without much tech savvy.
To some, Marlinspike’s logic isn’t quite as airtight as his code. Not all criminals are tech masterminds—the San Bernardino killers, for example. Former NSA attorney and Brookings Institution fellow Susan Hennessey wonders who, if not a democratically elected government, gets to determine which lawbreakers deserve to be wiretapped. Americans have long agreed, she argues, to enable a certain degree of police surveillance to prevent truly abhorrent crimes like child pornography, human trafficking, and terrorism. “We could set up our laws to reject surveillance outright, but we haven’t,” she says. “We’ve made a collective agreement that we derive value from some degree of government intrusion.” A spokesman for the FBI, when asked to comment on Marlinspike’s law-breaking philosophy, replied, “The First Amendment protects people who hold whatever view they want. Some people are members of the KKK. I’m not going to engage in a debate with him.”
Marlinspike isn’t particularly interested in a debate, either; his mind was made up long ago, during years as an anarchist living on the fringes of society. “From very early in my life I’ve had this idea that the cops can do whatever they want, that they’re not on your team,” Marlinspike told me. “That they’re an armed, racist gang.”
Marlinspike views encryption as a preventative measure against a slide toward Orwellian fascism that makes protest and civil disobedience impossible, a threat he traces as far back as J. Edgar Hoover’s FBI wiretapping and blackmailing of Martin Luther King Jr. “Moxie is compelled by the troublemakers of history and their stories,” says Tyler Reinhard, a designer who worked on Signal. “He sees encryption tools not as taking on the state directly but making sure that there’s still room for people to have those stories.”
ASK MARLINSPIKE TO tell his own story, and—no surprise for a privacy zealot—he’ll often answer with diversions, monosyllables, and guarded smiles. But anyone who’s crossed paths with him seems to have an outsize anecdote: how he once biked across San Francisco carrying a 40-foot-tall sailboat mast. The time he decided to teach himself to pilot a hot-air balloon, bought a used one from Craigslist, and spent a month on crutches after crashing it in the desert. One friend swears he’s seen Marlinspike play high-stakes rock-paper-scissors dozens of times—with bets of hundreds of dollars or many hours of his time on the line—and has never seen him lose.
But before Marlinspike was a subcultural contender for “most interesting man in the world,” he was a kid growing up with a different and far less interesting name on his birth certificate, somewhere in a region of central Georgia that he describes as “one big strip mall.” His parents—who called him Moxie as a nickname—separated early on. He lived mostly with his mother, a secretary and paralegal at a string of companies. Any other family details, like his real name, are among the personal subjects he prefers not to comment on.
Marlinspike hated the curiosity-killing drudgery of school. But he had the idea to try programming videogames on an Apple II in the school library. The computer had a Basic interpreter but no hard drive or even a floppy disk to save his code. Instead, he’d retype simple programs again and again from scratch with every reboot, copying in commands from manuals to make shapes fill the screen. Browsing the computer section of a local bookstore, the preteen Marlinspike found a copy of 2600 magazine, the catechism of the ’90s hacker scene. After his mother bought a cheap desktop computer with a modem, he used it to trawl bulletin board services, root friends’ computers to make messages appear on their screens, and run a “war-dialer” program overnight, reaching out to distant servers at random.
“Moxie likes the idea that there is an unknown, that the world is not a completely surveilled thing.”
To a bored middle schooler, it was all a revelation. “You look around and things don’t feel right, but you’ve never been anywhere else and you don’t know what you’re missing,” Marlinspike says. “The Internet felt like a secret world hidden within this one.”
By his teens, Marlinspike was working after school for a German software company, writing developer tools. After graduating high school—barely—he headed to Silicon Valley in 1999. “I thought it would be like a William Gibson novel,” he says. “Instead it was just office parks and highways.” Jobless and homeless, he spent his first nights in San Francisco sleeping in Alamo Square Park beside his desktop computer.
Eventually, Marlinspike found a programming job at BEA-owned WebLogic. But almost as soon as he’d broken into the tech industry, he wanted out, bored by the routine of spending 40 hours a week in front of a keyboard. “I thought, ‘I’m supposed to do this every day for the rest of my life?’” he recalls. “I got interested in experimenting with a way to live that didn’t involve working.”
For the next few years, Marlinspike settled into a Bay Area scene that was, if not cyberpunk, at least punk. He started squatting in abandoned buildings with friends, eventually moving into an old postal service warehouse. He began bumming rides to political protests around the country and uploading to the web free audiobooks of himself reading anarchist theorists like Emma Goldman.
He took up hitchhiking, then he upgraded his wanderlust to hopping freight trains. And in 2003 he spontaneously decided to learn to sail. He spent a few hundred dollars—all the money he had—on a beat-up 27-foot Catalina and rashly set out alone from San Francisco’s harbor for Mexico, teaching himself by trial and error along the way. The next year, Marlinspike filmed his own DIY sailing documentary, called Hold Fast. It follows his journey with three friends as they navigate a rehabilitated, leaky sloop called the Pestilence from Florida to the Bahamas, finally ditching the boat in the Dominican Republic.
Even today, Marlinspike describes those reckless adventures in the itinerant underground as a kind of peak in his life. “Looking back, I and everyone I knew was looking for that secret world hidden in this one,” he says, repeating the same phrase he’d used to describe the early Internet. “I think we were already there.”
If anything can explain Marlinspike’s impulse for privacy, it may be that time spent off society’s grid: a set of experiences that have driven him to protect a less observed way of life. “I think he likes the idea that there is an unknown,” says Trevor Perrin, a security engineer who helped Marlinspike design Signal’s core protocol. “That the world is not a completely surveilled thing.”
THE KEYS TO PRIVACY
Beneath its ultrasimple interface, Moxie Marlinspike’s crypto protocol hides a Rube Goldberg machine of automated moving parts. Here’s how it works.
1. When Alice installs an app that uses Marlinspike’s protocol, it generates pairs of numeric sequences known as keys. With each pair, one sequence, known as a public key, will be sent to the app’s server and shared with her contacts. The other, called a private key, is stored on Alice’s phone and is never shared with anyone. The first pair of keys serves as an identity for Alice and never changes. Subsequent pairs will be generated with each message or voice call, and these temporary keys won’t be saved.
2. When Alice contacts her friend Bob, the app combines their public and private keys—both their identity keys and the temporary ones generated for a new message or voice call—to create a secret shared key. The shared key is then used to encrypt and decrypt their messages or calls.
3. The secret shared key changes with each message or call, and old shared keys aren’t stored. That means an eavesdropper who is recording their messages can’t decrypt their older communications even if that spy hacks one of their devices. (Alice and Bob should also periodically delete their message history.)
4. To make sure she’s communicating with Bob and not an impostor, Alice can check Bob’s fingerprint, a shortened version of his public identity key. If that key changes, either because someone is impersonating Bob in a so-called man-in-the-middle attack or simply because he reinstalled the app, Alice’s app will display a warning.
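To make the key-agreement idea in step 2 concrete, here is a minimal sketch in Python using the third-party cryptography library. It is not the Signal protocol itself (Signal layers X3DH and the Double Ratchet on top of this idea, folding in fresh temporary keys for every message); it only illustrates how two people can each combine their own private key with the other’s public key and arrive at the same shared secret without ever sending that secret over the network.

```python
# A toy illustration of Diffie-Hellman-style key agreement (not Signal itself).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Step 1: Alice and Bob each generate a key pair; only the public halves are exchanged.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Step 2: each side combines its own private key with the other's public key.
alice_shared = alice_private.exchange(bob_public)
bob_shared = bob_private.exchange(alice_public)
assert alice_shared == bob_shared  # same secret on both ends, never transmitted

# Derive a symmetric key for encrypting a message or call from the shared secret.
message_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"toy example"
).derive(alice_shared)
print(message_key.hex())
```

In the real protocol, because new temporary key pairs are mixed in for each message and old shared keys are discarded, an attacker who later compromises a phone still cannot decrypt previously recorded traffic (step 3 above).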
THROUGH THOSE YEARS, Marlinspike took for granted that authority was the enemy. He describes harbor patrols and train yard guards who harassed him and his fellow hobo voyagers. Cops evicted him from squats, hassled him in the towns he and his friends passed through, and impounded their car on what seemed to be thin pretenses. But merely going to demonstrations never felt like the right way to challenge the world’s power structures.
Instead, around 2007 he turned his political interests back to the digital world, where he’d seen a slow shift toward post–Patriot Act surveillance. “When I was young, there was something fun about the insecurity of the Internet,” he says, with its bounty of hackable flaws available to benign pranksters. “Now Internet insecurity is used by people I don’t like against people I do: the government against the people.”
In 2008, Marlinspike settled in a decrepit brick mansion in Pittsburgh and started churning out a torrent of security software. The next year he appeared for the first time at the Black Hat security conference to demonstrate a program he called SSLstrip, which exposed a critical flaw in web encryption. In 2010 he debuted GoogleSharing, a Firefox plugin that let anyone use Google services anonymously.
That year, with the growth of smartphones, Marlinspike saw his biggest opportunity yet: to secure mobile communications. Helped by a friend who was getting a robotics PhD at Carnegie Mellon, he launched Whisper Systems, along with a pair of Android apps: TextSecure, to encrypt text messages, and RedPhone, to protect voice calls. Anti-authoritarian ideals were built in from the beginning; when the Arab Spring exploded across North Africa, Whisper Systems was ready with an Arabic version to aid protesters.
Alone in the dark, Marlinspike clung to the hull and realized, with slow and lonely certainty, that he was very likely going to die.
Marlinspike dreamed of bringing his encryption tools to millions of people, an ambition that required some sort of business model to fund them. He moved back to San Francisco to promote Whisper Systems as a for-profit startup. The company had barely gotten off the ground when Twitter approached him with a buyout offer, hoping to use his expertise to fix the shambolic security that had led to repeated hacks of celebrity and journalist accounts. The terms of the resulting deal were never made public. Marlinspike describes it only as “more money than I’d ever encountered before. But that’s a low bar.”
Marlinspike became the director of product security at Twitter. A coworker remembers that his expertise was “revered” within the company. But his greater goal was to alter the platform so that it didn’t keep logs of users’ IP addresses, which would make it impossible for authorities to demand someone’s identity, as they’d done with one Occupy Wall Street protester in 2012.
That project clashed with the priorities of executives, a coworker says. “Moxie couldn’t care less if Twitter made a lot of money,” the former colleague says. “He was more interested in protecting users.” Meanwhile, his contract stipulated that he’d have to work for four years before cashing out the stock he’d been paid for his startup. Marlinspike’s cypherpunk apotheosis would have to wait.
ONE FALL EVENING after work, Marlinspike and a friend made a simple plan to sail a 15-foot catamaran out 600 feet into the San Francisco Bay, where they’d drop anchor and row back in a smaller boat, leaving the sailboat to wait for their next adventure. (Anarchist sailors don’t like to pay dockage fees.) Marlinspike headed out into the bay on the catamaran with his friend following in a rowboat.
Only after Marlinspike had passed the pier did he realize the wind was blowing at a treacherous 30 miles an hour. He decided to turn back but discovered that he’d misrigged the craft and had to fix his mistake. As the sun sank toward the horizon, he shouted to his friend that they should give up and return to shore, and the friend rowed back to safety.
Then, without warning, the wind gusted. The catamaran flipped, throwing Marlinspike into the ice-cold water. “The suddenness of it was unbelievable, as if I was on a tiny model made of paper which someone had simply flicked with their finger,” he would later write in a blog post about the experience.
Soon the boat was fully upside down, pinned in place by the wind. Marlinspike tried to swim for shore. But the pier was too far away, the waves too strong, and he could feel his body succumbing to hypothermia, blackness creeping into the edges of his vision. He headed back to the overturned boat. Alone now in the dark, he clung to the hull, took stock of the last hour’s events, and realized, with slow and lonely certainty, that he was very likely going to die.
When a tugboat finally chanced upon his soaked and frozen form he was nearly unconscious and had to be towed up with a rope. When he arrived at the hospital, Marlinspike says, the nurses told him his temperature was so low their digital thermometers couldn’t register it. As he recovered over the next days, he had the sort of realization that sometimes results from a near-death experience. “It definitely sharpened my focus,” he says of the incident. “It made me question what I was doing with my life.”
Marlinspike’s time at Twitter had given him an ambitious sense of scale: He was determined to encrypt core chunks of the Internet.
A normal person might have quit sailing. Instead, Marlinspike quit Twitter. A year and a day after he had started, he walked away from over $1 million in company stock.
Marlinspike quickly picked up where he’d left off. In early 2013 he relaunched his startup as an open source project called Open Whisper Systems. To fund it, he turned to Dan Meredith, director of the Open Technology Fund, a group supported by the Broadcasting Board of Governors, best known for running Radio Free Europe. Meredith had long admired Marlinspike’s encryption apps. As a former security tech at Al Jazeera, he had relied on them to protect reporters and sources during the Arab Spring. “They were what our most sensitive sources used,” Meredith says. “I knew Moxie could do this, and we had the money to make it possible.” The OTF gave Open Whisper Systems around $500,000 in its first year and in total has funneled close to $2.3 million to the group.
With that funding and more from wealthy donors that Marlinspike declines to name, he began recruiting developers and hosting them at periodic retreats in Hawaii, where they’d alternate surfing and coding. In quick succession, Open Whisper Systems released Signal and then versions for Android and the Chrome browser. (Open Whisper Systems has since integrated changes from dozens of open source contributors but still uses the same cryptographic skeleton laid out by Marlinspike and Trevor Perrin in 2013.)
Marlinspike’s time at Twitter had given him an ambitious sense of scale: He was determined to encrypt core chunks of the Internet, not just its fringes. By chance, he met a WhatsApp engineer at a family reunion his girlfriend at the time threw at his house. Through that connection, Marlinspike wangled a meeting with WhatsApp’s cofounder Brian Acton. Later, Marlinspike met with the company’s other cofounder, Jan Koum, who had grown up in Soviet Ukraine under the constant threat of KGB eavesdropping.
Both men were almost immediately interested in using Marlinspike’s protocols to protect WhatsApp’s international users, particularly its massive user bases in privacy-loving Germany and surveillance regimes in the Middle East and South America. “We were aligned pretty early,” Acton says. “When we got past the hairstyle, we were like, ‘Let’s get down to business.’”
IN A HOTEL ROOM above San Francisco’s Soma district a few hours after his RSA panel, Marlinspike pulls out a slim laptop and enters his password to decrypt its hard drive. Or rather, attempts to; the string of characters is so long and complex that he mistypes it three times and, with a slightly embarrassed grin, has to reboot the computer. Finally he succeeds and opens a video file. It’s a rough cut of an ad for Signal he’s hoping to spread online, a montage of footage of the Russian punk protest band Pussy Riot, Daniel Ellsberg, Jesse Owens, Hong Kong’s pro-democracy Umbrella protesters, and Martin Luther King Jr. “They tell us to stay quiet and follow the rules,” a rough voice intones over the images. “We believe in the power of your words … Speak up, send a message.”
Marlinspike’s intention with the spot, whose script he wrote, was to create a “Nike ad for privacy,” he says. “Nike has a boring product. They don’t talk about the shoes. They celebrate great athletes. We’re trying to do the same thing, celebrating people with a contestational relationship to power. Activists, whistle-blowers, journalists, artists.”
“The big win is when a billion people are using WhatsApp and don’t even know it’s encrypted. I think we’ve already won the future.”
Today, those people include Edward Snowden, who has written that he uses Signal “every day.” (Marlinspike recently visited the exiled whistle-blower in Moscow.) Laura Poitras, the Pulitzer- and Oscar-winning recipient of Snowden’s NSA leaks, recommends it to documentary filmmakers and journalists. Women’s rights activists in Latin America who help women find abortions use Signal. So do North Korean defectors evading Kim Jong-un’s spies. Attorneys at the National Lawyers Guild use it to speak about clients. Members of Hands Up United, one of the groups leading the Black Lives Matter movement in Ferguson, Missouri, two years ago, started using Signal after noticing police cars following them home or parked outside of their meetings and strange tones and dropped calls on their cell phones. (The Intercept revealed last summer that the Department of Homeland Security monitored the protesters.) “Signal gave us so much confidence to continue our work,” says Hands Up United organizer Idalin Bobé.
But these are only the early adopters in Marlinspike’s master plan. He outlines his endgame: In the past, government-friendly phone companies have practically partnered with law enforcement to make wiretaps easy. Now people are increasingly shifting to what he calls overlay services—apps like WhatsApp and Facebook Messenger—to communicate. And that switch offers a chance to start fresh, with a communications infrastructure that can be built to resist surveillance. “The big win for us is when a billion people are using WhatsApp and they don’t even know it’s encrypted,” Marlinspike says. “At this point, I think we’ve already won the future.”
THE NEXT DAY, Marlinspike is rushing over to Open Whisper Systems headquarters, where he’s late for a meeting. As I speed-walk to keep up with his long legs, he grouses about the day-to-day of running a software project: the bug reports and constant tweaks to keep up with operating systems’ improvements, the deadening hours of sitting in front of a computer.
Marlinspike surprises me by admitting that he looks forward to the moment when he can quit. “Someday Signal will fade away,” he states unsentimentally. Instead, he says, Open Whisper Systems’ legacy will be the changes Signal will have inspired in better-funded, for-profit communication apps.
That time may not be so far off. “I don’t really want to do this with the rest of my life,” Marlinspike says. “Eventually, you have to declare victory.”
But cypherpunks like Marlinspike—let’s be honest—haven’t yet won the crypto war. In fact, the war may be unwinnable by either side. If the rise of end-to-end encrypted messaging enables the sort of benign law breaking Marlinspike has preached, sooner or later it will also shield some indefensible crimes. And that means every technological move toward privacy will be answered with a legal one aimed at shifting the equilibrium back toward surveillance: If law enforcement continues to be foiled by uncrackable encryption, it will come back with an order for “technical assistance,” demanding companies weaken their security measures and rewrite their code to help the cops, as the FBI demanded of Apple. Some form of crypto backdoor might even be built in secret. And Congress still threatens to advance legislation that could ban user-controlled encryption outright.
But these legal and political battles may not be Marlinspike’s to fight. “He definitely romanticizes being an amateur,” says one particularly frank friend. “He likes to give up once he’s an expert.” Marlinspike, she says, seeks the “zero point, when you have nothing to lose, when you have no property, no lover, nothing to hold you back.”
Cypherpunks like Marlinspike haven’t yet won the crypto war. In fact, the war may be unwinnable by either side.
I’m reminded of that underlying restlessness on the last evening I spend with Marlinspike, at a Sunday night screening of Hold Fast, hosted by a sailing club at the Berkeley Marina. As his doc plays to a crowd of a few dozen people, we sit in the back next to a wood-burning stove, with a spring storm churning the bay outside the window behind us.
Early in the film, the narration goes off on a tangent, telling the story of Bernard Moitessier, whom Marlinspike describes reverentially as a sailing mystic. In 1969, Moitessier was winning the Golden Globe, a solo, globe-circling yacht race. Moitessier, a monklike eccentric, didn’t even carry a radio, instead using a slingshot to hurl film canisters containing messages to nearby ships. Just as Moitessier was set to finish ahead of his competitors in Plymouth, England, he shot off a message rejecting the competition and explaining that he would rather simply keep sailing for the Pacific Islands. “I am continuing nonstop because I am happy at sea,” the note read, “and perhaps because I want to save my soul.”
When the screening ends, the lights come up and Marlinspike takes questions. A middle-aged woman asks him what he’s doing now, nine years after the film’s release. Along with plenty of other people in this audience, she knows him only as Moxie Marlinspike the rogue sailor, not as a cryptographer.
Marlinspike takes a second to think, as if he’s never actually considered the question before. “I don’t know,” he says finally, sighing with what sounds like sincere uncertainty. “Maybe I should go back to sailing cheap.”
The crowd laughs at Marlinspike’s show of self-effacing confusion. But he seems to mean what he says. And over their heads, out the window, past the bay, lies the Pacific Ocean: dark, unknown, and inviting.
I met Edward Snowden in a hotel in central Moscow, just blocks away from Red Square. It was the first time we’d met in person; he first emailed me nearly two years earlier, and we eventually created an encrypted channel to journalists Laura Poitras and Glenn Greenwald, to whom Snowden would disclose overreaching mass surveillance by the National Security Agency and its British equivalent, GCHQ.
This time around, Snowden’s anonymity was gone; the world knew who he was, much of what he’d leaked, and that he’d been living in exile in Moscow, where he’s been stranded ever since the State Department canceled his passport while he was en route to Latin America. His situation was more stable, the threats against him a bit easier to predict. So I approached my 2015 Snowden meeting with less paranoia than was warranted in 2013, and with a little more attention to physical security, since this time our communications would not be confined to the internet.
Our first meeting would be in the hotel lobby, and I arrived with all my important electronic gear in tow. I had powered down my smartphone and placed it in a “faraday bag” designed to block all radio emissions. This, in turn, was tucked inside my backpack next to my laptop (which I configured and hardened specifically for traveling to Russia), also powered off. Both electronic devices stored their data in encrypted form, but disk encryption isn’t perfect, and leaving these in my hotel room seemed like an invitation to tampering.
Most of the lobby seats were taken by well-dressed Russians sipping cocktails. I planted myself on an empty couch off in a nook hidden from most of the action and from the only security camera I could spot. Snowden had told me I’d have to wait awhile before he met me, and for a moment I wondered if I was being watched: A bearded man wearing glasses and a trench coat stood a few feet from me, apparently doing nothing aside from staring at a stained-glass window. Later he shifted from one side of my couch to the other, walking away just after I made eye contact.
Eventually, Snowden appeared. We smiled, said it was good to see each other, and then walked up the spiral staircase near the elevator to the room where I would be conducting the interview before we really started talking.
As it turned out, I didn’t need to be quite so cautious. Later, he told me to feel free to take out my phone so I could coordinate a rendezvous with some mutual friends who were in town. Operational security, or “opsec,” was a recurring theme across our several chats in Moscow.
In most of Snowden’s interviews he speaks broadly about the importance of privacy, surveillance reform, and encryption. But he rarely has the opportunity to delve into the details and help people of all technical backgrounds understand opsec and begin to strengthen their own security and privacy. He and I mutually agreed that our interview would focus more on nerdy computer talk and less on politics, because we’re both nerds and not many of his interviews get to be like that. I believe he wanted to use our chats to promote cool projects and to educate people. For example, Snowden had mentioned prior to our in-person meeting that he had tweeted about the Tor anonymity system and was surprised by how many people thought it was some big government trap. He wanted to fix those kinds of misconceptions.
Our interview, conducted over room-service hamburgers, started with the basics.
Micah Lee: What are some operational security practices you think everyone should adopt? Just useful stuff for average people.
Edward Snowden: [Opsec] is important even if you’re not worried about the NSA. Because when you think about who the victims of surveillance are, on a day-to-day basis, you’re thinking about people who are in abusive spousal relationships, you’re thinking about people who are concerned about stalkers, you’re thinking about children who are concerned about their parents overhearing things. It’s to reclaim a level of privacy.
The first step that anyone could take is to encrypt their phone calls and their text messages. You can do that through the smartphone app Signal, by Open Whisper Systems. It’s free, and you can just download it immediately. And anybody you’re talking to now, their communications, if it’s intercepted, can’t be read by adversaries. [Signal is available for iOS and Android, and, unlike a lot of security tools, is very easy to use.]
You should encrypt your hard disk, so that if your computer is stolen the information isn’t obtainable to an adversary — pictures, where you live, where you work, where your kids are, where you go to school. [I’ve written a guide to encrypting your disk on Windows, Mac, and Linux.]
Use a password manager. One of the main things that gets people’s private information exposed, not necessarily to the most powerful adversaries, but to the most common ones, are data dumps. Your credentials may be revealed because some service you stopped using in 2007 gets hacked, and your password that you were using for that one site also works for your Gmail account. A password manager allows you to create unique passwords for every site that are unbreakable, but you don’t have the burden of memorizing them. [The password manager KeePassX is free, open source, cross-platform, and never stores anything in the cloud.]
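As a rough illustration of what a password manager does when it “creates unique passwords for every site,” here is a small Python sketch using the standard library’s secrets module; the length and character set are arbitrary choices for the example, not a recommendation from Snowden or any particular manager.

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    """Generate a high-entropy password from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, so a breach at one service exposes nothing else.
print(random_password())
```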
The other thing there is two-factor authentication. The value of this is if someone does steal your password, or it’s left or exposed somewhere … [two-factor authentication] allows the provider to send you a secondary means of authentication — a text message or something like that. [If you enable two-factor authentication, an attacker needs both your password as the first factor and a physical device, like your phone, as your second factor, to login to your account. Gmail, Facebook, Twitter, Dropbox, GitHub, Battle.net, and tons of other services all support two-factor authentication.]
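The text-message codes Snowden mentions are one kind of second factor; authenticator apps use another, the time-based one-time password (TOTP) standardized in RFC 6238. The sketch below, using only Python’s standard library, shows roughly how a provider and your phone can compute the same six-digit code from a shared secret and the current time. The secret shown is a placeholder for illustration, not a real account’s.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238), the kind authenticator apps display."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval          # both sides agree on the time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, for illustration only
```

Because the code depends on both the secret stored on your device and the current time window, a stolen password alone is not enough to log in.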
We should not live lives as if we are electronically naked.
We should armor ourselves using systems we can rely on every day. This doesn’t need to be an extraordinary lifestyle change. It doesn’t have to be something that is disruptive. It should be invisible, it should be atmospheric, it should be something that happens painlessly, effortlessly. This is why I like apps like Signal, because they’re low friction. It doesn’t require you to re-order your life. It doesn’t require you to change your method of communications. You can use it right now to talk to your friends.
Micah Lee and Edward Snowden, Moscow, Russia.
Photo: Sue Gardner
Lee: What do you think about Tor? Do you think that everyone should be familiar with it, or do you think that it’s only a use-it-if-you-need-it thing?
Snowden: I think Tor is the most important privacy-enhancing technology project being used today. I use Tor personally all the time. We know it works from at least one anecdotal case that’s fairly familiar to most people at this point. That’s not to say that Tor is bulletproof. What Tor does is it provides a measure of security and allows you to disassociate your physical location. …
But the basic idea, the concept of Tor that is so valuable, is that it’s run by volunteers. Anyone can create a new node on the network, whether it’s an entry node, a middle router, or an exit point, on the basis of their willingness to accept some risk. The voluntary nature of this network means that it is survivable, it’s resistant, it’s flexible.
[Tor Browser is a great way to selectively use Tor to look something up and not leave a trace that you did it. It can also help bypass censorship when you’re on a network where certain sites are blocked. If you want to get more involved, you can volunteer to run your own Tor node, as I do, and support the diversity of the Tor network.]
Lee: So that is all stuff that everybody should be doing. What about people who have exceptional threat models, like future intelligence-community whistleblowers, and other people who have nation-state adversaries? Maybe journalists, in some cases, or activists, or people like that?
Snowden: So the first answer is that you can’t learn this from a single article. The needs of every individual in a high-risk environment are different. And the capabilities of the adversary are constantly improving. The tooling changes as well.
What really matters is to be conscious of the principles of compromise. How can the adversary, in general, gain access to information that is sensitive to you? What kinds of things do you need to protect? Because of course you don’t need to hide everything from the adversary. You don’t need to live a paranoid life, off the grid, in hiding, in the woods in Montana.
What we do need to protect are the facts of our activities, our beliefs, and our lives that could be used against us in manners that are contrary to our interests. So when we think about this for whistleblowers, for example, if you witnessed some kind of wrongdoing and you need to reveal this information, and you believe there are people that want to interfere with that, you need to think about how to compartmentalize that.
Tell no one who doesn’t need to know. [Lindsay Mills, Snowden’s girlfriend of several years, didn’t know that he had been collecting documents to leak to journalists until she heard about it on the news, like everyone else.]
When we talk about whistleblowers and what to do, you want to think about tools for protecting your identity, protecting the existence of the relationship from any type of conventional communication system. You want to use something like SecureDrop, over the Tor network, so there is no connection between the computer that you are using at the time — preferably with a non-persistent operating system like Tails, so you’ve left no forensic trace on the machine you’re using, which hopefully is a disposable machine that you can get rid of afterward, that can’t be found in a raid, that can’t be analyzed or anything like that — so that the only outcome of your operational activities are the stories reported by the journalists. [SecureDrop is a whistleblower submission system. Here is a guide to using The Intercept’s SecureDrop server as safely as possible.]
And this is to be sure that whoever has been engaging in this wrongdoing cannot distract from the controversy by pointing to your physical identity. Instead they have to deal with the facts of the controversy rather than the actors that are involved in it.
Lee: What about for people who are, like, in a repressive regime and are trying to …
Snowden: Use Tor.
Lee: Use Tor?
Snowden: If you’re not using Tor you’re doing it wrong. Now, there is a counterpoint here where the use of privacy-enhancing technologies in certain areas can actually single you out for additional surveillance through the exercise of repressive measures. This is why it’s so critical for developers who are working on security-enhancing tools to not make their protocols stand out.
Lee: So you mentioned that what you want to spread are the principles of operational security. And you mentioned some of them, like need-to-know, compartmentalization. Can you talk more about what are the principles of operating securely?
Snowden: Almost every principle of operating security is to think about vulnerability. Think about what the risks of compromise are and how to mitigate them. In every step, in every action, in every point involved, in every point of decision, you have to stop and reflect and think, “What would be the impact if my adversary were aware of my activities?” If that impact is something that’s not survivable, either you have to change or refrain from that activity, you have to mitigate that through some kind of tools or system to protect the information and reduce the risk of compromise, or ultimately, you have to accept the risk of discovery and have a plan to mitigate the response. Because sometimes you can’t always keep something secret, but you can plan your response.
Lee: Are there principles of operational security that you think would be applicable to everyday life?
Snowden: Yes, that’s selective sharing. Everybody doesn’t need to know everything about us. Your friend doesn’t need to know what pharmacy you go to. Facebook doesn’t need to know your password security questions. You don’t need to have your mother’s maiden name on your Facebook page, if that’s what you use for recovering your password on Gmail. The idea here is that sharing is OK, but it should always be voluntary. It should be thoughtful, it should be things that are mutually beneficial to people that you’re sharing with, and these aren’t things that are simply taken from you.
If you interact with the internet … the typical methods of communication today betray you silently, quietly, invisibly, at every click. At every page that you land on, information is being stolen. It’s being collected, intercepted, analyzed, and stored by governments, foreign and domestic, and by companies. You can reduce this by taking a few key steps. Basic things. If information is being collected about you, make sure it’s being done in a voluntary way.
For example, if you use browser plugins like HTTPS Everywhere by EFF, you can try to enforce secure encrypted communications so your data is not being passed in transit electronically naked.
Lee: Do you think people should use adblock software?
Snowden: Yes.
Everybody should be running adblock software, if only from a safety perspective …
We’ve seen internet providers like Comcast, AT&T, or whoever it is, insert their own ads into your plaintext http connections. … As long as service providers are serving ads with active content that require the use of Javascript to display, that have some kind of active content like Flash embedded in it, anything that can be a vector for attack in your web browser — you should be actively trying to block these. Because if the service provider is not working to protect the sanctity of the relationship between reader and publisher, you have not just a right but a duty to take every effort to protect yourself in response.
Lee: Nice. So there’s a lot of esoteric attacks that you hear about in the media. There’s disk encryption attacks like evil maid attacks, and cold-boot attacks. There’s all sorts of firmware attacks. There’s BadUSB and BadBIOS, and baseband attacks on cellphones. All of these are probably unlikely to happen to many people very often. Is this something people should be concerned about? How do you go about deciding if you personally should be concerned about this sort of attack and try to defend against it?
Snowden: It all comes down to personal evaluation of your personal threat model, right? That is the bottom line of what operational security is about. You have to assess the risk of compromise. On the basis of that determine how much effort needs to be invested into mitigating that risk.
Now in the case of cold-boot attacks and things like that, there are many things you can do. For example, cold-boot attacks can be defeated by never leaving your machine unattended. This is something that is not important for the vast majority of users, because most people don’t need to worry about someone sneaking in when their machine is unattended. … There is the evil maid attack, which can be protected against by keeping your bootloader physically on you, by wearing it as a necklace, for example, on an external USB device.
You’ve got BadBIOS. You can protect against this by dumping your BIOS, hashing it (hopefully not with SHA1 anymore), and simply comparing your BIOS. In theory, if it’s owned badly enough you need to do this externally. You need to dump it using a JTAG or some kind of reader to make sure that it actually matches, if you don’t trust your operating system.
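The firmware check Snowden describes boils down to comparing a hash of the BIOS you dump against a known-good copy. A minimal sketch, assuming you have already read the flash chip out with an external tool such as a SPI or JTAG reader; the file names here are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a firmware dump in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Dumps obtained externally, so a compromised operating system can't lie about them.
current = sha256_of("bios_dump.bin")
known_good = sha256_of("bios_known_good.bin")
print("MATCH" if current == known_good else "MISMATCH: firmware may have been tampered with")
```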
There’s a counter to every attack. The idea is you can play the cat-and-mouse game forever.
You can go to any depth, you can drive yourself crazy thinking about bugs in the walls and cameras in the ceiling. Or you can think about what are the most realistic threats in your current situation? And on that basis take some activity to mitigate the most realistic threats. In that case, for most people, that’s going to be very simple things. That’s going to be using a safe browser. That’s going to be disabling scripts and active content, ideally using a virtual machine or some other form of sandboxed browser, where if there’s a compromise it’s not persistent. [I recently wrote about how to set up virtual machines.] And making sure that your regular day-to-day communications are being selectively shared through encrypted means.
Lee: What sort of security tools are you currently excited about? What are you finding interesting?
Snowden: I’ll just namecheck Qubes here, just because it’s interesting. I’m really excited about Qubes because the idea of VM-separating machines, requiring expensive, costly sandbox escapes to get persistence on a machine, is a big step up in terms of burdening the attacker with greater resource and sophistication requirements for maintaining a compromise. I’d love to see them continue this project. I’d love to see them make it more accessible and much more secure. [You can read more about how to use Qubes here and here.]
Something that we haven’t seen that we need to see is a greater hardening of the overall kernels of every operating system through things like grsecurity [a set of patches to improve Linux security], but unfortunately there’s a big usability gap between the capabilities that are out there, that are possible, and what is attainable for the average user.
Lee: People use smartphones a lot. What do you think about using a smartphone for secure communications?
Snowden: Something that people forget about cellphones in general, of any type, is that you’re leaving a permanent record of all of your physical locations as you move around. … The problem with cellphones is they’re basically always talking about you, even when you’re not using them. That’s not to say that everyone should burn their cellphones … but you have to think about the context for your usage. Are you carrying a device that, by virtue of simply having it on your person, places you in a historic record in a place that you don’t want to be associated with, even if it’s something as simple as your place of worship?
Lee: There are tons of software developers out there that would love to figure out how to end mass surveillance. What should they be doing with their time?
Snowden: Mixed routing is one of the most important things that we need in terms of regular infrastructure because we haven’t solved the problem of how to divorce the content of communication from the fact that it has occurred at all. To have real privacy you have to have both. Not just what you talked to your mother about, but the fact that you talked to your mother at all. …
The problem with communications today is that the internet service provider knows exactly who you are. They know exactly where you live. They know what your credit card number is, when you last paid, how much it was.
You should be able to buy a pile of internet the same way you buy a bottle of water.
We need means of engaging in private connections to the internet. We need ways of engaging in private communications. We need mechanisms affording for private associations. And ultimately, we need ways to engage in private payment and shipping, which are the basis of trade.
These are research questions that need to be resolved. We need to find a way to protect the rights that we ourselves inherited for the next generation. If we don’t, today we’re standing at a fork in the road that divides between an open society and a controlled system. If we don’t do anything about this, people will look back at this moment and they’ll say, why did you let that happen? Do you want to live in a quantified world? Where not only is the content of every conversation, not only are the movements of every person known, but even the location of all the objects are known? Where the book that you lent to a friend leaves a record that they have read it? These things might be useful capabilities that provide value to society, but that’s only going to be a net good if we’re able to mitigate the impact of our activity, of our sharing, of our openness.
Lee: Ideally, governments around the world shouldn’t be spying on everybody. But that’s not really the case, so where do you think — what do you think the way to solve this problem is? Do you think it’s all just encrypting everything, or do you think that trying to get Congress to pass new laws and trying to do policy stuff is equally as important? Where do you think the balance is between tech and policy to combat mass surveillance? And what do you think that Congress should do, or that people should be urging Congress to do?
Snowden: I think reform comes with many faces. There’s legal reform, there’s statutory reform more generally, there are the products and outcomes of judicial decisions. … In the United States it has been held that these programs of mass surveillance, which were implemented secretly without the knowledge or the consent of the public, violate our rights, that they went too far, that they should end. And they have been modified or changed as a result. But there are many other programs, and many other countries, where these reforms have not yet had the impact that is so vital to free society. And in these contexts, in these situations, I believe that we do — as a community, as an open society, whether we’re talking about ordinary citizens or the technological community specifically — we have to look for ways of enforcing human rights through any means.
That can be through technology, that can be through politics, that can be through voting, that can be through behavior. But technology is, of all of these things, perhaps the quickest and most promising means through which we can respond to the greatest violations of human rights in a manner that is not dependent on every single legislative body on the planet to reform itself at the same time, which is probably somewhat optimistic to hope for. We would be instead able to create systems … that enforce and guarantee the rights that are necessary to maintain a free and open society.
Lee: On a different note — people said I should ask about Twitter — how long have you had a Twitter account for?
Snowden: Two weeks.
Lee: How many followers do you have?
Snowden: A million and a half, I think.
Lee: That’s a lot of followers. How are you liking being a Twitter user so far?
Snowden: I’m trying very hard not to mess up.
Lee: You’ve been tweeting a lot lately, including in the middle of the night Moscow time.
Snowden: Ha. I make no secret about the fact that I live on Eastern Standard Time. The majority of my work and associations, my political activism, still occurs in my home, in the United States. So it only really make sense that I work on the same hours.
Lee: Do you feel like Twitter is sucking away all your time? I mean I kind of have Twitter open all day long and I sometimes get sucked into flame wars. How is it affecting you?
Snowden: There were a few days when people kept tweeting cats for almost an entire day. And I know I shouldn’t, I have a lot of work to do, but I just couldn’t stop looking at them.
Lee: The real question is, what was your Twitter handle before this? Because you were obviously on Twitter. You know all the ins and outs.
Snowden: I can neither confirm nor deny the existence of other Twitter accounts.