Category archive: Privacy

How the NSA identifies you just by starting your Windows PC

Thanks to the fine research paper found here, http://www.icir.org/vern/papers/trackers-pets16.pdf, you are easily identified when you simply start your Windows PC and log onto the internet, without any user interaction required.

You are identified by either HTTP identifiers or non-HTTP identifiers.

HTTP Identifiers

Application-specific: The first category is identifiers sent by applications other than browsers. For example, Skype sends a user identifier uhash in a URL of the format http://ui.skype.com/ui/2/2.1.0.81/en/getlatestversion?ver=2.1.0.81&uhash=…. The parameter uhash is a hash of the user ID, their password, and a salt, and remains constant for a given Skype user [12]. uhash can very well act as an identifier for a user; a monitor who observes the same value from two different clients/networks can infer that it reflects the same user on both. Another example in this category is a Dropbox user_id sent as a URL parameter. We discovered that since the Dropbox application regularly syncs with its server, it sends out this identifier (surprisingly, every minute) without requiring any user action.
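
To illustrate why a constant salted hash works as a tracking identifier, here is a minimal Python sketch. The hash construction and the monitor's bookkeeping below are hypothetical (the paper only states that uhash is a hash of the user ID, password, and a salt); the point is that any stable value seen repeatedly across networks links those sessions to one user.

import hashlib
from urllib.parse import urlparse, parse_qs

def make_uhash(user_id, password, salt):
    # Hypothetical construction; the paper only says uhash is a hash of
    # the user ID, the password, and a salt. Any stable digest behaves
    # the same way for tracking purposes.
    return hashlib.sha256((user_id + password + salt).encode()).hexdigest()

def extract_uhash(url):
    # Pull the uhash parameter out of an observed request URL.
    return parse_qs(urlparse(url).query).get("uhash", [None])[0]

seen = {}  # uhash value -> set of client IPs that sent it

def observe(client_ip, url):
    # A passive monitor: the same uhash seen from two networks links
    # both sessions to one user.
    uhash = extract_uhash(url)
    if uhash is None:
        return
    seen.setdefault(uhash, set()).add(client_ip)
    if len(seen[uhash]) > 1:
        print("same user observed at:", sorted(seen[uhash]))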

Mobile devices: Our methodology enabled us to discover that the Apple weather app sends IMEI and IMSI numbers in POST requests to iphone-wu.apple.com. We can recognize these as such because the parameter names in the context clearly name them as IMEI and IMSI; the values also match the expected formats for these identifiers. Other apps also send a number of device identifiers, such as phone make, advertising ID, SHA-1 hashes of the serial number, MAC address, and UDID (unique device identifier), across various domains such as s.amazon-adsystem.com, jupiter.apads.com and ads.mp.mydas.mobi. The iOS and Android mobile SDKs provide access to these identifiers.
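
To make the "expected format" check concrete, here is a small, hypothetical sketch of how a traffic monitor might recognize an IMEI-shaped parameter value: an IMEI is 15 decimal digits whose final digit is a Luhn check digit. The paper does not publish its matching code; this only illustrates the idea.

import re

def luhn_ok(digits):
    # Standard Luhn checksum: double every second digit from the right.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_imei(value):
    # An IMEI is 15 decimal digits ending in a Luhn check digit.
    return bool(re.fullmatch(r"\d{15}", value)) and luhn_ok(value)

print(looks_like_imei("490154203237518"))  # True: a commonly cited example IMEI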


NON-HTTP Identifiers

Device identifiers sent by iOS/OSX: We found instances of device identifiers sent on port 5223. Apple devices use this port to maintain a persistent connection with Apple’s Push Notification (APN) service, through which they receive push notifications for installed apps.

An app-provider sends to an APN server the push notification along with the device token of the recipient device. The APN server in turn forwards the notification to the device, identifying it via the device token [2]. This device token is an opaque device identifier, which the APN service gives to the device when it first connects. The device sends this token (in clear text) to the APN server on every connection, and to each app-provider upon app installation. This identifier enabled us to identify 68 clients in our dataset as Apple devices. The devices sent their device token to a total of 407 IP addresses in two networks belonging to Apple (17.172.232.0/24, 17.149.0.0/16).
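
As an illustration of how such traffic identifies Apple devices, here is a hedged sketch that flags flow records matching the paper's observation: persistent connections to port 5223 inside Apple's address space. The flow-record format below is made up for the example.

import ipaddress

# Apple netblocks named in the paper, written as full CIDR prefixes.
APPLE_NETS = [ipaddress.ip_network("17.172.232.0/24"),
              ipaddress.ip_network("17.149.0.0/16")]
APN_PORT = 5223  # Apple Push Notification service

def is_apn_flow(dst_ip, dst_port):
    addr = ipaddress.ip_address(dst_ip)
    return dst_port == APN_PORT and any(addr in net for net in APPLE_NETS)

# Hypothetical flow records: (source IP, destination IP, destination port).
flows = [("10.0.0.5", "17.149.12.34", 5223),
         ("10.0.0.7", "93.184.216.34", 443)]
apple_clients = {src for src, dst, port in flows if is_apn_flow(dst, port)}
print(apple_clients)  # {'10.0.0.5'}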


The work http://www.icir.org/vern/papers/trackers-pets16.pdf was supported by the Intel Science and Technology Center for Secure Computing, the U.S. Army Research Office and by the National Science Foundation.

A copy of the publication is available here: trackers-pets16

MEET MOXIE MARLINSPIKE, THE ANARCHIST BRINGING ENCRYPTION TO ALL OF US



Apple confirms iOS kernel code left unencrypted intentionally

When Apple released a preview version of iOS 10 at its annual developers conference last week, the company slipped in a surprise for security researchers — it left the core of its operating system, the kernel, unencrypted.

“The kernel cache doesn’t contain any user info, and by unencrypting it we’re able to optimize the operating system’s performance without compromising security,” an Apple spokesperson told TechCrunch.

Apple has kept the inner workings of the kernel obfuscated by encryption in previous versions of iOS, leaving developers and researchers in the dark. The kernel manages security and limits the ways applications on an iPhone or iPad can access the hardware of the device, making it a crucial part of the operating system.

Although encryption is often thought to be synonymous with security, the lack of encryption in this case doesn’t mean that devices running iOS 10 are less secure. It just means that researchers and developers can poke around in the kernel’s code for the first time, and any security flaws will come to light more quickly. If flaws are revealed, they can be quickly patched.

Leaving the kernel unencrypted is a rare move of transparency for Apple. The company is so notoriously secretive about its products that some security experts speculated in the MIT Technology Review that the lack of encryption in the kernel was accidental. But such a mistake would be so shocking as to be practically unbelievable, researchers said. “This would have been an incredibly glaring oversight, like forgetting to put doors on an elevator,” iOS security expert Jonathan Zdziarski told the MIT Technology Review.

Apple has begun to shift towards greater transparency, particularly on security issues, in the wake of its battle with the FBI over unlocking an iPhone used by the San Bernardino shooter. When the FBI attempted to compel Apple to unlock the phone, CEO Tim Cook penned a rare open letter to Apple’s customers, explaining his decision to resist. “We feel we must speak up in the face of what we see as an overreach by the U.S. government,” Cook wrote. (The FBI eventually dropped its request after paying a third party to break into the device.)

Opening up the kernel’s code for inspection could weaken the market for security flaws like the one the FBI is presumed to have used to get into the San Bernardino iPhone. If flaws are revealed quickly and widely, it will reduce the prices law enforcement and black markets will pay for them — and it could mean quicker fixes for Apple’s customers.


Encryption Is Being Scapegoated To Mask The Failures Of Mass Surveillance

Source: http://techcrunch.com/2015/11/17/the-blame-game/

Well, that took no time at all. In the immediate wake of the latest co-ordinated terror attacks in the French capital on Friday, intelligence agencies rolled right into the horror and fury to launch their latest co-ordinated assault on strong encryption, and on the tech companies creating secure comms services, seeking to scapegoat end-to-end encryption as the enabling layer for extremists to perpetrate mass murder.

There’s no doubt they were waiting for just such an ‘opportune moment’ to redouble their attacks on encryption after recent attempts to lobby for encryption-perforating legislation foundered. (A strategy confirmed by a leaked email sent by the intelligence community’s top lawyer, Robert S. Litt, this August — and subsequently obtained by the Washington Post — in which he anticipated that a “very hostile legislative environment… could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement”. Et voilà Paris… )

Speaking to CBS News over the weekend, in the immediate aftermath of the Paris attacks, former CIA deputy director Michael Morell said: “I think this is going to open an entire new debate about security versus privacy.”

“We, in many respects, have gone blind as a result of the commercialization and the selling of these devices that cannot be accessed either by the manufacturer or, more importantly, by us in law enforcement, even equipped with search warrants and judicial authority,” added New York City police commissioner, William J. Bratton, quoted by the NYT in a lengthy article probing the “possible” role of encrypted messaging apps in the Paris attacks.

Elsewhere the fast-flowing attacks on encrypted tech services have come without a byline — from unnamed European and American officials who say they are “not authorized to speak publicly”. Yet are happy to speak publicly, anonymously.

The NYT published an article on Sunday alleging that attackers had used “encryption technology” to communicate — citing “European officials who had been briefed on the investigation but were not authorized to speak publicly”. (The paper subsequently pulled the article from its website, as noted by InsideSources, although it can still be read via the Internet Archive.)

The irony of government/intelligence agency sources briefing against encryption on condition of anonymity as they seek to undermine the public’s right to privacy would be darkly comic if it weren’t quite so brazen.


Here’s what one such unidentified British intelligence source told Politico: “As members of the general public get preoccupied that the government is spying on them, they have adopted these applications and terrorists have found them tailor-made for their own use.”

It’s a pretty incredible claim when you examine it. This unknown spook mouthpiece is saying terrorists are able to organize acts of mass murder as a direct consequence of the public’s dislike of government mass surveillance. Take even a cursory glance at the history of terrorism and that claim folds in on itself immediately. The highly co-ordinated 9/11 attacks of 2001 required no backdrop of public privacy fears in order to be carried out — and with horrifying, orchestrated effectiveness.

In the same Politico article, an identified source — J.M. Berger, the co-author of a book about ISIS — makes a far more credible claim: “Terrorists use technology improvisationally.”

Of course they do. The co-founder of secure messaging app Telegram, Pavel Durov, made much the same point earlier this fall when asked directly by TechCrunch about ISIS using his app to communicate. “Ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one,” Durov argued. “I still think we’re doing the right thing — protecting our users’ privacy.”

Bottom line: banning encryption or forcing tech companies to backdoor communications services has zero chance of being effective at stopping terrorists finding ways to communicate securely. They can and will route around such attempts to infiltrate their comms, as others have detailed at length.

Here’s a recap: terrorists can use encryption tools that are freely distributed from countries where your anti-encryption laws have no jurisdiction. Terrorists can (and do) build their own securely encrypted communication tools. Terrorists can switch to newer (or older) technologies to circumvent enforcement laws or enforced perforations. They can use plain old obfuscation to code their communications within noisy digital platforms like the Playstation 4 network, folding their chatter into general background digital noise (of which there is no shortage). And terrorists can meet in person, using a network of trusted couriers to facilitate these meetings, as Al Qaeda — the terrorist group that perpetrated the highly sophisticated 9/11 attacks at a time when smartphones were far less common and there was no ready supply of easy-to-use end-to-end encrypted messaging apps — is known to have done.

Point is, technology is not a two-lane highway that can be regulated with a couple of neat roadblocks — whatever many politicians appear to think. All such roadblocks will do is catch the law-abiding citizens who rely on digital highways to conduct more and more aspects of their daily lives. And make those law-abiding citizens less safe in multiple ways.

There’s little doubt that the lack of technological expertise in the upper echelons of governments is snowballing into a very ugly problem indeed as technology becomes increasingly sophisticated yet political rhetoric remains grounded in age-old kneejerkery. Of course we can all agree it would be beneficial if we were able to stop terrorists from communicating. But the hard political truth of the digital era is that’s never going to be possible. It really is putting the proverbial finger in the dam. (There are even startups working on encryption that’s futureproofed against quantum computers — and we don’t even have quantum computers yet.)

Another hard political truth is that effective counter terrorism policy requires spending money on physical, on-the-ground resources — putting more agents on the ground, within local communities, where they can gain trust and gather intelligence. (Not to mention having a foreign policy that seeks to promote global stability, rather than generating the kind of regional instability that feeds extremism by waging illegal wars, for instance, or selling arms to regimes known to support the spread of extremist religious ideologies.)

Yet, in the U.K. at least, the opposite is happening — police force budgets are being slashed. Meanwhile domestic spy agencies are now being promised more staff, yet spooks’ time is increasingly taken up with remote analysis of data, rather than on the ground intelligence work. The U.K. government’s draft new surveillance laws aim to cement mass surveillance as the officially sanctioned counter terror modus operandi, and will further increase the noise-to-signal ratio with additional data capture measures, such as mandating that ISPs retain data on the websites every citizen in the country has visited for the past year. Truly the opposite of a targeted intelligence strategy.

The draft Investigatory Powers Bill also has some distinctly ambiguous wording when it comes to encryption — suggesting the U.K. government is still seeking to legislate a general ability that companies be able to decrypt communications. Ergo, to outlaw end-to-end encryption. Yes, we’re back here again. You’d be forgiven for thinking politicians lacked a long-term memory.

Effective encryption might be a politically convenient scapegoat to kick around in the wake of a terror attack — given it can be used to detract attention from big picture geopolitical failures of governments. And from immediate on the ground intelligence failures — whether those are due to poor political direction, or a lack of resources, or bad decision-making/prioritization by overstretched intelligence agency staff. Pointing the finger of blame at technology companies’ use of encryption is a trivial diversion tactic to detract from wider political and intelligence failures with much more complex origins.

(On the intelligence failures point, questions certainly need to be asked, given that French and Belgian intelligence agencies apparently knew about the jihadi backgrounds of perpetrators of the Paris attacks. Yet they weren’t, apparently, targeting them closely enough to prevent Friday’s attacks. And all this despite France having hugely draconian counter-terrorism digital surveillance laws…)

But seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.

Mandating vulnerabilities be built into digital communications opens up an even worse prospect: new avenues for terrorists and criminals to exploit. As officials are busy spinning the notion that terrorism is all-but only possible because of the rise of robust encryption, consider this: if the public is deprived of its digital privacy — with terrorism applied as the justification to strip out the robust safeguard of strong encryption — then individuals become more vulnerable to acts of terrorism, given their communications cannot be safeguarded from terrorists. Or criminals. Or fraudsters. Or anyone incentivized by malevolent intent.

If you want to speculate on fearful possibilities, think about terrorists being able to target individuals at will via legally-required-to-be insecure digital services. If you think terror tactics are scary right now, think about terrorists having the potential to single out, track and terminate anyone at will based on whatever twisted justification fits their warped ideology — perhaps after that person expressed views they do not approve of in an online forum.

In a world of guaranteed insecure digital services it’s a far more straightforward matter for a terrorist to hack into communications to obtain the identity of a person they deem a target, and to use other similarly perforated technology services to triangulate and track someone’s location to a place where they can be made the latest victim of a new type of hyper-targeted, mass surveillance-enabled terrorism. Inherently insecure services could also be more easily compromised by terrorists to broadcast their own propaganda, or send out phishing scams, or otherwise divert attention en masse.

The only way to protect against these scenarios is to expand the reach of properly encrypted services. To champion the cause of safeguarding the public’s personal data and privacy, rather than working to undermine it — and undermining the individual freedoms the West claims to be so keen to defend in the process.

Meanwhile, when it comes to counter-terrorism strategy, what’s needed is more intelligent targeting, not more mass measures that treat everyone as a potential suspect and deluge security agencies in an endless churn of irrelevant noise. Even the robust end-to-end encryption that’s now being briefed against as a ‘terrorist-enabling evil’ by shadowy officials on both sides of the Atlantic can be compromised at the level of an individual device. There’s no guaranteed shortcut to achieve that. Nor should there be — that’s the point. It takes sophisticated, targeted work.

But blanket measures to compromise the security of the many in the hopes of catching out the savvy few are even less likely to succeed on the intelligence front. We have mass surveillance already, and we also have blood on the streets of Paris once again. Encryption is just a convenient scapegoat for wider policy failures of an industrial surveillance complex.

So let’s not be taken in by false flags flown by anonymous officials trying to mask bad political decision-making. And let’s redouble our efforts to fight bad policy which seeks to entrench a failed ideology of mass surveillance — instead of focusing intelligence resources where they are really needed; homing in on signals, not drowned out by noise.

Edward Snowden Explains How To Reclaim Your Privacy

Source: https://theintercept.com/2015/11/12/edward-snowden-explains-how-to-reclaim-your-privacy/

I met Edward Snowden in a hotel in central Moscow, just blocks away from Red Square. It was the first time we’d met in person; he first emailed me nearly two years earlier, and we eventually created an encrypted channel to journalists Laura Poitras and Glenn Greenwald, to whom Snowden would disclose overreaching mass surveillance by the National Security Agency and its British equivalent, GCHQ.

This time around, Snowden’s anonymity was gone; the world knew who he was, much of what he’d leaked, and that he’d been living in exile in Moscow, where he’s been stranded ever since the State Department canceled his passport while he was en route to Latin America. His situation was more stable, the threats against him a bit easier to predict. So I approached my 2015 Snowden meeting with less paranoia than was warranted in 2013, and with a little more attention to physical security, since this time our communications would not be confined to the internet.

Our first meeting would be in the hotel lobby, and I arrived with all my important electronic gear in tow. I had powered down my smartphone and placed it in a “faraday bag” designed to block all radio emissions. This, in turn, was tucked inside my backpack next to my laptop (which I configured and hardened specifically for traveling to Russia), also powered off. Both electronic devices stored their data in encrypted form, but disk encryption isn’t perfect, and leaving these in my hotel room seemed like an invitation to tampering.

Most of the lobby seats were taken by well-dressed Russians sipping cocktails. I planted myself on an empty couch off in a nook hidden from most of the action and from the only security camera I could spot. Snowden had told me I’d have to wait awhile before he met me, and for a moment I wondered if I was being watched: A bearded man wearing glasses and a trench coat stood a few feet from me, apparently doing nothing aside from staring at a stained-glass window. Later he shifted from one side of my couch to the other, walking away just after I made eye contact.

Eventually, Snowden appeared. We smiled and said good to see you, and then walked up the spiral staircase near the elevator to the room where I would be conducting the interview, before we really started talking.

It also turns out that I didn’t need to be quite so cautious. Later, he told me to feel free to take out my phone so I could coordinate a rendezvous with some mutual friends who were in town. Operational security, or “opsec,” was a recurring theme across our several chats in Moscow.

In most of Snowden’s interviews he speaks broadly about the importance of privacy, surveillance reform, and encryption. But he rarely has the opportunity to delve into the details and help people of all technical backgrounds understand opsec and begin to strengthen their own security and privacy. He and I mutually agreed that our interview would focus more on nerdy computer talk and less on politics, because we’re both nerds and not many of his interviews get to be like that. I believe he wanted to use our chats to promote cool projects and to educate people. For example, Snowden had mentioned prior to our in-person meeting that he had tweeted about the Tor anonymity system and was surprised by how many people thought it was some big government trap. He wanted to fix those kinds of misconceptions.

Our interview, conducted over room-service hamburgers, started with the basics.

 

Micah Lee: What are some operational security practices you think everyone should adopt? Just useful stuff for average people.

Edward Snowden: [Opsec] is important even if you’re not worried about the NSA. Because when you think about who the victims of surveillance are, on a day-to-day basis, you’re thinking about people who are in abusive spousal relationships, you’re thinking about people who are concerned about stalkers, you’re thinking about children who are concerned about their parents overhearing things. It’s to reclaim a level of privacy.

  • The first step that anyone could take is to encrypt their phone calls and their text messages. You can do that through the smartphone app Signal, by Open Whisper Systems. It’s free, and you can just download it immediately. And anybody you’re talking to now, their communications, if it’s intercepted, can’t be read by adversaries. [Signal is available for iOS and Android, and, unlike a lot of security tools, is very easy to use.]
  • You should encrypt your hard disk, so that if your computer is stolen the information isn’t obtainable to an adversary — pictures, where you live, where you work, where your kids are, where you go to school. [I’ve written a guide to encrypting your disk on Windows, Mac, and Linux.]
  • Use a password manager. One of the main things that gets people’s private information exposed, not necessarily to the most powerful adversaries, but to the most common ones, is data dumps. Your credentials may be revealed because some service you stopped using in 2007 gets hacked, and your password that you were using for that one site also works for your Gmail account. A password manager allows you to create unique passwords for every site that are unbreakable, but you don’t have the burden of memorizing them. [The password manager KeePassX is free, open source, cross-platform, and never stores anything in the cloud. See the sketch after this list for the underlying idea.]
  • The other thing there is two-factor authentication. The value of this is if someone does steal your password, or it’s left or exposed somewhere … [two-factor authentication] allows the provider to send you a secondary means of authentication — a text message or something like that. [If you enable two-factor authentication, an attacker needs both your password as the first factor and a physical device, like your phone, as your second factor, to log in to your account. Gmail, Facebook, Twitter, Dropbox, GitHub, Battle.net, and tons of other services all support two-factor authentication.]
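
As a concrete illustration of the password-manager principle referenced in the list above, here is a minimal Python sketch: generate a unique, high-entropy password per site so that one breached service reveals nothing about your other accounts. Real password managers additionally encrypt the vault at rest; this sketch covers only the generation step.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length=24):
    # secrets uses the operating system's CSPRNG, which is what you want
    # for security-sensitive randomness.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

vault = {}  # a real manager would encrypt this vault at rest
for site in ("mail.example.com", "forum.example.org"):
    vault[site] = new_password()

# A data dump from one breached site now reveals nothing about the others.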

We should not live lives as if we are electronically naked.

We should armor ourselves using systems we can rely on every day. This doesn’t need to be an extraordinary lifestyle change. It doesn’t have to be something that is disruptive. It should be invisible, it should be atmospheric, it should be something that happens painlessly, effortlessly. This is why I like apps like Signal, because they’re low friction. It doesn’t require you to re-order your life. It doesn’t require you to change your method of communications. You can use it right now to talk to your friends.


Micah Lee and Edward Snowden, Moscow, Russia.

Photo: Sue Gardner

Lee: What do you think about Tor? Do you think that everyone should be familiar with it, or do you think that it’s only a use-it-if-you-need-it thing?

Snowden: I think Tor is the most important privacy-enhancing technology project being used today. I use Tor personally all the time. We know it works from at least one anecdotal case that’s fairly familiar to most people at this point. That’s not to say that Tor is bulletproof. What Tor does is it provides a measure of security and allows you to disassociate your physical location. …

But the basic idea, the concept of Tor that is so valuable, is that it’s run by volunteers. Anyone can create a new node on the network, whether it’s an entry node, a middle router, or an exit point, on the basis of their willingness to accept some risk. The voluntary nature of this network means that it is survivable, it’s resistant, it’s flexible.

[Tor Browser is a great way to selectively use Tor to look something up and not leave a trace that you did it. It can also help bypass censorship when you’re on a network where certain sites are blocked. If you want to get more involved, you can volunteer to run your own Tor node, as I do, and support the diversity of the Tor network.]

Lee: So that is all stuff that everybody should be doing. What about people who have exceptional threat models, like future intelligence-community whistleblowers, and other people who have nation-state adversaries? Maybe journalists, in some cases, or activists, or people like that?

Snowden: So the first answer is that you can’t learn this from a single article. The needs of every individual in a high-risk environment are different. And the capabilities of the adversary are constantly improving. The tooling changes as well.

What really matters is to be conscious of the principles of compromise. How can the adversary, in general, gain access to information that is sensitive to you? What kinds of things do you need to protect? Because of course you don’t need to hide everything from the adversary. You don’t need to live a paranoid life, off the grid, in hiding, in the woods in Montana.

What we do need to protect are the facts of our activities, our beliefs, and our lives that could be used against us in manners that are contrary to our interests. So when we think about this for whistleblowers, for example, if you witnessed some kind of wrongdoing and you need to reveal this information, and you believe there are people that want to interfere with that, you need to think about how to compartmentalize that.

Tell no one who doesn’t need to know. [Lindsay Mills, Snowden’s girlfriend of several years, didn’t know that he had been collecting documents to leak to journalists until she heard about it on the news, like everyone else.]

When we talk about whistleblowers and what to do, you want to think about tools for protecting your identity, protecting the existence of the relationship from any type of conventional communication system. You want to use something like SecureDrop, over the Tor network, so there is no connection between the computer that you are using at the time — preferably with a non-persistent operating system like Tails, so you’ve left no forensic trace on the machine you’re using, which hopefully is a disposable machine that you can get rid of afterward, that can’t be found in a raid, that can’t be analyzed or anything like that — so that the only outcome of your operational activities are the stories reported by the journalists. [SecureDrop is a whistleblower submission system. Here is a guide to using The Intercept’s SecureDrop server as safely as possible.]

And this is to be sure that whoever has been engaging in this wrongdoing cannot distract from the controversy by pointing to your physical identity. Instead they have to deal with the facts of the controversy rather than the actors that are involved in it.

Lee: What about for people who are, like, in a repressive regime and are trying to …

Snowden: Use Tor.

Lee: Use Tor?

Snowden: If you’re not using Tor you’re doing it wrong. Now, there is a counterpoint here where the use of privacy-enhancing technologies in certain areas can actually single you out for additional surveillance through the exercise of repressive measures. This is why it’s so critical for developers who are working on security-enhancing tools to not make their protocols stand out.

Lee: So you mentioned that what you want to spread are the principles of operational security. And you mentioned some of them, like need-to-know, compartmentalization. Can you talk more about what are the principles of operating securely?

Snowden: Almost every principle of operating security is to think about vulnerability. Think about what the risks of compromise are and how to mitigate them. In every step, in every action, in every point involved, in every point of decision, you have to stop and reflect and think, “What would be the impact if my adversary were aware of my activities?” If that impact is something that’s not survivable, either you have to change or refrain from that activity, you have to mitigate that through some kind of tools or system to protect the information and reduce the risk of compromise, or ultimately, you have to accept the risk of discovery and have a plan to mitigate the response. Because sometimes you can’t always keep something secret, but you can plan your response.

Lee: Are there principles of operational security that you think would be applicable to everyday life?

Snowden: Yes, that’s selective sharing. Everybody doesn’t need to know everything about us. Your friend doesn’t need to know what pharmacy you go to. Facebook doesn’t need to know your password security questions. You don’t need to have your mother’s maiden name on your Facebook page, if that’s what you use for recovering your password on Gmail. The idea here is that sharing is OK, but it should always be voluntary. It should be thoughtful, it should be things that are mutually beneficial to people that you’re sharing with, and these aren’t things that are simply taken from you.

If you interact with the internet … the typical methods of communication today betray you silently, quietly, invisibly, at every click. At every page that you land on, information is being stolen. It’s being collected, intercepted, analyzed, and stored by governments, foreign and domestic, and by companies. You can reduce this by taking a few key steps. Basic things. If information is being collected about you, make sure it’s being done in a voluntary way.

For example, if you use browser plugins like HTTPS Everywhere by EFF, you can try to enforce secure encrypted communications so your data is not being passed in transit electronically naked.
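
To make the idea concrete, here is a toy Python sketch of the upgrade HTTPS Everywhere performs. The real extension uses curated per-site rulesets; this naive version just rewrites the URL scheme and is purely illustrative.

from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url):
    # Rewrite the scheme so the request is encrypted in transit.
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://example.com/login"))  # https://example.com/login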

Lee: Do you think people should use adblock software?

Snowden: Yes.

Everybody should be running adblock software, if only from a safety perspective …

We’ve seen internet providers like Comcast, AT&T, or whoever it is, insert their own ads into your plaintext http connections. … As long as service providers are serving ads with active content that require the use of JavaScript to display, that have some kind of active content like Flash embedded in it, anything that can be a vector for attack in your web browser — you should be actively trying to block these. Because if the service provider is not working to protect the sanctity of the relationship between reader and publisher, you have not just a right but a duty to take every effort to protect yourself in response.

Lee: Nice. So there’s a lot of esoteric attacks that you hear about in the media. There’s disk encryption attacks like evil maid attacks, and cold-boot attacks. There’s all sorts of firmware attacks. There’s BadUSB and BadBIOS, and baseband attacks on cellphones. All of these are probably unlikely to happen to many people very often. Is this something people should be concerned about? How do you go about deciding if you personally should be concerned about this sort of attack and try to defend against it?

Snowden: It all comes down to personal evaluation of your personal threat model, right? That is the bottom line of what operational security is about. You have to assess the risk of compromise. On the basis of that determine how much effort needs to be invested into mitigating that risk.

Now in the case of cold-boot attacks and things like that, there are many things you can do. For example, cold-boot attacks can be defeated by never leaving your machine unattended. This is something that is not important for the vast majority of users, because most people don’t need to worry about someone sneaking in when their machine is unattended. … There is the evil maid attack, which can be protected against by keeping your bootloader physically on you, by wearing it as a necklace, for example, on an external USB device.

You’ve got BadBIOS. You can protect against this by dumping your BIOS, hashing it (hopefully not with SHA1 anymore), and simply comparing your BIOS. In theory, if it’s owned badly enough you need to do this externally. You need to dump it using a JTAG or some kind of reader to make sure that it actually matches, if you don’t trust your operating system.
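
Here is a minimal sketch of the comparison step Snowden describes, assuming you have already dumped the firmware externally (for example with a JTAG or SPI reader): hash the dump with SHA-256 and compare it against a digest recorded while the machine was known-good. The file name and stored digest below are placeholders.

import hashlib

def sha256_of(path):
    # Stream the file so large firmware images don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

known_good = "replace-with-your-recorded-digest"  # saved while still trusted
current = sha256_of("bios_dump.bin")  # placeholder path to the external dump
print("BIOS unchanged" if current == known_good else "BIOS MODIFIED")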

There’s a counter to every attack. The idea is you can play the cat-and-mouse game forever.

You can go to any depth, you can drive yourself crazy thinking about bugs in the walls and cameras in the ceiling. Or you can think about what are the most realistic threats in your current situation? And on that basis take some activity to mitigate the most realistic threats. In that case, for most people, that’s going to be very simple things. That’s going to be using a safe browser. That’s going to be disabling scripts and active content, ideally using a virtual machine or some other form of sandboxed browser, where if there’s a compromise it’s not persistent. [I recently wrote about how to set up virtual machines.] And making sure that your regular day-to-day communications are being selectively shared through encrypted means.

Lee: What sort of security tools are you currently excited about? What are you finding interesting?

Snowden: I’ll just namecheck Qubes here, just because it’s interesting. I’m really excited about Qubes because the idea of VM-separating machines, requiring expensive, costly sandbox escapes to get persistence on a machine, is a big step up in terms of burdening the attacker with greater resource and sophistication requirements for maintaining a compromise. I’d love to see them continue this project. I’d love to see them make it more accessible and much more secure. [You can read more about how to use Qubes here and here.]

Something that we haven’t seen that we need to see is a greater hardening of the overall kernels of every operating system through things like grsecurity [a set of patches to improve Linux security], but unfortunately there’s a big usability gap between the capabilities that are out there, that are possible, and what is attainable for the average user.

Lee: People use smartphones a lot. What do you think about using a smartphone for secure communications?

Snowden: Something that people forget about cellphones in general, of any type, is that you’re leaving a permanent record of all of your physical locations as you move around. … The problem with cellphones is they’re basically always talking about you, even when you’re not using them. That’s not to say that everyone should burn their cellphones … but you have to think about the context for your usage. Are you carrying a device that, by virtue of simply having it on your person, places you in a historic record in a place that you don’t want to be associated with, even if it’s something as simple as your place of worship?

Lee: There are tons of software developers out there that would love to figure out how to end mass surveillance. What should they be doing with their time?

Snowden: Mixed routing is one of the most important things that we need in terms of regular infrastructure because we haven’t solved the problem of how to divorce the content of communication from the fact that it has occurred at all. To have real privacy you have to have both. Not just what you talked to your mother about, but the fact that you talked to your mother at all. …

The problem with communications today is that the internet service provider knows exactly who you are. They know exactly where you live. They know what your credit card number is, when you last paid, how much it was.

You should be able to buy a pile of internet the same way you buy a bottle of water.

We need means of engaging in private connections to the internet. We need ways of engaging in private communications. We need mechanisms affording for private associations. And ultimately, we need ways to engage in private payment and shipping, which are the basis of trade.

These are research questions that need to be resolved. We need to find a way to protect the rights that we ourselves inherited for the next generation. If we don’t, today we’re standing at a fork in the road that divides between an open society and a controlled system. If we don’t do anything about this, people will look back at this moment and they’ll say, why did you let that happen? Do you want to live in a quantified world? Where not only is the content of every conversation, not only are the movements of every person known, but even the location of all the objects are known? Where the book that you lent to a friend leaves a record that they have read it? These things might be useful capabilities that provide value to society, but that’s only going to be a net good if we’re able to mitigate the impact of our activity, of our sharing, of our openness.

Lee: Ideally, governments around the world shouldn’t be spying on everybody. But that’s not really the case, so where do you think — what do you think the way to solve this problem is? Do you think it’s all just encrypting everything, or do you think that trying to get Congress to pass new laws and trying to do policy stuff is equally as important? Where do you think the balance is between tech and policy to combat mass surveillance? And what do you think that Congress should do, or that people should be urging Congress to do?

Snowden: I think reform comes with many faces. There’s legal reform, there’s statutory reform more generally, there are the products and outcomes of judicial decisions. … In the United States it has been held that these programs of mass surveillance, which were implemented secretly without the knowledge or the consent of the public, violate our rights, that they went too far, that they should end. And they have been modified or changed as a result. But there are many other programs, and many other countries, where these reforms have not yet had the impact that is so vital to free society. And in these contexts, in these situations, I believe that we do — as a community, as an open society, whether we’re talking about ordinary citizens or the technological community specifically — we have to look for ways of enforcing human rights through any means.

That can be through technology, that can be through politics, that can be through voting, that can be through behavior. But technology is, of all of these things, perhaps the quickest and most promising means through which we can respond to the greatest violations of human rights in a manner that is not dependent on every single legislative body on the planet to reform itself at the same time, which is probably somewhat optimistic to hope for. We would be instead able to create systems … that enforce and guarantee the rights that are necessary to maintain a free and open society.

Lee: On a different note — people said I should ask about Twitter — how long have you had a Twitter account for?

Snowden: Two weeks.

Lee: How many followers do you have?

Snowden: A million and a half, I think.

Lee: That’s a lot of followers. How are you liking being a Twitter user so far?

Snowden: I’m trying very hard not to mess up.

Lee: You’ve been tweeting a lot lately, including in the middle of the night Moscow time.

Snowden: Ha. I make no secret about the fact that I live on Eastern Standard Time. The majority of my work and associations, my political activism, still occurs in my home, in the United States. So it only really makes sense that I work on the same hours.

Lee: Do you feel like Twitter is sucking away all your time? I mean I kind of have Twitter open all day long and I sometimes get sucked into flame wars. How is it affecting you?

Snowden: There were a few days when people kept tweeting cats for almost an entire day. And I know I shouldn’t, I have a lot of work to do, but I just couldn’t stop looking at them.

Lee: The real question is, what was your Twitter handle before this? Because you were obviously on Twitter. You know all the ins and outs.

Snowden: I can neither confirm nor deny the existence of other Twitter accounts.