Schlagwort-Archive: NSA

Macron, May, Merkel – weakening encryption and making messengers (WhatsApp) vulnerable leads to data security catastrophes

By weakening strong encryption – that is, by weakening software such as the Android or iOS operating systems (subroutines, libraries, essential components) in order to enable mass surveillance – you, the leaders of Europe, risk the data security of thousands of European companies. Is it worth it?

Even Microsoft is now warning that the government practice of “stockpiling” software vulnerabilities so that they can be used as weapons is a misguided tactic that weakens security for everybody.

“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” the company said Sunday.

Why are you doing this? Hopefully not out of an obligation to hand over information in order to receive intelligence from the USA in return?

French President Emmanuel Macron (L) talks with German Chancellor Angela Merkel (R) as US President Donald J. Trump (C) walks by, during the line-up for the group photo at the NATO summit in Brussels, Belgium, 25 May 2017. Photo: EPA/ARMANDO BABANI

You saw, recognised and understood WannaCry, which affected thousands of companies throughout Europe?

The vulnerability in Windows that WannaCry takes advantage of was discovered by the NSA for its surveillance toolkit. But word got out when a hacker group known as the Shadow Brokers dumped a bunch of leaked NSA information onto the Internet in April. Microsoft, however, had already issued a software update the month before; those that downloaded and installed the patch were protected from WannaCry, but many others lagged behind and became victims.

Securing Driverless Cars From Hackers Is Hard, according to Charlie Miller, former NSA Tailored Access Operations hacker

Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them

Two years ago, Charlie Miller and Chris Valasek pulled off a demonstration that shook the auto industry, remotely hacking a Jeep Cherokee via its internet connection to paralyze it on a highway. Since then, the two security researchers have been quietly working for Uber, helping the startup secure its experimental self-driving cars against exactly the sort of attack they proved was possible on a traditional one. Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from hackers is a very difficult problem. It’s time to get serious about solving it.

Last month, Miller left Uber for a position at Chinese competitor Didi, a startup that’s just now beginning its own autonomous ridesharing project. In his first post-Uber interview, Miller talked to WIRED about what he learned in those 19 months at the company—namely that driverless taxis pose a security challenge that goes well beyond even those faced by the rest of the connected car industry.

Miller couldn’t talk about any of the specifics of his research at Uber; he says he moved to Didi in part because the company has allowed him to speak more openly about car hacking. But he warns that before self-driving taxis can become a reality, the vehicles’ architects will need to consider everything from the vast array of automation in driverless cars that can be remotely hijacked, to the possibility that passengers themselves could use their physical access to sabotage an unmanned vehicle.

“Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse.”

At A Computer’s Mercy

In a series of experiments starting in 2013, Miller and Valasek showed that a hacker with either wired or over-the-internet access to a vehicle—including a Toyota Prius, Ford Escape, and a Jeep Cherokee—could disable or slam on a victim’s brakes, turn the steering wheel, or, in some cases, cause unintended acceleration. But to trigger almost all those attacks, Miller and Valasek had to exploit vehicles’ existing automated features. They used the Prius’ collision avoidance system to apply its brakes, and the Jeep’s cruise control feature to accelerate it. To turn the Jeep’s steering wheel, they tricked it into thinking it was parking itself—even if it was moving at 80 miles per hour.

Their car-hacking hijinks, in other words, were limited to the few functions a vehicle’s computer controls. In a driverless car, the computer controls everything. “In an autonomous vehicle, the computer can apply the brakes and turn the steering wheel any amount, at any speed,” Miller says. “The computers are even more in charge.”

An alert driver could also override many of the attacks Miller and Valasek demonstrated on traditional cars: Tap the brakes and that cruise control acceleration immediately ceases. Even the steering wheel attacks could be easily overcome if the driver wrests control of the wheel. When the passenger isn’t in the driver’s seat—or there is no steering wheel or brake pedal—no such manual override exists. “No matter what we did in the past, the human had a chance to control the car. But if you’re sitting in the backseat, that’s a whole different story,” says Miller. “You’re totally at the mercy of the vehicle.”

Hackers Take Rides, Too

A driverless car that’s used as a taxi, Miller points out, poses even more potential problems. In that situation, every passenger has to be considered a potential threat. Security researchers have shown that merely plugging an internet-connected gadget into a car’s OBD2 port—a ubiquitous outlet under its dashboard—can offer a remote attacker an entry point into the vehicle’s most sensitive systems. (Researchers at the University of California at San Diego showed in 2015 that they could take control of a Corvette’s brakes via a common OBD2 dongle distributed by insurance companies—including one that partnered with Uber.)

“There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network.”

Permanently plugging that port is illegal under federal regulations, Miller says. He suggests ridesharing companies that use driverless cars could cover it with tamper-evident tape. But even then, they might only be able to narrow down which passenger could have sabotaged a vehicle to a certain day or week. A more comprehensive fix would mean securing the vehicle’s software so that not even a malicious hacker with full physical access to its network would be able to hack it—a challenge Miller says only a few highly locked-down products like an iPhone or Chromebook can pass.

“It’s definitely a hard problem,” he says.

Deep Fixes

Miller argues that solving autonomous vehicles’ security flaws will require some fundamental changes to their security architecture. Their internet-connected computers, for instance, will need “codesigning,” a measure that ensures they only run trusted code signed with a certain cryptographic key. Today only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better internal segmentation and authentication, so that critical components don’t blindly follow commands from the OBD2 port. They need intrusion detection systems that can alert the driver—or rider—when something anomalous happens on the cars’ internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface,” any services that might accept malicious data sent over the internet.
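To make the codesigning measure concrete, here is a minimal sketch of signed-firmware verification, assuming an Ed25519 keypair and the Python cryptography library; the function install_firmware and the in-script key handling are invented for illustration, since a real ECU would verify signatures in dedicated hardware. The design point is that the vehicle holds only the public key, so even an attacker with full access to the car's internal network cannot produce an image that verifies.

```python
# Minimal sketch of codesigning for an ECU firmware image. Assumes the Python
# `cryptography` package; an actual ECU would hold the public key and perform
# verification in dedicated hardware, not in Python.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Manufacturer side: sign the firmware image with a closely held private key.
signing_key = Ed25519PrivateKey.generate()
firmware = b"...compiled ECU firmware image..."
signature = signing_key.sign(firmware)

# Vehicle side: the ECU ships with only the corresponding public key.
vehicle_pubkey = signing_key.public_key()

def install_firmware(image: bytes, sig: bytes) -> bool:
    """Refuse to flash any image whose signature does not verify."""
    try:
        vehicle_pubkey.verify(sig, image)  # raises InvalidSignature if tampered
    except InvalidSignature:
        return False                       # reject unsigned or modified code
    # ...flash `image` and boot it...
    return True

assert install_firmware(firmware, signature)                # genuine update runs
assert not install_firmware(firmware + b"\x00", signature)  # tampered image refused
```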

Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”

That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.

Car hacking, fortunately, remains largely a concern for the future: No car has yet been digitally hijacked in a documented, malicious case. But that means now’s the time to work on the problem, Miller says, before cars become more automated and make the problem far more real. “We have some time to build up these security measures and get them right before something happens,” says Miller. “And that’s why I’m doing this.”

https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/

WhatsApp spies on your encrypted messages

Exclusive: Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech’ and warn it could be exploited by government agencies

Research shows that WhatsApp can read messages due to the way the company has implemented its end-to-end encryption protocol. Photograph: Ritchie B Tongo/EPA

A security backdoor that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.

Facebook claims that no one can intercept WhatsApp messages, not even the company and its staff, ensuring privacy for its billion-plus users. But new research shows that the company could in fact read messages due to the way WhatsApp has implemented its end-to-end encryption protocol.

Privacy campaigners said the vulnerability is a “huge threat to freedom of speech” and warned it can be used by government agencies to snoop on users who believe their messages to be secure. WhatsApp has made privacy and security a primary selling point, and has become a go-to communications tool of activists, dissidents and diplomats.

WhatsApp’s end-to-end encryption relies on the generation of unique security keys, using the acclaimed Signal protocol developed by Open Whisper Systems, which are exchanged and verified between users to guarantee that communications are secure and cannot be intercepted by a middleman. However, WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt any messages not yet marked as delivered with the new keys and send them again.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted in to encryption warnings in settings, and only after the messages have been resent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users’ messages.

The security backdoor was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys.”

The backdoor is not inherent to the Signal protocol. Open Whisper Systems’ messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp’s implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.
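To make the policy difference concrete, here is a toy model – not the real Signal or WhatsApp code – of what happens to an undelivered message when the recipient's key changes; the function and policy names are invented.

```python
# Toy model of the two key-change policies described above. This is not the
# real Signal or WhatsApp code, only an illustration of "block and warn"
# versus "re-encrypt and resend"; all names are invented.

def deliver_queued(message: str, key_at_send_time: str,
                   recipient_current_key: str, policy: str) -> str:
    """Decide what happens to an undelivered message after a key change."""
    if key_at_send_time == recipient_current_key:
        return "delivered under the original key"
    if policy == "signal":
        # Signal: delivery fails and the sender is warned *before*
        # anything is re-sent; the user must verify the new key first.
        return "blocked: key changed, sender warned, nothing resent"
    if policy == "whatsapp":
        # WhatsApp as described: silently re-encrypt to the new key and
        # resend; the sender learns of it only afterwards, and only if
        # security notifications are switched on.
        return "resent: re-encrypted to the new key without prior warning"
    raise ValueError(f"unknown policy: {policy}")

print(deliver_queued("hello", "key-A", "key-B", policy="signal"))
print(deliver_queued("hello", "key-A", "key-B", policy="whatsapp"))
```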

Boelter reported the backdoor vulnerability to Facebook in April 2016, but was told that Facebook was aware of the issue, that it was “expected behaviour” and wasn’t being actively worked on. The Guardian has verified the backdoor still exists.

The WhatsApp vulnerability calls into question the privacy of messages sent across the service, which is used around the world, including by people living in oppressive regimes. Photograph: Marcelo Sayão/EPA

Steffen Tor Jensen, head of information security and digital counter-surveillance at the European-Bahraini Organisation for Human Rights, verified Boelter’s findings. He said: “WhatsApp can effectively continue flipping the security keys when devices are offline and re-sending the message, without letting users know of the change till after it has been made, providing an extremely insecure platform.”

Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
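As a thought experiment, the flow Boelter describes can be sketched from the server's point of view; everything below is invented for illustration and is not WhatsApp's actual server logic.

```python
# Thought experiment: the retransmission attack Boelter outlines, seen from a
# malicious or coerced server. All names are invented; this is not WhatsApp's
# actual server logic.

class MaliciousServer:
    def __init__(self) -> None:
        self.withheld = []                # ciphertexts held as "undelivered"

    def on_message(self, ciphertext: bytes) -> None:
        # Step 1: accept messages but never emit the "delivered" receipt
        # (the double tick), so senders keep treating them as in transit.
        self.withheld.append(ciphertext)

    def harvest_transcript(self) -> list:
        # Step 2: register a key the server controls for the recipient.
        # Clients then automatically re-encrypt and resend every withheld
        # message, handing over the whole backlog, not a single message.
        attacker_key = "key-controlled-by-server"
        return [(attacker_key, c) for c in self.withheld]
```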

The vulnerability calls into question the privacy of messages sent across the service, which is used around the world, including by people living in oppressive regimes.

Professor Kirstie Ball, co-director and founder of the Centre for Research into Information, Surveillance and Privacy, called the existence of a backdoor within WhatsApp’s encryption “a gold mine for security agencies” and “a huge betrayal of user trust”. She added: “It is a huge threat to freedom of speech, for it to be able to look at what you’re saying if it wants to. Consumers will say, I’ve got nothing to hide, but you don’t know what information is looked for and what connections are being made.”

In the UK, the recently passed Investigatory Powers Act allows the government to intercept bulk data of users held by private companies, without suspicion of criminal activity, similar to the activity of the US National Security Agency uncovered by the Snowden revelations. The government also has the power to force companies to “maintain technical capabilities” that allow data collection through hacking and interception, and requires companies to remove “electronic protection” from data. Intentional or not, WhatsApp’s backdoor to the end-to-end encryption could be used in such a way to facilitate government interception.

Jim Killock, executive director of Open Rights Group, said: “If companies claim to offer end-to-end encryption, they should come clean if it is found to be compromised – whether through deliberately installed backdoors or security flaws. In the UK, the Investigatory Powers Act means that technical capability notices could be used to compel companies to introduce flaws – which could leave people’s data vulnerable.”

A WhatsApp spokesperson told the Guardian: “Over 1 billion people use WhatsApp today because it is simple, fast, reliable and secure. At WhatsApp, we’ve always believed that people’s conversations should be secure and private. Last year, we gave all our users a better level of security by making every message, photo, video, file and call end-to-end encrypted by default. As we introduce features like end-to-end encryption, we focus on keeping the product simple and take into consideration how it’s used every day around the world.

“In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.”

Asked to comment specifically on whether Facebook/WhatsApp had accessed users’ messages and whether it had done so at the request of government agencies or other third parties, the company directed the Guardian to its site that details aggregate data on government requests by country.

Concerns over the privacy of WhatsApp users have been repeatedly highlighted since Facebook acquired the company for $22bn in 2014. In August 2016, Facebook announced a change to the privacy policy governing WhatsApp that allowed the social network to merge data from WhatsApp users and Facebook, including phone numbers and app usage, for advertising and development purposes.

Facebook halted the use of the shared user data for advertising purposes in November after pressure from the pan-European data protection agency group, the Article 29 Working Party, in October. The European commission then filed charges against Facebook for providing “misleading” information in the run-up to the social network’s acquisition of messaging service WhatsApp, following its data-sharing change.

https://www.theguardian.com/technology/2017/jan/13/whatsapp-backdoor-allows-snooping-on-encrypted-messages

Obama gives CIA, FBI, DEA and 13 other agencies warrantless access to raw NSA surveillance data on American citizens


Further Reading:

In its final days, the Obama administration has expanded the power of the National Security Agency to share globally intercepted personal communications with the government’s 16 other intelligence agencies before applying privacy protections.

The change means that far more officials will be searching through raw data.

Previously, the N.S.A. filtered information before sharing intercepted communications with another agency, like the C.I.A. or the intelligence branches of the F.B.I. and the Drug Enforcement Administration. The N.S.A.’s analysts passed on only information they deemed pertinent, screening out the identities of innocent people and irrelevant personal information.

Now, other intelligence agencies will be able to search directly through raw repositories of communications intercepted by the N.S.A. and then apply such rules for “minimizing” privacy intrusions.

“Rather than dramatically expanding government access to so much personal data, we need much stronger rules to protect the privacy of Americans,” said Patrick Toomey, a lawyer for the American Civil Liberties Union. “Seventeen different government agencies shouldn’t be rooting through Americans’ emails with family members, friends and colleagues, all without ever obtaining a warrant.”

“This development is very troubling for Americans’ privacy,” said John Napier Tye, a former state department official turned surveillance whistleblower. “Most people don’t realize this, but even our purely domestic email and text messages are often stored on servers outside the United States. And the NSA has written extremely permissive rules for itself to collect data outside US borders.

“So in operations overseas, the NSA is scooping up a lot of purely domestic communications. And now, with these new rules, many different federal agencies can search and read the domestic communications of normal Americans, without any warrant or oversight from Congress or the courts.”

The new rules mean that NSA officials are no longer required to filter out information about innocent people whose identities have been scooped up before passing the intercepted communications to officials from other agencies, who will now be able to search through raw caches of data.

“This raises serious concerns that agencies that have responsibilities such as prosecuting domestic crimes, regulating our financial policy, and enforcing our immigration laws will now have access to a wealth of personal information that could be misused,” said Neema Singh Guliani, legislative counsel with the American Civil Liberties Union. “Congress needs to take action to regulate and provide oversight over these activities.”

https://www.theguardian.com/world/2017/jan/12/obama-us-intelligence-greater-access-warrantless-data-foreign-targets

Privacy advocates’ concerns center around loopholes in the rules that allow agencies like the FBI and DEA to search the NSA’s collected data for purposes such as investigating an “agent of a foreign power.” Any evidence of illegal behavior that a searcher stumbles on can be used in a criminal prosecution. That means the rule change, according to Nate Cardozo, an attorney with the Electronic Frontier Foundation, introduces new possibilities for law enforcement agencies like the DEA and FBI to carry out what’s known as “parallel construction.” That maneuver involves secretly using the NSA’s intelligence to identify or track a criminal suspect, and then fabricating a plausible trail of evidence to present to a court as an after-the-fact explanation of the investigation’s origin. The technique was the subject of an ACLU lawsuit against the Office of the Director of National Intelligence in 2012, and resulted in the Justice Department admitting to repeatedly using the technique to hide the NSA’s involvement in criminal investigations.

“It used to be that if NSA itself saw the evidence of a crime, they could give a tip to the FBI, and the FBI would engage in parallel construction,” says Cardozo. “Now FBI will be able to get into the raw data themselves and do what they will with it.”

https://www.wired.com/2017/01/just-time-trump-nsa-loosens-privacy-rules/

How the NSA identifies you just by your starting your Windows PC

Thanks to the fine research paper found here (http://www.icir.org/vern/papers/trackers-pets16.pdf), we know you are easily identified the moment you start your Windows PC and log onto the internet – without any user interaction required.

You are identified by either HTTP identifiers or non-HTTP identifiers, as detailed below.

HTTP Identifiers

Application-specific: The first category is identifiers sent by applications other than browsers. For example, Skype sends a user identifier uhash in a URL of the format http://ui.skype.com/ui/2/2.1.0.81/en/getlatestversion?ver=2.1.0.81&uhash=… The parameter uhash is a hash of the user ID, their password, and a salt, and remains constant for a given Skype user [12]. uhash can very well act as an identifier for a user; a monitor who observes the same value from two different clients/networks can infer that it reflects the same user on both. Another example in this category is a Dropbox user_id sent as a URL parameter. We discovered that since the Dropbox application regularly syncs with its server, it sends out this identifier—surprisingly, every minute—without requiring any user action.
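As a rough illustration of how little effort such identifiers demand from a passive monitor, the following sketch – which is not the paper's code, and uses invented parameter values and addresses – links any two clients that emit the same uhash or user_id:

```python
# Rough illustration of passive linking via URL parameters such as Skype's
# uhash or Dropbox's user_id. Not the paper's code; the parameter list,
# IP addresses and hash value below are invented.
from collections import defaultdict
from urllib.parse import parse_qs, urlparse

WATCHED = {"uhash", "user_id"}        # identifier parameters of interest
sightings = defaultdict(set)          # (param, value) -> client IPs seen

def observe(client_ip: str, url: str) -> None:
    """Record identifier-bearing query parameters from an observed request."""
    params = parse_qs(urlparse(url).query)
    for name in WATCHED & params.keys():
        for value in params[name]:
            sightings[(name, value)].add(client_ip)

observe("10.0.0.5", "http://ui.skype.com/ui/2/2.1.0.81/en/getlatestversion"
                    "?ver=2.1.0.81&uhash=f8d2bd9e")
observe("192.0.2.9", "http://ui.skype.com/ui/2/2.1.0.81/en/getlatestversion"
                     "?ver=2.1.0.81&uhash=f8d2bd9e")

# The same uhash seen from two different networks implies the same user on both.
for (name, value), ips in sightings.items():
    if len(ips) > 1:
        print(f"{name}={value} links clients {sorted(ips)}")
```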

Mobile devices: Our methodology enabled us to discover that the Apple weather app sends IMEI and IMSI numbers in POST requests to iphone-wu.apple.com. We can recognize these as such, because the parameter name in the context clearly names them as IMEI and IMSI; the value also matches the expected format for these identifiers. Other apps also send a number of device identifiers, such as phone make, advertising ID, SHA1 hashes of serial number, MAC address, and UDID (unique device identifier) across various domains, such as s.amazon-adsystem.com, jupiter.apads.com and ads.mp.mydas.mobi. The iOS and Android mobile SDKs provide access to these identifiers.


NON-HTTP Identifiers

Device identifiers sent by iOS/OSX: We found instances of device identifiers sent on port 5223. Apple devices use this port to maintain a persistent connection with Apple’s Push Notification (APN) service, through which they receive push notifications for installed apps.

An app-provider sends to an APN server the push notification along with the device token of the recipient device. The APN server in turn forwards the notification to the device, identifying it via the device token [2]. This device token is an opaque device identifier, which the APN service gives to the device when it first connects. The device sends this token (in clear text) to the APN server on every connection, and to each app-provider upon app installation. This identifier enabled us to identify 68 clients in our dataset as Apple devices. The devices sent their device token to a total of 407 IP addresses in two networks belonging to Apple (17.172.232/24, 17.149/16).
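The same passive-linking trick works on device tokens; the sketch below, again with invented values, re-identifies one device seen on two different networks:

```python
# Sketch of re-identifying an Apple device by its APN device token, which is
# sent in the clear on every connection to port 5223. Illustrative only; the
# token and network labels are invented.
from collections import defaultdict

token_sightings = defaultdict(set)    # device token -> networks it was seen on

def on_apn_connection(device_token: str, client_network: str) -> None:
    """Record which network a given APN device token connected from."""
    token_sightings[device_token].add(client_network)

# The same token appearing on two networks ties both to a single device.
on_apn_connection("a3f9...e71", "office-wifi")
on_apn_connection("a3f9...e71", "home-dsl")

for token, networks in token_sightings.items():
    if len(networks) > 1:
        print(f"token {token} re-identifies one device across {sorted(networks)}")
```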


The work http://www.icir.org/vern/papers/trackers-pets16.pdf was supported by the Intel Science and Technology Center for Secure Computing, the U.S. Army Research Office and by the National Science Foundation.

Copy of Publication here: trackers-pets16

Encryption Is Being Scapegoated To Mask The Failures Of Mass Surveillance

Source: http://techcrunch.com/2015/11/17/the-blame-game/

Well that took no time at all. Intelligence agencies rolled right into the horror and fury in the immediate wake of the latest co-ordinated terror attacks in the French capital on Friday, to launch their latest co-ordinated assault on strong encryption — and on the tech companies creating secure comms services — seeking to scapegoat end-to-end encryption as the enabling layer for extremists to perpetrate mass murder.

There’s no doubt they were waiting for just such an ‘opportune moment’ to redouble their attacks on encryption after recent attempts to lobby for encryption-perforating legislation foundered. (A strategy confirmed by a leaked email sent by the intelligence community’s top lawyer, Robert S. Litt, this August — and subsequently obtained by the Washington Post — in which he anticipated that a “very hostile legislative environment… could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement”. Et voila Paris… )

Speaking to CBS News in the immediate aftermath of the Paris attacks, former CIA deputy director Michael Morell said: “I think this is going to open an entire new debate about security versus privacy.”

“We, in many respects, have gone blind as a result of the commercialization and the selling of these devices that cannot be accessed either by the manufacturer or, more importantly, by us in law enforcement, even equipped with search warrants and judicial authority,” added New York City police commissioner, William J. Bratton, quoted by the NYT in a lengthy article probing the “possible” role of encrypted messaging apps in the Paris attacks.

Elsewhere the fast-flowing attacks on encrypted tech services have come without a byline — from unnamed European and American officials who say they are “not authorized to speak publicly”. Yet are happy to speak publicly, anonymously.

The NYT published an article on Sunday alleging that attackers had used “encryption technology” to communicate — citing “European officials who had been briefed on the investigation but were not authorized to speak publicly”. (The paper subsequently pulled the article from its website, as noted by InsideSources, although it can still be read via the Internet Archive.)

The irony of government/intelligence agency sources briefing against encryption on condition of anonymity as they seek to undermine the public’s right to privacy would be darkly comic if it weren’t quite so brazen.


Here’s what one such unidentified British intelligence source told Politico: “As members of the general public get preoccupied that the government is spying on them, they have adopted these applications and terrorists have found them tailor-made for their own use.”

It’s a pretty incredible claim when you examine it. This unknown spook mouthpiece is saying terrorists are able to organize acts of mass murder as a direct consequence of the public’s dislike of government mass surveillance. Take even a cursory glance at the history of terrorism and that claim folds in on itself immediately. The highly co-ordinated 9/11 attacks of 2001 required no backdrop of public privacy fears in order to be carried out — and with horrifying, orchestrated effectiveness.

In the same Politico article, an identified source — J.M. Berger, the co-author of a book about ISIS — makes a far more credible claim: “Terrorists use technology improvisationally.”

Of course they do. The co-founder of secure messaging app Telegram, Pavel Durov, made much the same point earlier this fall when asked directly by TechCrunch about ISIS using his app to communicate. “Ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one,” Durov argued. “I still think we’re doing the right thing — protecting our users privacy.”

Bottom line: banning encryption or enforcing tech companies to backdoor communications services has zero chance of being effective at stopping terrorists finding ways to communicate securely. They can and will route around such attempts to infiltrate their comms, as others have detailed at length.

Here’s a recap: terrorists can use encryption tools that are freely distributed from countries where your anti-encryption laws have no jurisdiction. Terrorists can (and do) build their own securely encrypted communication tools. Terrorists can switch to newer (or older) technologies to circumvent enforcement laws or enforced perforations. They can use plain old obfuscation to code their communications within noisy digital platforms like the Playstation 4 network, folding their chatter into general background digital noise (of which there is no shortage). And terrorists can meet in person, using a network of trusted couriers to facilitate these meetings, as Al Qaeda — the terrorist group that perpetrated the highly sophisticated 9/11 attacks at a time when smartphones were far less common and there was no ready supply of easy-to-use end-to-end encrypted messaging apps — is known to have done.

Point is, technology is not a two-lane highway that can be regulated with a couple of neat roadblocks — whatever many politicians appear to think. All such roadblocks will do is catch the law-abiding citizens who rely on digital highways to conduct more and more aspects of their daily lives. And make those law-abiding citizens less safe in multiple ways.

There’s little doubt that the lack of technological expertise in the upper echelons of governments is snowballing into a very ugly problem indeed as technology becomes increasingly sophisticated yet political rhetoric remains grounded in age-old kneejerkery. Of course we can all agree it would be beneficial if we were able to stop terrorists from communicating. But the hard political truth of the digital era is that’s never going to be possible. It really is putting the proverbial finger in the dam. (There are even startups working on encryption that’s futureproofed against quantum computers — and we don’t even have quantum computers yet.)

Another hard political truth is that effective counter terrorism policy requires spending money on physical, on-the-ground resources — putting more agents on the ground, within local communities, where they can gain trust and gather intelligence. (Not to mention having a foreign policy that seeks to promote global stability, rather than generating the kind of regional instability that feeds extremism by waging illegal wars, for instance, or selling arms to regimes known to support the spread of extremist religious ideologies.)

Yet, in the U.K. at least, the opposite is happening — police force budgets are being slashed. Meanwhile domestic spy agencies are now being promised more staff, yet spooks’ time is increasingly taken up with remote analysis of data, rather than on the ground intelligence work. The U.K. government’s draft new surveillance laws aim to cement mass surveillance as the officially sanctioned counter terror modus operandi, and will further increase the noise-to-signal ratio with additional data capture measures, such as mandating that ISPs retain data on the websites every citizen in the country has visited for the past year. Truly the opposite of a targeted intelligence strategy.

The draft Investigatory Powers Bill also has some distinctly ambiguous wording when it comes to encryption — suggesting the U.K. government is still seeking to legislate a general ability that companies be able to decrypt communications. Ergo, to outlaw end-to-end encryption. Yes, we’re back here again. You’d be forgiven for thinking politicians lacked a long-term memory.

Effective encryption might be a politically convenient scapegoat to kick around in the wake of a terror attack — given it can be used to detract attention from big picture geopolitical failures of governments. And from immediate on the ground intelligence failures — whether those are due to poor political direction, or a lack of resources, or bad decision-making/prioritization by overstretched intelligence agency staff. Pointing the finger of blame at technology companies’ use of encryption is a trivial diversion tactic to detract from wider political and intelligence failures with much more complex origins.

(On the intelligence failures point, questions certainly need to be asked, given that French and Belgian intelligence agencies apparently knew about the jihadi backgrounds of perpetrators of the Paris attacks. Yet weren’t, apparently, targeting them closely enough to prevent Saturday’s attack. And all this despite France having hugely draconian counter-terrorism digital surveillance laws…)

But seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.

Mandating vulnerabilities be built into digital communications opens up an even worse prospect: new avenues for terrorists and criminals to exploit. As officials are busy spinning the notion that terrorism is all-but only possible because of the rise of robust encryption, consider this: if the public is deprived of its digital privacy — with terrorism applied as the justification to strip out the robust safeguard of strong encryption — then individuals become more vulnerable to acts of terrorism, given their communications cannot be safeguarded from terrorists. Or criminals. Or fraudsters. Or anyone incentivized by malevolent intent.

If you want to speculate on fearful possibilities, think about terrorists being able to target individuals at will via legally-required-to-be insecure digital services. If you think terror tactics are scary right now, think about terrorists having the potential to single out, track and terminate anyone at will based on whatever twisted justification fits their warped ideology — perhaps after that person expressed views they do not approve of in an online forum.

In a world of guaranteed insecure digital services it’s a far more straightforward matter for a terrorist to hack into communications to obtain the identity of a person they deem a target, and to use other similarly perforated technology services to triangulate and track someone’s location to a place where they can be made the latest victim of a new type of hyper-targeted, mass surveillance-enabled terrorism. Inherently insecure services could also be more easily compromised by terrorists to broadcast their own propaganda, or send out phishing scams, or otherwise divert attention en masse.

The only way to protect against these scenarios is to expand the reach of properly encrypted services. To champion the cause of safeguarding the public’s personal data and privacy, rather than working to undermine it — and undermining the individual freedoms the West claims to be so keen to defend in the process.

While, when it comes to counter terrorism strategy, what’s needed is more intelligent targeting, not more mass measures that treat everyone as a potential suspect and deluge security agencies in an endless churn of irrelevant noise. Even the robust end-to-end encryption that’s now being briefed against as a ‘terrorist-enabling evil’ by shadowy officials on both sides of the Atlantic can be compromised at the level of an individual device. There’s no guaranteed shortcut to achieve that. Nor should there be — that’s the point. It takes sophisticated, targeted work.

But blanket measures to compromise the security of the many in the hopes of catching out the savvy few are even less likely to succeed on the intelligence front. We have mass surveillance already, and we also have blood on the streets of Paris once again. Encryption is just a convenient scapegoat for wider policy failures of an industrial surveillance complex.

So let’s not be taken in by false flags flown by anonymous officials trying to mask bad political decision-making. And let’s redouble our efforts to fight bad policy which seeks to entrench a failed ideology of mass surveillance — instead of focusing intelligence resources where they are really needed: homing in on signals, not drowning them in noise.