Tag Archives: Privacy

Google needs to apologize for violating the trust of its users once again


  • An Associated Press investigation recently discovered that Google still collects its users' location data even if they have their Location History turned off.
  • After the report was published, Google quietly updated its help page to describe how location settings work.
  • Previously, the page said "with Location History off, the places you go are no longer stored."
  • Now, the page says, "This setting does not affect other location services on your device," adding that "some location data may be saved as part of your activity on other services, like Search and Maps."
  • The quiet changing of false information is a major violation of users' trust.
  • Google needs to do better.

Google this week acknowledged that it quietly tracks its users' locations, even if those people turn off their Location History — a clarification that came in the wake of an Associated Press investigation.

It's a major violation of users' trust.

And yet, nothing is going to happen as a result of this episode.

It’s happened before

Google has a history of bending the rules:

  • In 2010, Google’s Street View cars were caught eavesdropping on people’s Wi-Fi connections.
  • In 2011, Google agreed to forfeit $500 million after a criminal investigation by the Justice Department found that Google had illegally allowed online Canadian pharmacies to advertise prescription drugs to US consumers.
  • In 2012, Google circumvented the no-cookies policy on Apple’s Safari web browser and paid a $22.5 million fine to the Federal Trade Commission as a result.

Ultimately, Google came out of all of these incidents just fine. It paid some money here and there, and sat in a few courtrooms, but nothing really happened to the company’s bottom line. People continued using Google’s services.

Other companies have done it too

Remember Cambridge Analytica?

Five months ago, in March, a 28-year-old named Christopher Wylie blew the whistle on his former employer, the data-analytics company Cambridge Analytica, where he had served as director of research.

It was later revealed that Cambridge Analytica had collected the data of up to 87 million Facebook users in an attempt to influence the 2016 US presidential election in favor of the Republican candidate, Donald Trump.

One month later, Facebook CEO Mark Zuckerberg was summoned before Congress to answer questions about the Cambridge Analytica scandal over a two-day span.


Many users felt their trust had been violated, and a hashtag movement, #DeleteFacebook, was born.

And yet, nothing has really changed at Facebook since that scandal, which similarly involved the improper collection of user data and the violation of users' trust.

Facebook seems to be doing just fine. During its Q2 earnings report in late July, Facebook reported over $13 billion in revenue — a 42% jump year-over-year — and an 11% increase in both daily and monthly active users.

In short, Facebook is not going anywhere. And neither is Google.

Too big — and too good — to fail

Just as Facebook has no equal among the hundreds of other social networks out there, Google has none among competing search engines.

According to StatCounter, Google has a whopping 90% share of the global search engine market.

The next biggest search engine in the world is Microsoft’s Bing, which has a paltry 3% market share.

In other words, a cataclysmic event would have to occur for people to switch search engines. Or, another search engine would have to come along and completely unseat Google.

But that’s probably not going to happen.


For almost 20 years now, Google has dominated the search engine game. Its other services have become similarly prevalent: Gmail and Google Docs have become integral parts of people's personal and work lives. Of course, there are similar mail and productivity services out there, but Google wins on convenience: most people use more than one Google product, and having all of your applications talk to each other and share information is a powerful draw.

This isn't meant to cry foul: Google is one of the top software makers in the world, and it has earned that status by constantly improving and iterating on its products, and even itself, over the past two decades. But one does wonder what event, if any, could possibly make people quit a service as big and convenient and powerful as Google once and for all.

The fact is: that probably won't happen. People likely won't quit Google's services unless there's some major degradation of quality. But Google, as a leader in Silicon Valley, should strive to do better for its customers. Intentional or not, misleading customers about location data is a bad thing. Google failed its customers: it let users think they had more control than they actually did, and it corrected its language about location data only after a third-party investigation. There was no public acknowledgement of an error, and no mea culpa.

Google owes its users a true apology. Quietly updating an online help page isn’t good enough.

 

http://uk.businessinsider.com/google-location-data-violates-user-trust-nothing-will-happen-2018-8?r=US&IR=T


Let's Get Rid of the “Nothing to Hide, Nothing to Fear” Mentality

With Zuckerberg testifying to the US Congress over Facebook's data privacy and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.

Suffice to say that neither of the above compare to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.

In 2013, Edward Snowden blew the whistle on the systematic US spy program he helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the existence of PRISM, an NSA program that collects internet communications data from US internet companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, in cases where corporate cooperation wasn't achievable, the NSA enticed third-party countries to clandestinely tap internet communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of all web activity carried out by anyone using the internet.

But this is just in the US, right? Surely policies like this wouldn't be implemented in Europe.

Wrong, unfortunately.

GCHQ, the UK's signals intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at the Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. Many of these techniques have been made possible by the 2016 Investigatory Powers Act, which Snowden described as "the most extreme surveillance in the history of western democracy".

But despite all this, the age-old refrain "if you've got nothing to hide, you've got nothing to fear" often rings out in debates over privacy.

Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.

In drafting legislation for the Investigatory Powers Act, Theresa May said that such extremes were necessary to ensure "no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar".

When levelled against the fear of terrorism and death, it's easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted: privacy should be, and always will be, a social norm, despite what Mark Zuckerberg said in 2010. Although I'm sure he'd give a different answer now.

So you just read emails and look at cat memes online. Why would you care about privacy?

In the same way we're able to close our living-room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It's a well-rehearsed idea that nowadays we're more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we're being watched, we alter our behaviour in line with what's expected.

As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Your thinking aligns with the status quo and we lose the boundless ability of the internet to search and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.

This draws obvious comparisons with Bentham's Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: there is a central guard tower surrounded by cells. In the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell's 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.

In reality, there's actually a lot more at stake here.

In the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched, but their information is stored and archived for analysis.

Kafka's The Trial, in which a bureaucracy uses citizens' information to make decisions about them but denies them the ability to participate in how that information is used, therefore seems a more apt comparison. The issue here is that corporations and, even more so, states have been allowed to comb our data and make decisions that affect us without our consent.

Maybe, as a member of a western democracy, you don't think this matters. But what if you're a member of a minority group in an oppressive regime? What if you're arrested because a computer algorithm can't separate humour from intent to harm?

On the other hand, maybe you trust the intentions of your government. But how much faith do you have in its ability to keep your data private? The recent hack of the SEC shows that even government systems aren't safe from attackers. When a business database is breached, maybe your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you've lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control the physical clouds, he who controls the modern cloud will rule the world.

Perhaps you think that even this doesn't matter: if it allows the government to protect us from those who intend to cause harm, then it's worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.

Intelligence is the strategic collection of pertinent facts; bulk data collection therefore cannot be intelligent. As Bill Binney puts it, "bulk data kills people": technicians are so overwhelmed that they can't isolate what's useful. Data collection as it stands can only focus on retribution rather than reduction.

Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The nothing-to-hide, nothing-to-fear mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to have in that relationship.

To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.

http://behindthebrowser.space/index.php/2018/04/22/nothing-to-fear-nothing-to-hide/

Android’s trust problem


Published today, a two-year study of Android security updates has revealed a distressing gap between the software patches Android companies claim to have on their devices and the ones they actually have. Your phone’s manufacturer may be lying to you about the security of your Android device. In fact, it appears that almost all of them do.

Coming at the end of a week dominated by Mark Zuckerberg’s congressional hearings and an ongoing Facebook privacy probe, this news might seem of lesser importance, but it goes to the same issue that has drawn lawmakers’ scrutiny to Facebook: the matter of trust. Facebook is the least-trusted big US tech company, and Android might just be the operating system equivalent of it: used by 2 billion people around the world, tolerated more than loved, and susceptible to major lapses in user privacy and security.

The gap between Android and its nemesis, Apple’s iOS, has always boiled down to trust. Unlike Google, Apple doesn’t make its money by tracking the behavior of its users, and unlike the vast and varied Android ecosystem, there are only ever a couple of iPhone models, each of which is updated with regularity and over a long period of time. Owning an iPhone, you can be confident that you’re among Apple’s priority users (even if Apple faces its own cohort of critics accusing it of planned obsolescence), whereas with an Android device, as evidenced today, you can’t even be sure that the security bulletins and updates you’re getting are truthful.

Android is perceived as untrustworthy in large part because it is. Besides the matter of security misrepresentations, here are some of the other major issues and villains plaguing the platform:

Version updates are slow, if they arrive at all. I’ve been covering Android since its earliest Cupcake days, and in the near-decade that’s passed, there’s never been a moment of contentment about the speed of OS updates. Things seemed to be getting even worse late last year when the November batch of new devices came loaded with 2016’s Android Nougat. Android Oreo is now nearly eight months old — meaning we’re closer to the launch of the next version of Android than the present one — and LG is still preparing to roll out that software for its 2017 flagship LG G6.

Promises about Android device updates are as ephemeral as Snapchat messages. Before it became the world’s biggest smartphone vendor, Samsung was notorious for reneging on Android upgrade promises. Sony’s Xperia Z3 infamously fell foul of an incompatibility between its Snapdragon processor and Google’s Android Nougat requirements, leaving it prematurely stuck without major OS updates. Whenever you have so many loud voices involved — carriers and chip suppliers along with Google and device manufacturers — the outcome of their collaboration is prone to becoming exactly as haphazard and unpredictable as Android software upgrades have become.

Google is obviously aware of the situation, and it’s pushing its Android One initiative to give people reassurances when buying an Android phone. Android One guarantees OS updates for at least two years and security updates for at least three years. But, as with most things Android, Android One is only available on a few devices, most of which are of the budget variety. You won’t find the big global names of Samsung, Huawei, and LG supporting it.

Some Android OEMs snoop on you. This is an ecosystem problem rather than something rooted in the operating system itself, but it still discolors Android's public reputation. Android phone manufacturers habitually load their devices with bloatware (stuff you really don't want or need on your phone), and some have even taken to loading up spyware. Blu's devices were yanked from Amazon for doing exactly that: selling phones that were vulnerable to remote takeovers and could be exploited to have the user's text messages and call records clandestinely recorded. OnePlus also got in trouble for having an overly inquisitive user-analytics program, which beamed personally identifiable information back to the company's HQ without explicit user consent.

Huawei is perhaps the most famous example of a potentially conflicted Android phone manufacturer, with US spy agencies openly urging Americans to avoid Huawei phones for their own security. No hard evidence has yet been presented of Huawei doing anything improper, but the US is not the only country to express concern about the company's relationship with the Chinese government — and mistrust is based as much on smoke as it is on the actual fire.

Android remains vulnerable, thanks in part to Google’s permissiveness. It’s noteworthy that, when Facebook’s data breach became public and people started looking into what data Facebook had on them, only their Android calls and messages had been collected. Why not the iPhone? Because Apple’s walled-garden philosophy makes it much harder, practically impossible, for a user to inadvertently give consent to privacy-eroding apps like Facebook’s Messenger to dig into their devices. Your data is simply better protected on iOS, and even though Android has taken significant steps forward in making app permissions more granular and specific, it’s still comparatively easy to mislead users about what data an app is obtaining and for what purposes.

Android hardware development is chaotic and unreliable. For many, the blistering, sometimes chaotic pace of change in Android devices is part of the ecosystem’s charm. It’s entertaining to watch companies try all sorts of zany and unlikely designs, with only the best of them surviving more than a few months. But the downside of all this speed is lack of attention being paid to small details and long-term sustainability.

LG made a huge promotional push two years ago around its modular G5 flagship, which was meant to usher in a new accessory ecosystem and elevate the flexibility of LG Android devices to new heights. Within six months, that modular project was abandoned, leaving anyone who bought modular LG accessories — on the expectation of multigenerational support — high and dry. And speaking of dryness, Sony recently got itself in trouble for overpromising by calling its Xperia phones “waterproof.”

Samsung’s Galaxy Note 7 is the best and starkest example of the dire consequences that can result from a hurried and excessively ambitious hardware development cycle. The Note 7 had a fatal battery flaw that led many people’s shiny new Samsung smartphones to spontaneously catch fire. Compare that to the iPhone’s pace of usually incremental changes, implemented at predictable intervals and with excruciating fastidiousness.


Besides pledging to deliver OS updates that never come, claiming to have delivered security updates that never arrived, and taking liberties with your personal data, Android OEMs also have a tendency to exaggerate what their phones can actually do. They don't collaborate on much, so in spite of pouring great effort into developing their Android software experiences, they also feed the old, steadfast complaint of a fragmented ecosystem.

The problem of trust with Android, much like the problem of trust in Facebook, is grounded in reality. It doesn't matter that not all Android device makers engage in shady privacy invasion or overreaching marketing claims. The perception, like the Android brand, is collective.

https://www.theverge.com/2018/4/13/17233122/android-software-patch-trust-problem


Suing to See the Feds’ Encrypted Messages? Good Luck

The recent rise of end-to-end encrypted messaging apps has given billions of people access to strong surveillance protections. But as one federal watchdog group may soon discover, it also creates a transparency conundrum: Delete the conversation from those two ends, and there may be no record left.

The conservative group Judicial Watch is suing the Environmental Protection Agency under the Freedom of Information Act, seeking to compel the EPA to hand over any employee communications sent via Signal, the encrypted messaging and calling app. In its public statement about the lawsuit, Judicial Watch points to reports that EPA staffers have used Signal to communicate secretly, in the face of an adversarial Trump administration.

But encryption and forensics experts say Judicial Watch may have picked a tough fight. Delete Signal's texts, or the app itself, and virtually no trace of the conversation remains. “The messages are pretty much gone,” says Johns Hopkins cryptographer Matthew Green, who has closely followed the development of secure messaging tools. “You can’t prove something was there when there’s nothing there.”

End-to-Dead-End

Signal, like other end-to-end encryption apps, protects messages such that only the people participating in a conversation can read them. No outside observer — not even the Signal server that the messages route through — can sneak a look. Delete the messages from the devices of the two Signal communicants, and no other unencrypted copy exists.
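
To make that property concrete, here is a minimal sketch of the end-to-end idea in Python using the PyNaCl library. This is illustrative only, not Signal's actual protocol (which adds a double-ratchet scheme for forward secrecy), but the core point is the same: only the holders of the private keys can read the message, so a relay in the middle handles nothing but ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: Signal's real protocol layers ratcheting on top.
from nacl.public import PrivateKey, Box

# Each participant generates a keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_private, bob_private.public_key).encrypt(b"meet at noon")

# A relay server in the middle only ever sees `ciphertext`: random-looking
# bytes it cannot decrypt, since it holds neither private key.

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_private, alice_private.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Delete the plaintext on both phones and the ciphertext alone is useless to anyone, which is exactly the transparency conundrum the lawsuit runs into.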

In fact, Signal's own server doesn't keep a record of even the encrypted versions of those communications. Last October, Signal's developers at the non-profit Open Whisper Systems revealed that a grand jury subpoena had yielded practically no useful data. “The only information we can produce in response to a request like this is the date and time a user registered with Signal and the last date of a user’s connectivity to the Signal service,” Open Whisper Systems wrote at the time. (That’s the last time they opened the app, not the last time they sent or received a message.)

Even seizing and examining the phones of EPA employees likely won't help if users have deleted their messages or the full app, Green says. They could even do so on autopilot: six months ago, Signal added a Snapchat-like feature that allows the automated deletion of a conversation from both users' phones after a certain amount of time. Forensic analyst Jonathan Zdziarski, who now works as an Apple security engineer, wrote in a blog post last year that after Signal messages are deleted, the app “leaves virtually nothing, so there’s nothing to worry about. No messy cleanup.” (Open Whisper Systems declined to comment on the Judicial Watch FOIA request, or how exactly it deletes messages.)
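
The disappearing-messages mechanic itself is simple to sketch. The toy store below (hypothetical names, not Signal's implementation, which coordinates deletion across every device in a conversation) attaches a time-to-live to each message and purges anything past its deadline whenever the store is read:

```python
import time

class EphemeralStore:
    """Toy local message store with per-message time-to-live.

    Illustrative only; a real disappearing-messages client also scrubs
    attachments and metadata, and triggers deletion on all devices.
    """

    def __init__(self):
        self._messages = []  # list of (expiry_timestamp, text)

    def add(self, text, ttl_seconds):
        self._messages.append((time.time() + ttl_seconds, text))

    def read(self):
        now = time.time()
        # Purge expired messages before returning anything.
        self._messages = [(exp, t) for exp, t in self._messages if exp > now]
        return [t for _, t in self._messages]

store = EphemeralStore()
store.add("this will vanish", ttl_seconds=1.0)
print(store.read())   # ['this will vanish']
time.sleep(1.1)
print(store.read())   # [] -- nothing left to subpoena
```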

Still, despite its best sterilization efforts, even Signal might leave some forensic trace of deleted messages on phones, says Green. And other less-secure ephemeral messaging apps like Confide, which has also become popular among government staffers, likely leave more fingerprints behind. But Green argues that recovering deleted messages from even sloppier apps would take deeper digging than FOIA requests typically compel—so long as users are careful to delete messages on both sides of the conversation and any cloud backups. “We’re talking about expensive, detailed forensic analysis,” says Green. “It’s a lot more work than you’d expect from someone carrying out FOIA requests.”

For the Records

Deleting records of government business from government-issued devices is—let’s be clear—illegal. That smartphone scrubbing, says Georgetown Law professor David Vladeck, would blatantly violate the Federal Records Act. “It’s no different from taking records home and burning them,” says Vladeck. “They’re not your records, they’re the federal government’s, and you’re not supposed to do that.”

Judicial Watch, for its part, acknowledges that it may be tough to dig up deleted Signal communications. But another element of its FOIA request asks for any EPA information about whether it has approved Signal for use by agency staffers. “They can’t use these apps to thwart the Federal Records Act just because they don’t like Donald Trump,” says Judicial Watch president Tom Fitton. “This serves also as an educational moment for any government employees, that using the app to conduct government business to ensure the deletion of records is against the law, and against record-keeping policies in almost every agency.”

Fitton hopes the lawsuit will at least compel the EPA to prevent employees from installing Signal or similar apps on government-issued phones. “The agency is obligated to ensure their employees are following the rules so that records subject to FOIA are preserved,” he says. “If they’re not doing that, they could be answerable to the courts.”

Georgetown’s Vladeck says that even evidence employees have used Signal at all should be troubling, and might warrant a deeper investigation. “I would be very concerned if employees were using an app designed to leave no trace. That’s smoke, if not a fire, and it’s deeply problematic,” he says.

But Johns Hopkins’ Green counters that FOIA has never been an all-seeing eye into government agencies. And he points out that sending a Signal message to an EPA colleague isn’t so different from simply walking into their office and closing the door. “These ephemeral communications apps give us a way to have those face-to-face conversations electronically and in a secure way,” says Green. “It’s a way to communicate without being on the record. And people need that.”

https://www.wired.com/2017/04/suing-see-feds-encrypted-messages-good-luck/

The CIA Leak Exposes Tech’s Vulnerable Future

Source: https://www.wired.com/2017/03/cia-leak-exposes-techs-vulnerable-future/


Encryption Is Being Scapegoated To Mask The Failures Of Mass Surveillance

Source: http://techcrunch.com/2015/11/17/the-blame-game/

Well, that took no time at all. Intelligence agencies rolled right into the horror and fury in the immediate wake of the latest co-ordinated terror attacks in the French capital on Friday to launch their latest co-ordinated assault on strong encryption — and on the tech companies creating secure comms services — seeking to scapegoat end-to-end encryption as the enabling layer for extremists to perpetrate mass murder.

There's no doubt they were waiting for just such an ‘opportune moment’ to redouble their attacks on encryption after recent attempts to lobby for encryption-perforating legislation foundered. (A strategy confirmed by a leaked email sent by the intelligence community’s top lawyer, Robert S. Litt, this August — and subsequently obtained by the Washington Post — in which he anticipated that a “very hostile legislative environment… could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement”. Et voilà: Paris…)

Speaking to CBS News over the weekend, in the immediate aftermath of the Paris attacks, former CIA deputy director Michael Morell said: “I think this is going to open an entire new debate about security versus privacy.”

“We, in many respects, have gone blind as a result of the commercialization and the selling of these devices that cannot be accessed either by the manufacturer or, more importantly, by us in law enforcement, even equipped with search warrants and judicial authority,” added New York City police commissioner, William J. Bratton, quoted by the NYT in a lengthy article probing the “possible” role of encrypted messaging apps in the Paris attacks.

Elsewhere the fast-flowing attacks on encrypted tech services have come without a byline — from unnamed European and American officials who say they are “not authorized to speak publicly”. Yet are happy to speak publicly, anonymously.

The NYT published an article on Sunday alleging that attackers had used “encryption technology” to communicate — citing “European officials who had been briefed on the investigation but were not authorized to speak publicly”. (The paper subsequently pulled the article from its website, as noted by InsideSources, although it can still be read via the Internet Archive.)

The irony of government/intelligence agency sources briefing against encryption on condition of anonymity as they seek to undermine the public’s right to privacy would be darkly comic if it weren’t quite so brazen.


Here’s what one such unidentified British intelligence source told Politico: “As members of the general public get preoccupied that the government is spying on them, they have adopted these applications and terrorists have found them tailor-made for their own use.”

It’s a pretty incredible claim when you examine it. This unknown spook mouthpiece is saying terrorists are able to organize acts of mass murder as a direct consequence of the public’s dislike of government mass surveillance. Take even a cursory glance at the history of terrorism and that claim folds in on itself immediately. The highly co-ordinated 9/11 attacks of 2001 required no backdrop of public privacy fears in order to be carried out — and with horrifying, orchestrated effectiveness.

In the same Politico article, an identified source — J.M. Berger, the co-author of a book about ISIS — makes a far more credible claim: “Terrorists use technology improvisationally.”

Of course they do. The co-founder of secure messaging app Telegram, Pavel Durov, made much the same point earlier this fall when asked directly by TechCrunch about ISIS using his app to communicate. “Ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one,” Durov argued. “I still think we’re doing the right thing — protecting our users privacy.”

Bottom line: banning encryption or forcing tech companies to backdoor their communications services has zero chance of stopping terrorists from finding ways to communicate securely. They can and will route around such attempts to infiltrate their comms, as others have detailed at length.

Here's a recap: terrorists can use encryption tools that are freely distributed from countries where your anti-encryption laws have no jurisdiction. Terrorists can (and do) build their own securely encrypted communication tools. Terrorists can switch to newer (or older) technologies to circumvent enforcement laws or enforced perforations. They can use plain old obfuscation to code their communications within noisy digital platforms like the PlayStation 4 network, folding their chatter into general background digital noise (of which there is no shortage). And terrorists can meet in person, using a network of trusted couriers to facilitate those meetings, as Al Qaeda — the terrorist group that perpetrated the highly sophisticated 9/11 attacks at a time when smartphones were far less common and there was no ready supply of easy-to-use end-to-end encrypted messaging apps — is known to have done.

Point is, technology is not a two-lane highway that can be regulated with a couple of neat roadblocks — whatever many politicians appear to think. All such roadblocks will do is catch the law-abiding citizens who rely on digital highways to conduct more and more aspects of their daily lives. And make those law-abiding citizens less safe in multiple ways.

There’s little doubt that the lack of technological expertise in the upper echelons of governments is snowballing into a very ugly problem indeed as technology becomes increasingly sophisticated yet political rhetoric remains grounded in age-old kneejerkery. Of course we can all agree it would be beneficial if we were able to stop terrorists from communicating. But the hard political truth of the digital era is that’s never going to be possible. It really is putting the proverbial finger in the dam. (There are even startups working on encryption that’s futureproofed against quantum computers — and we don’t even have quantum computers yet.)

Another hard political truth is that effective counter terrorism policy requires spending money on physical, on-the-ground resources — putting more agents on the ground, within local communities, where they can gain trust and gather intelligence. (Not to mention having a foreign policy that seeks to promote global stability, rather than generating the kind of regional instability that feeds extremism by waging illegal wars, for instance, or selling arms to regimes known to support the spread of extremist religious ideologies.)

Yet, in the U.K. at least, the opposite is happening — police force budgets are being slashed. Meanwhile domestic spy agencies are now being promised more staff, yet spooks’ time is increasingly taken up with remote analysis of data, rather than on the ground intelligence work. The U.K. government’s draft new surveillance laws aim to cement mass surveillance as the officially sanctioned counter terror modus operandi, and will further increase the noise-to-signal ratio with additional data capture measures, such as mandating that ISPs retain data on the websites every citizen in the country has visited for the past year. Truly the opposite of a targeted intelligence strategy.

The draft Investigatory Powers Bill also has some distinctly ambiguous wording when it comes to encryption — suggesting the U.K. government is still seeking to legislate a general ability that companies be able to decrypt communications. Ergo, to outlaw end-to-end encryption. Yes, we’re back here again. You’d be forgiven for thinking politicians lacked a long-term memory.

Effective encryption might be a politically convenient scapegoat to kick around in the wake of a terror attack — given it can be used to detract attention from big picture geopolitical failures of governments. And from immediate on the ground intelligence failures — whether those are due to poor political direction, or a lack of resources, or bad decision-making/prioritization by overstretched intelligence agency staff. Pointing the finger of blame at technology companies’ use of encryption is a trivial diversion tactic to detract from wider political and intelligence failures with much more complex origins.

(On the intelligence failures point, questions certainly need to be asked, given that French and Belgian intelligence agencies apparently knew about the jihadi backgrounds of perpetrators of the Paris attacks. Yet weren’t, apparently, targeting them closely enough to prevent Saturday’s attack. And all this despite France having hugely draconian counter-terrorism digital surveillance laws…)

But seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.

Mandating vulnerabilities be built into digital communications opens up an even worse prospect: new avenues for terrorists and criminals to exploit. As officials are busy spinning the notion that terrorism is all-but only possible because of the rise of robust encryption, consider this: if the public is deprived of its digital privacy — with terrorism applied as the justification to strip out the robust safeguard of strong encryption — then individuals become more vulnerable to acts of terrorism, given their communications cannot be safeguarded from terrorists. Or criminals. Or fraudsters. Or anyone incentivized by malevolent intent.

If you want to speculate on fearful possibilities, think about terrorists being able to target individuals at will via legally-required-to-be insecure digital services. If you think terror tactics are scary right now, think about terrorists having the potential to single out, track and terminate anyone at will based on whatever twisted justification fits their warped ideology — perhaps after that person expressed views they do not approve of in an online forum.

In a world of guaranteed insecure digital services it’s a far more straightforward matter for a terrorist to hack into communications to obtain the identity of a person they deem a target, and to use other similarly perforated technology services to triangulate and track someone’s location to a place where they can be made the latest victim of a new type of hyper-targeted, mass surveillance-enabled terrorism. Inherently insecure services could also be more easily compromised by terrorists to broadcast their own propaganda, or send out phishing scams, or otherwise divert attention en masse.

The only way to protect against these scenarios is to expand the reach of properly encrypted services. To champion the cause of safeguarding the public’s personal data and privacy, rather than working to undermine it — and undermining the individual freedoms the West claims to be so keen to defend in the process.

Meanwhile, when it comes to counter-terrorism strategy, what's needed is more intelligent targeting, not more mass measures that treat everyone as a potential suspect and deluge security agencies in an endless churn of irrelevant noise. Even the robust end-to-end encryption that's now being briefed against as a ‘terrorist-enabling evil’ by shadowy officials on both sides of the Atlantic can be compromised at the level of an individual device. There’s no guaranteed shortcut to achieve that. Nor should there be — that’s the point. It takes sophisticated, targeted work.

But blanket measures to compromise the security of the many in the hopes of catching out the savvy few are even less likely to succeed on the intelligence front. We have mass surveillance already, and we also have blood on the streets of Paris once again. Encryption is just a convenient scapegoat for wider policy failures of an industrial surveillance complex.

So let's not be taken in by false flags flown by anonymous officials trying to mask bad political decision-making. And let's redouble our efforts to fight bad policy that seeks to entrench a failed ideology of mass surveillance — instead of focusing intelligence resources where they are really needed: homing in on signals, not drowning in noise.