Tag Archives: NSA

Beapy uses the NSA’s DoublePulsar and EternalBlue exploits, together with Mimikatz, to collect and reuse passwords and mine cryptocurrency in the wake of Coinhive’s shutdown

Two years after highly classified exploits built by the National Security Agency were stolen and published, hackers are still using the tools for nefarious purposes.

Security researchers at Symantec say they’ve seen a recent spike in a new malware family, dubbed Beapy, which uses the leaked hacking tools to spread like wildfire across corporate networks, enslaving computers into running mining code that generates cryptocurrency.

Beapy was first spotted in January but rocketed to more than 12,000 unique infections across 732 organizations since March, said Alan Neville, Symantec’s lead researcher on Beapy, in an email to TechCrunch. The malware almost exclusively targets enterprises, whose large fleets of computers can generate sizable sums of money once infected with cryptocurrency-mining malware.

The malware relies on someone in the company opening a malicious email. Once opened, the malware drops the NSA-developed DoublePulsar malware to create a persistent backdoor on the infected computer, and uses the NSA’s EternalBlue exploit to spread laterally throughout the network. These are the same exploits that helped spread the WannaCry ransomware in 2017. Once the computers on the network are backdoored, the Beapy malware is pulled from the hacker’s command and control server to infect each computer with the mining software.
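EternalBlue spreads laterally over SMB (TCP port 445). As a rough illustration of the internal exposure such a worm abuses, here is a minimal Python sketch that enumerates hosts on a subnet with port 445 reachable; the subnet and timeout values are illustrative assumptions, and this is a defensive inventory aid, not Symantec’s or the attackers’ tooling.

```python
# Minimal sketch: list hosts on an internal subnet that expose SMB (TCP 445),
# the service EternalBlue abuses to spread laterally. Subnet and timeout are
# illustrative assumptions, not values taken from the article.
import ipaddress
import socket

def smb_exposed_hosts(subnet: str, timeout: float = 0.5):
    """Return the addresses in `subnet` that accept a TCP connection on port 445."""
    exposed = []
    for host in ipaddress.ip_network(subnet).hosts():
        try:
            with socket.create_connection((str(host), 445), timeout=timeout):
                exposed.append(str(host))
        except OSError:
            pass  # closed, filtered, or unreachable
    return exposed

if __name__ == "__main__":
    for addr in smb_exposed_hosts("192.168.1.0/28"):
        print(f"{addr} exposes SMB and should be patched against MS17-010")
```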

Not only does Beapy use the NSA’s exploits to spread, it also uses Mimikatz, an open-source credential stealer, to collect and use passwords from infected computers to navigate its way across the network.

According to the researchers, more than 80 percent of Beapy’s infections are in China.

Hijacking computers to mine for cryptocurrency — known as cryptojacking — has been on the decline in recent months, partially following the shutdown of Coinhive, a popular mining tool. Hackers are finding the rewards fluctuate greatly depending on the value of the cryptocurrency. But cryptojacking remains a more stable source of revenue than the hit-and-miss results of ransomware.

In September, some 919,000 computers were vulnerable to EternalBlue attacks — many of which were exploited for mining cryptocurrency. Today, that figure has risen to more than a million.

Typically, cryptojackers inject mining code into websites, which, when opened in a user’s browser, use the visitor’s processing power to generate cryptocurrency. But file-based cryptojacking is far more efficient and faster, allowing the hackers to make more money.

In a single month, file-based mining can generate up to $750,000, Symantec researchers estimate, compared to just $30,000 from a browser-based mining operation.

Cryptojacking might seem like a victimless crime, since no data is stolen and no files are encrypted, but Symantec says the mining campaigns can slow down computers and cause device degradation.

A new cryptocurrency mining malware uses leaked NSA exploits to spread across enterprise networks

Alexa, do you work for the NSA? ;-)

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.

The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.

Photo: Amazon has offices in this Bucharest building. Photographer: Irina Vilcu/Bloomberg

Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.

“We take the security and privacy of our customers’ personal information seriously,” an Amazon spokesman said in an emailed statement. “We only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.

“We have strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system. Employees do not have direct access to information that can identify the person or account as part of this workflow. All information is treated with high confidentiality and we use multi-factor authentication to restrict access, service encryption and audits of our control environment to protect it.”

Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions.

In Alexa’s privacy settings, Amazon gives users the option of disabling the use of their voice recordings for the development of new features. The company says people who opt out of that program might still have their recordings analyzed by hand over the regular course of the review process. A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa reviewers don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.

The Intercept reported earlier this year that employees of Amazon-owned Ring manually identify vehicles and people in videos captured by the company’s doorbell cameras, an effort to better train the software to do that work itself.

“You don’t necessarily think of another human listening to what you’re telling your smart speaker in the intimacy of your home,” said Florian Schaub, a professor at the University of Michigan who has researched privacy issues related to smart speakers. “I think we’ve been conditioned to the [assumption] that these machines are just doing magic machine learning. But the fact is there is still manual processing involved.”

“Whether that’s a privacy concern or not depends on how cautious Amazon and other companies are in what type of information they have manually annotated, and how they present that information to someone,” he added.

When the Echo debuted in 2014, Amazon’s cylindrical smart speaker quickly popularized the use of voice software in the home. Before long, Alphabet Inc. launched its own version, called Google Home, followed by Apple Inc.’s HomePod. Various companies also sell their own devices in China. Globally, consumers bought 78 million smart speakers last year, according to researcher Canalys. Millions more use voice software to interact with digital assistants on their smartphones.

Alexa software is designed to continuously record snatches of audio, listening for a wake word. That’s “Alexa” by default, but people can change it to “Echo” or “computer.” When the wake word is detected, the light ring at the top of the Echo turns blue, indicating the device is recording and beaming a command to Amazon servers.
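As a rough sketch of that always-listening pattern (not Amazon’s actual implementation), the loop below keeps a short rolling buffer of audio and only starts streaming once a wake word is detected; read_audio_frame, detect_wake_word and send_to_cloud are hypothetical stand-ins, not Amazon APIs.

```python
# Illustrative sketch of the wake-word pattern described above: keep a short
# rolling buffer of audio, and only send audio upstream once a wake word is
# detected. All callables here are hypothetical stand-ins.
from collections import deque

FRAME_SECONDS = 0.5
BUFFER_FRAMES = 4          # roughly 2 seconds of pre-wake-word context

def end_of_command(frame, silence_threshold=0.01):
    # Hypothetical end-pointing: treat a quiet frame as the end of the request.
    return max(frame, default=0.0) < silence_threshold

def listen_loop(read_audio_frame, detect_wake_word, send_to_cloud):
    rolling = deque(maxlen=BUFFER_FRAMES)
    recording = False
    captured = []
    while True:
        frame = read_audio_frame(FRAME_SECONDS)
        if not recording:
            rolling.append(frame)
            if detect_wake_word(rolling):
                recording = True            # the light ring turns blue
                captured = list(rolling)    # keep a little context before the wake word
        else:
            captured.append(frame)
            if end_of_command(frame):
                send_to_cloud(captured)     # command is beamed to the servers
                recording = False
                rolling.clear()
```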

Photo: An Echo smart speaker inside an Amazon 4-star store in Berkeley, California. Photographer: Cayce Clifford/Bloomberg

Most modern speech-recognition systems rely on neural networks patterned on the human brain. The software learns as it goes, by spotting patterns amid vast amounts of data. The algorithms powering the Echo and other smart speakers use models of probability to make educated guesses. If someone asks Alexa if there’s a Greek place nearby, the algorithms know the user is probably looking for a restaurant, not a church or community center.
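To make the “educated guesses” idea concrete, here is a toy Python example that ranks hand-made interpretation scores for the “Greek place nearby” request; the numbers are invented for illustration and do not come from any real model.

```python
# Toy illustration of probability-based interpretation: given hand-made
# likelihoods (all numbers invented), the assistant picks the most probable
# intent for the phrase "is there a Greek place nearby?".
def most_likely_intent(scores: dict) -> str:
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    for intent, score in ranked:
        print(f"{intent:<16} p = {score / total:.2f}")
    return ranked[0][0]

scores = {"restaurant": 0.82, "church": 0.11, "community centre": 0.07}
print("chosen intent:", most_likely_intent(scores))
```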

But sometimes Alexa gets it wrong—especially when grappling with new slang, regional colloquialisms or languages other than English. In French, avec sa, “with his” or “with her,” can confuse the software into thinking someone is using the Alexa wake word. Hecho, Spanish for a fact or deed, is sometimes misinterpreted as Echo. And so on. That’s why Amazon recruited human helpers to fill in the gaps missed by the algorithms.

Apple’s Siri also has human helpers, who work to gauge whether the digital assistant’s interpretation of requests lines up with what the person said. The recordings they review lack personally identifiable information and are stored for six months tied to a random identifier, according to an Apple security white paper. After that, the data is stripped of its random identification information but may be stored for longer periods to improve Siri’s voice recognition.

At Google, some reviewers can access some audio snippets from its Assistant to help train and improve the product, but it’s not associated with any personally identifiable information and the audio is distorted, the company says.

A recent Amazon job posting, seeking a quality assurance manager for Alexa Data Services in Bucharest, describes the role humans play: “Every day she [Alexa] listens to thousands of people talking to her about different topics and different languages, and she needs our help to make sense of it all.” The want ad continues: “This is big data handling like you’ve never seen it. We’re creating, labeling, curating and analyzing vast quantities of speech on a daily basis.”

Amazon’s review process for speech data begins when Alexa pulls a random, small sampling of customer voice recordings and sends the audio files to the far-flung employees and contractors, according to a person familiar with the program’s design.

Photo: The Echo Spot. Photographer: Daniel Berman/Bloomberg

Some Alexa reviewers are tasked with transcribing users’ commands, comparing the recordings to Alexa’s automated transcript, say, or annotating the interaction between user and machine. What did the person ask? Did Alexa provide an effective response?

Others note everything the speaker picks up, including background conversations—even when children are speaking. Sometimes listeners hear users discussing private details such as names or bank details; in such cases, they’re supposed to tick a dialog box denoting “critical data.” They then move on to the next audio file.

According to Amazon’s website, no audio is stored unless Echo detects the wake word or is activated by pressing a button. But sometimes Alexa appears to begin recording without any prompt at all, and the audio files start with a blaring television or unintelligible noise. Whether or not the activation is mistaken, the reviewers are required to transcribe it. One of the people said the auditors each transcribe as many as 100 recordings a day when Alexa receives no wake command or is triggered by accident.

In homes around the world, Echo owners frequently speculate about who might be listening, according to two of the reviewers. “Do you work for the NSA?” they ask. “Alexa, is someone else listening to us?”

— With assistance by Gerrit De Vynck, Mark Gurman, and Irina Vilcu

Source: https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio

Let’s Get Rid of the “Nothing to Hide, Nothing to Fear” Mentality

With Zuckerberg testifying to the US Congress over Facebook’s data privacy and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.

Suffice to say that neither of the above compare to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.

In 2013, Edward Snowden blew the whistle on the systematic US spy program he had helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the detail of PRISM, an NSA program that collects internet communications data from US internet companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, in cases where access via the companies wasn’t achievable, the NSA enticed third-party countries to clandestinely tap internet communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of all web activity carried out by anyone using the internet.

But this is just in the US, right? Surely policies like this wouldn’t be implemented in Europe.

Wrong, unfortunately.

GCHQ, the UK’s signals intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at its Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. Many of these techniques have been made possible by the 2016 Investigatory Powers Act, which Snowden describes as the most “extreme surveillance in the history of western democracy”.

But despite all this, the age-old refrain, “if you’ve got nothing to hide, you’ve got nothing to fear”, often rings out in debates over privacy.

Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.

In drafting legislation for the Investigatory Powers Act, Theresa May said that such extremes were necessary to ensure “no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar”.

When levelled against the fear of terrorism and death, it’s easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted; privacy should and always will be a social norm, despite what Mark Zuckerberg said in 2010. Although I’m sure he may have a different answer now.

So you just read emails and look at cat memes online; why would you care about privacy?

In the same way that we’re able to close our living room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It’s a well-rehearsed idea that nowadays we’re more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we’re being watched, we alter our behaviour in line with what’s expected.

As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Our thinking aligns with the status quo, and we lose the boundless ability of the internet to help us explore and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.

This draws obvious comparisons with Bentham’s Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: there is a central guard tower surrounded by cells. In the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell’s 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.

In reality, there’s actually a lot more at stake here.

With the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.

Kafka’s The Trial, in which a bureaucracy uses citizens’ information to make decisions about them but denies them the ability to participate in how that information is used, therefore seems a more apt comparison. The issue here is that corporations, and even more so states, have been allowed to comb our data and make decisions that affect us without our consent.

Maybe, as a member of a western democracy, you don’t think this matters. But what if you’re a member of a minority group in an oppressive regime? What if you’re arrested because a computer algorithm can’t separate humour from intent to harm?

On the other hand, maybe you trust the intentions of your government, but how much faith do you have in its ability to keep your data private? The recent hack of the SEC shows that even government systems aren’t safe from attackers. When a business database is breached, perhaps your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you’ve lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control the physical clouds, he who controls the modern cloud will rule the world.

Perhaps you think that even this doesn’t matter; if it allows the government to protect us from those that intend to cause harm, then it’s worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.

Intelligence is the strategic collection of pertinent facts; bulk data collection cannot, therefore, be intelligent. As Bill Binney puts it, “bulk data kills people”, because technicians are so overwhelmed that they can’t isolate what’s useful. Data collection as it stands can only focus on retribution rather than reduction.

Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The nothing-to-hide, nothing-to-fear mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to have in that relationship.

To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.

http://behindthebrowser.space/index.php/2018/04/22/nothing-to-fear-nothing-to-hide/

Macron, May, Merkel – weakening encryption and making messengers (WhatsApp) vulnerable leads to data security catastrophes

In weakening strong encryption by weakening software such as the Android or iOS operating systems (their subroutines, libraries and core components) in order to enable mass surveillance, you, the leaders of Europe, risk the data security of thousands of European companies. Is it worth it?

Even Microsoft is now warning that the government practice of “stockpiling” software vulnerabilities so that they can be used as weapons is a misguided tactic that weakens security for everybody.

“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” the company said Sunday.

Why are you doing this? Hopefully not out of the need to hand over information in order to receive intelligence from the USA in return?

Photo: French President Emmanuel Macron talks with German Chancellor Angela Merkel as US President Donald J. Trump walks by, during the line-up for the group photo at the NATO summit in Brussels, Belgium, 25 May 2017. EPA/Armando Babani

You saw, recognised and understood WannaCry, which affected thousands of companies throughout Europe?

The vulnerability in Windows that WannaCry takes advantage of was discovered by the NSA for its surveillance toolkit. But word got out when a hacker group known as the Shadow Brokers dumped a bunch of leaked NSA information onto the Internet in April. Microsoft, however, had already issued a software update the month before; those that downloaded and installed the patch were protected from WannaCry, but many others lagged behind and became victims.

Securing Driverless Cars From Hackers Is Hard, according to Charlie Miller, former hacker on the NSA’s Tailored Access Operations team

Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them

Two years ago, Charlie Miller and Chris Valasek pulled off a demonstration that shook the auto industry, remotely hacking a Jeep Cherokee via its internet connection to paralyze it on a highway. Since then, the two security researchers have been quietly working for Uber, helping the startup secure its experimental self-driving cars against exactly the sort of attack they proved was possible on a traditional one. Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from hackers is a very difficult problem. It’s time to get serious about solving it.

Last month, Miller left Uber for a position at Chinese competitor Didi, a startup that’s just now beginning its own autonomous ridesharing project. In his first post-Uber interview, Miller talked to WIRED about what he learned in those 19 months at the company—namely that driverless taxis pose a security challenge that goes well beyond even those faced by the rest of the connected car industry.

Miller couldn’t talk about any of the specifics of his research at Uber; he says he moved to Didi in part because the company has allowed him to speak more openly about car hacking. But he warns that before self-driving taxis can become a reality, the vehicles’ architects will need to consider everything from the vast array of automation in driverless cars that can be remotely hijacked, to the possibility that passengers themselves could use their physical access to sabotage an unmanned vehicle.

“Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse.”

At A Computer’s Mercy

In a series of experiments starting in 2013, Miller and Valasek showed that a hacker with either wired or over-the-internet access to a vehicle—including a Toyota Prius, Ford Escape, and a Jeep Cherokee—could disable or slam on a victim’s brakes, turn the steering wheel, or, in some cases, cause unintended acceleration. But to trigger almost all those attacks, Miller and Valasek had to exploit vehicles’ existing automated features. They used the Prius’ collision avoidance system to apply its brakes, and the Jeep’s cruise control feature to accelerate it. To turn the Jeep’s steering wheel, they tricked it into thinking it was parking itself—even if it was moving at 80 miles per hour.

Their car-hacking hijinks, in other words, were limited to the few functions a vehicle’s computer controls. In a driverless car, the computer controls everything. “In an autonomous vehicle, the computer can apply the brakes and turn the steering wheel any amount, at any speed,” Miller says. “The computers are even more in charge.”

An alert driver could also override many of the attacks Miller and Valasek demonstrated on traditional cars: Tap the brakes and that cruise control acceleration immediately ceases. Even the steering wheel attacks could be easily overcome if the driver wrests control of the wheel. When the passenger isn’t in the driver’s seat—or there is no steering wheel or brake pedal—no such manual override exists. “No matter what we did in the past, the human had a chance to control the car. But if you’re sitting in the backseat, that’s a whole different story,” says Miller. “You’re totally at the mercy of the vehicle.”

Hackers Take Rides, Too

A driverless car that’s used as a taxi, Miller points out, poses even more potential problems. In that situation, every passenger has to be considered a potential threat. Security researchers have shown that merely plugging an internet-connected gadget into a car’s OBD2 port—a ubiquitous outlet under its dashboard—can offer a remote attacker an entry point into the vehicle’s most sensitive systems. (Researchers at the University of California at San Diego showed in 2015 that they could take control of a Corvette’s brakes via a common OBD2 dongle distributed by insurance companies—including one that partnered with Uber.)

“There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network.”

Permanently plugging that port is illegal under federal regulations, Miller says. He suggests ridesharing companies that use driverless cars could cover it with tamper-evident tape. But even then, they might only be able to narrow down which passenger could have sabotaged a vehicle to a certain day or week. A more comprehensive fix would mean securing the vehicle’s software so that not even a malicious hacker with full physical access to its network would be able to hack it—a challenge Miller says only a few highly locked-down products like an iPhone or Chromebook can pass.

“It’s definitely a hard problem,” he says.

Deep Fixes

Miller argues that solving autonomous vehicles’ security flaws will require some fundamental changes to their security architecture. Their internet-connected computers, for instance, will need “codesigning,” a measure that ensures they only run trusted code signed with a certain cryptographic key. Today only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better internal segmentation and authentication, so that critical components don’t blindly follow commands from the OBD2 port. They need intrusion detection systems that can alert the driver—or rider—when something anomalous happens on the cars’ internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface,” any services that might accept malicious data sent over the internet.
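As a minimal sketch of the code-signing measure Miller describes (not any carmaker’s actual scheme), the snippet below signs a firmware image with an Ed25519 key and refuses to install anything whose signature does not verify; it uses the widely available Python cryptography package, and the key handling is deliberately simplified for illustration.

```python
# Minimal sketch of the code-signing idea: an ECU only runs a firmware image
# whose signature verifies against a trusted public key baked into the vehicle.
# Key management here is simplified and purely illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_firmware(private_key: Ed25519PrivateKey, image: bytes) -> bytes:
    return private_key.sign(image)

def verify_and_install(public_key: Ed25519PublicKey, image: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, image)
    except InvalidSignature:
        return False          # refuse to flash untrusted code
    # install_firmware(image) would go here
    return True

# Demo with a throwaway key pair (in a real car the private key never leaves the vendor).
vendor_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...trusted build"
sig = sign_firmware(vendor_key, firmware)
assert verify_and_install(vendor_key.public_key(), firmware, sig)
assert not verify_and_install(vendor_key.public_key(), firmware + b"malicious patch", sig)
```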

Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”

That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.

Car hacking, fortunately, remains largely a concern for the future: No car has yet been digitally hijacked in a documented, malicious case. But that means now’s the time to work on the problem, Miller says, before cars become more automated and make the problem far more real. “We have some time to build up these security measures and get them right before something happens,” says Miller. “And that’s why I’m doing this.”

https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/

WhatsApp spies on your encrypted messages

Exclusive: Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech’ and warn it could be exploited by government agencies

Research shows that WhatsApp can read messages due to the way the company has implemented its end-to-end encryption protocol. Photograph: Ritchie B Tongo/EPA

A security backdoor that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.

Facebook claims that no one can intercept WhatsApp messages, not even the company and its staff, ensuring privacy for its billion-plus users. But new research shows that the company could in fact read messages due to the way WhatsApp has implemented its end-to-end encryption protocol.

Privacy campaigners said the vulnerability is a “huge threat to freedom of speech” and warned it can be used by government agencies to snoop on users who believe their messages to be secure. WhatsApp has made privacy and security a primary selling point, and has become a go-to communications tool for activists, dissidents and diplomats.

WhatsApp’s end-to-end encryption relies on the generation of unique security keys, using the acclaimed Signal protocol, developed by Open Whisper Systems, that are traded and verified between users to guarantee communications are secure and cannot be intercepted by a middleman. However, WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been resent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users’ messages.

The security backdoor was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys.”

The backdoor is not inherent to the Signal protocol. Open Whisper Systems’ messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp’s implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.
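A simplified model of the difference described above, with invented message and key names rather than real protocol code, might look like this:

```python
# Simplified model of the behaviour described in the article (not real protocol
# code): when a recipient's key changes while a message is still undelivered,
# a Signal-style client blocks and warns the sender first, while the reported
# WhatsApp behaviour re-encrypts and resends automatically.
from dataclasses import dataclass

@dataclass
class PendingMessage:
    plaintext: str
    encrypted_for_key: str
    delivered: bool = False

def on_recipient_key_change(pending: PendingMessage, new_key: str, policy: str) -> str:
    if pending.delivered:
        return "nothing to do"
    if policy == "signal":
        return "delivery fails; sender must verify the new key before resending"
    if policy == "whatsapp-as-reported":
        pending.encrypted_for_key = new_key      # silently re-encrypt
        return "message re-encrypted with the new key and resent automatically"
    raise ValueError(policy)

msg = PendingMessage("meet at 8", encrypted_for_key="key-A")
print(on_recipient_key_change(msg, "key-B", "signal"))
print(on_recipient_key_change(msg, "key-B", "whatsapp-as-reported"))
```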

Boelter reported the backdoor vulnerability to Facebook in April 2016, but was told that Facebook was aware of the issue, that it was “expected behaviour” and wasn’t being actively worked on. The Guardian has verified the backdoor still exists.


Steffen Tor Jensen, head of information security and digital counter-surveillance at the European-Bahraini Organisation for Human Rights, verified Boelter’s findings. He said: “WhatsApp can effectively continue flipping the security keys when devices are offline and re-sending the message, without letting users know of the change till after it has been made, providing an extremely insecure platform.”

Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”

The vulnerability calls into question the privacy of messages sent across the service, which is used around the world, including by people living in oppressive regimes.

Professor Kirstie Ball, co-director and founder of the Centre for Research into Information, Surveillance and Privacy, called the existence of a backdoor within WhatsApp’s encryption “a gold mine for security agencies” and “a huge betrayal of user trust”. She added: “It is a huge threat to freedom of speech, for it to be able to look at what you’re saying if it wants to. Consumers will say, I’ve got nothing to hide, but you don’t know what information is looked for and what connections are being made.”

In the UK, the recently passed Investigatory Powers Act allows the government to intercept bulk data of users held by private companies, without suspicion of criminal activity, similar to the activity of the US National Security Agency uncovered by the Snowden revelations. The government also has the power to force companies to “maintain technical capabilities” that allow data collection through hacking and interception, and requires companies to remove “electronic protection” from data. Intentional or not, WhatsApp’s backdoor to the end-to-end encryption could be used in such a way to facilitate government interception.

Jim Killock, executive director of Open Rights Group, said: “If companies claim to offer end-to-end encryption, they should come clean if it is found to be compromised – whether through deliberately installed backdoors or security flaws. In the UK, the Investigatory Powers Act means that technical capability notices could be used to compel companies to introduce flaws – which could leave people’s data vulnerable.”

A WhatsApp spokesperson told the Guardian: “Over 1 billion people use WhatsApp today because it is simple, fast, reliable and secure. At WhatsApp, we’ve always believed that people’s conversations should be secure and private. Last year, we gave all our users a better level of security by making every message, photo, video, file and call end-to-end encrypted by default. As we introduce features like end-to-end encryption, we focus on keeping the product simple and take into consideration how it’s used every day around the world.

“In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.”

Asked to comment specifically on whether Facebook/WhatsApp had accessed users’ messages and whether it had done so at the request of government agencies or other third parties, the company directed the Guardian to its site that details aggregate data on government requests by country.

Concerns over the privacy of WhatsApp users have been repeatedly highlighted since Facebook acquired the company for $22bn in 2014. In August 2016, Facebook announced a change to the privacy policy governing WhatsApp that allowed the social network to merge data from WhatsApp users and Facebook, including phone numbers and app usage, for advertising and development purposes.

Facebook halted the use of the shared user data for advertising purposes in November after pressure from the pan-European data protection group, the Article 29 Working Party, in October. The European commission then filed charges against Facebook for providing “misleading” information in the run-up to the social network’s acquisition of messaging service WhatsApp, following its data-sharing change.

https://www.theguardian.com/technology/2017/jan/13/whatsapp-backdoor-allows-snooping-on-encrypted-messages

Obama gives CIA, FBI, DEA and 13 other agencies warrantless access to raw NSA surveillance data on American citizens


Further Reading:

In its final days, the Obama administration has expanded the power of the National Security Agency to share globally intercepted personal communications with the government’s 16 other intelligence agencies before applying privacy protections.

The change means that far more officials will be searching through raw data.

Previously, the N.S.A. filtered information before sharing intercepted communications with another agency, like the C.I.A. or the intelligence branches of the F.B.I. and the Drug Enforcement Administration. The N.S.A.’s analysts passed on only information they deemed pertinent, screening out the identities of innocent people and irrelevant personal information.

Now, other intelligence agencies will be able to search directly through raw repositories of communications intercepted by the N.S.A. and then apply such rules for “minimizing” privacy intrusions.

“Rather than dramatically expanding government access to so much personal data, we need much stronger rules to protect the privacy of Americans,” Mr. Toomey said. “Seventeen different government agencies shouldn’t be rooting through Americans’ emails with family members, friends and colleagues, all without ever obtaining a warrant.”

“This development is very troubling for Americans’ privacy,” said John Napier Tye, a former state department official turned surveillance whistleblower. “Most people don’t realize this, but even our purely domestic email and text messages are often stored on servers outside the United States. And the NSA has written extremely permissive rules for itself to collect data outside US borders.

“So in operations overseas, the NSA is scooping up a lot of purely domestic communications. And now, with these new rules, many different federal agencies can search and read the domestic communications of normal Americans, without any warrant or oversight from Congress or the courts.”

They mean that NSA officials are no longer required to filter out information about innocent people whose identities have been scooped up before passing the intercepted communications to officials from other agencies, who will now be able to search through raw caches of data.

“This raises serious concerns that agencies that have responsibilities such as prosecuting domestic crimes, regulating our financial policy, and enforcing our immigration laws will now have access to a wealth of personal information that could be misused,” said Singh Guliani. “Congress needs to take action to regulate and provide oversight over these activities.”

https://www.theguardian.com/world/2017/jan/12/obama-us-intelligence-greater-access-warrantless-data-foreign-targets

Privacy advocates’ concerns center around loopholes in the rules that allow agencies like the FBI and DEA to search the NSA’s collected data for purposes such as investigating an “agent of a foreign power.” Any evidence of illegal behavior that a searcher stumbles on can be used in a criminal prosecution. That means the rule change, according to Cardozo, introduces new possibilities for law enforcement agencies like the DEA and FBI to carry out what’s known as “parallel construction.” That maneuver involves secretly using the NSA’s intelligence to identify or track a criminal suspect, and then fabricating a plausible trail of evidence to present to a court as an after-the-fact explanation of the investigation’s origin. The technique was the subject of an ACLU lawsuit against the Office of the Director of National Intelligence in 2012, and resulted in the Justice Department admitting to repeatedly using the technique to hide the NSA’s involvement in criminal investigations.

“It used to be that if NSA itself saw the evidence of a crime, they could give a tip to the FBI, and the FBI would engage in parallel construction,” says Cardozo. “Now FBI will be able to get into the raw data themselves and do what they will with it.”

https://www.wired.com/2017/01/just-time-trump-nsa-loosens-privacy-rules/

How the NSA identifies you just by starting your Windows PC

Thanks to the fine research paper found here http://www.icir.org/vern/papers/trackers-pets16.pdf, we know you are easily identified when you simply start your Windows PC and log onto the internet, without requiring any action on your part.

You are identified by either HTTP identifiers or non-HTTP identifiers.

HTTP Identifiers

Application-specific: The first category is identifiers sent by applications other than browsers. For example, Skype sends a user identifier uhash in a URL of the format http://ui.skype.com/ui/2/2.1.0.81/en/getlatestversion?ver=2.1.0.81&uhash= . The parameter uhash is a hash of the user ID, their password, and a salt, and remains constant for a given Skype user [12]. uhash can very well act as an identifier for a user; a monitor who observes the same value from two different clients/networks can infer that it reflects the same user on both. Another example in this category is a Dropbox user_id sent as a URL parameter. We discovered that since the Dropbox application regularly syncs with its server, it sends out this identifier—surprisingly, every minute—without requiring any user action.
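As a sketch of the passive-monitoring idea behind these findings, the snippet below pulls identifier-bearing query parameters out of observed request URLs; the uhash and user_id parameter names come from the paper, while the sample URLs themselves are invented.

```python
# Sketch of the passive-monitoring idea: extract identifier-bearing query
# parameters (e.g. Skype's `uhash`, Dropbox's `user_id`, named in the paper)
# from observed request URLs. The sample URLs below are invented.
from urllib.parse import urlsplit, parse_qs

IDENTIFIER_PARAMS = {"uhash", "user_id"}

def extract_identifiers(url: str) -> dict:
    query = parse_qs(urlsplit(url).query)
    return {k: v[0] for k, v in query.items() if k in IDENTIFIER_PARAMS}

observed = [
    "http://ui.skype.com/ui/2/2.1.0.81/en/getlatestversion?ver=2.1.0.81&uhash=abc123",
    "https://client.dropbox.example/sync?user_id=42&ts=1461110400",
]
for url in observed:
    ids = extract_identifiers(url)
    if ids:
        print(urlsplit(url).hostname, ids)
```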

Mobile devices: Our methodology enabled us to discover that the Apple weather app sends IMEI and IMSI numbers in POST requests to iphone-wu.apple.com. We can recognize these as such, because the parameter name in the context clearly names them as IMEI and IMSI; the value also matches the expected format for these identifiers. Other apps also send a number of device identifiers, such as phone make, advertising ID, SHA1 hashes of serial number, MAC address, and UDID (unique device identifier) across various domains, such as s.amazon-adsystem.com, jupiter.apads.com and ads.mp.mydas.mobi. The iOS and Android mobile SDKs provide access to these identifiers.


NON-HTTP Identifiers

Device identifiers sent by iOS/OSX: We found instances of device identifiers sent on port 5223. Apple devices use this port to maintain a persistent connection with Apple’s Push Notification (APN) service, through which they receive push notifications for installed apps.

An app-provider sends to an APN server the push notification along with the device token of the recipient device. The APN server in turn forwards the notification to the device, identifying it via the device token [2]. This device token is an opaque device identifier, which the APN service gives to the device when it first connects. The device sends this token (in clear text) to the APN server on every connection, and to each app-provider upon app installation. This identifier enabled us to identify 68 clients in our dataset as Apple devices. The devices sent their device token to a total of 407 IP addresses in two networks belonging to Apple (17.172.232/24, 17.149/16).
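A rough sketch of the corresponding traffic-analysis step: flag flows to TCP port 5223 toward the Apple netblocks named above as likely APN connections from Apple devices. The flow records in this example are invented.

```python
# Sketch of the traffic-analysis step described above: flag flows to TCP 5223
# toward the Apple netblocks named in the paper (17.172.232.0/24, 17.149.0.0/16)
# as likely Apple Push Notification connections. The flow records are invented.
from ipaddress import ip_address, ip_network

APN_PORT = 5223
APPLE_NETS = [ip_network("17.172.232.0/24"), ip_network("17.149.0.0/16")]

def looks_like_apn(dst_ip: str, dst_port: int) -> bool:
    return dst_port == APN_PORT and any(ip_address(dst_ip) in net for net in APPLE_NETS)

flows = [
    ("10.0.0.7", "17.149.12.34", 5223),   # likely an Apple device talking to APN
    ("10.0.0.9", "93.184.216.34", 443),   # ordinary HTTPS traffic
]
for src, dst, port in flows:
    if looks_like_apn(dst, port):
        print(f"{src} is probably an Apple device (APN connection to {dst})")
```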


The work http://www.icir.org/vern/papers/trackers-pets16.pdf was supported by the Intel Science and Technology Center for Secure Computing, the U.S. Army Research Office and by the National Science Foundation.

Copy of Publication here: trackers-pets16

Encryption Is Being Scapegoated To Mask The Failures Of Mass Surveillance

Source: http://techcrunch.com/2015/11/17/the-blame-game/

Well that took no time at all. Intelligence agencies rolled right into the horror and fury in the immediate wake of the latest co-ordinated terror attacks in the French capital on Friday, to launch their latest co-ordinated assault on strong encryption — and on the tech companies creating secure comms services — seeking to scapegoat end-to-end encryption as the enabling layer for extremists to perpetrate mass murder.

There’s no doubt they were waiting for just such an ‘opportune moment’ to redouble their attacks on encryption after recent attempts to lobby for encryption-perforating legislation foundered. (A strategy confirmed by a leaked email sent by the intelligence community’s top lawyer, Robert S. Litt, this August — and subsequently obtained by the Washington Post — in which he anticipated that a “very hostile legislative environment… could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement”. Et voila Paris… )

Speaking to CBS News that weekend, in the immediate aftermath of the Paris attacks, former CIA deputy director Michael Morell said: “I think this is going to open an entire new debate about security versus privacy.”

“We, in many respects, have gone blind as a result of the commercialization and the selling of these devices that cannot be accessed either by the manufacturer or, more importantly, by us in law enforcement, even equipped with search warrants and judicial authority,” added New York City police commissioner, William J. Bratton, quoted by the NYT in a lengthy article probing the “possible” role of encrypted messaging apps in the Paris attacks.

Elsewhere the fast-flowing attacks on encrypted tech services have come without a byline — from unnamed European and American officials who say they are “not authorized to speak publicly”. Yet are happy to speak publicly, anonymously.

The NYT published an article on Sunday alleging that attackers had used “encryption technology” to communicate — citing “European officials who had been briefed on the investigation but were not authorized to speak publicly”. (The paper subsequently pulled the article from its website, as noted by InsideSources, although it can still be read via the Internet Archive.)

The irony of government/intelligence agency sources briefing against encryption on condition of anonymity as they seek to undermine the public’s right to privacy would be darkly comic if it weren’t quite so brazen.

Seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.

Here’s what one such unidentified British intelligence source told Politico: “As members of the general public get preoccupied that the government is spying on them, they have adopted these applications and terrorists have found them tailor-made for their own use.”

It’s a pretty incredible claim when you examine it. This unknown spook mouthpiece is saying terrorists are able to organize acts of mass murder as a direct consequence of the public’s dislike of government mass surveillance. Take even a cursory glance at the history of terrorism and that claim folds in on itself immediately. The highly co-ordinated 9/11 attacks of 2001 required no backdrop of public privacy fears in order to be carried out — and with horrifying, orchestrated effectiveness.

In the same Politico article, an identified source — J.M. Berger, the co-author of a book about ISIS — makes a far more credible claim: “Terrorists use technology improvisationally.”

Of course they do. The co-founder of secure messaging app Telegram, Pavel Durov, made much the same point earlier this fall when asked directly by TechCrunch about ISIS using his app to communicate. “Ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one,” Durov argued. “I still think we’re doing the right thing — protecting our users privacy.”

Bottom line: banning encryption or enforcing tech companies to backdoor communications services has zero chance of being effective at stopping terrorists finding ways to communicate securely. They can and will route around such attempts to infiltrate their comms, as others have detailed at length.

Here’s a recap: terrorists can use encryption tools that are freely distributed from countries where your anti-encryption laws have no jurisdiction. Terrorists can (and do) build their own securely encrypted communication tools. Terrorists can switch to newer (or older) technologies to circumvent enforcement laws or enforced perforations. They can use plain old obfuscation to code their communications within noisy digital platforms like the Playstation 4 network, folding their chatter into general background digital noise (of which there is no shortage). And terrorists can meet in person, using a network of trusted couriers to facilitate these meetings, as Al Qaeda — the terrorist group that perpetrated the highly sophisticated 9/11 attacks at a time when smartphones were far less common, nor was there a ready supply of easy-to-use end-to-end encrypted messaging apps — is known to have done.

Point is, technology is not a two-lane highway that can be regulated with a couple of neat roadblocks — whatever many politicians appear to think. All such roadblocks will do is catch the law-abiding citizens who rely on digital highways to conduct more and more aspects of their daily lives. And make those law-abiding citizens less safe in multiple ways.

There’s little doubt that the lack of technological expertise in the upper echelons of governments is snowballing into a very ugly problem indeed as technology becomes increasingly sophisticated yet political rhetoric remains grounded in age-old kneejerkery. Of course we can all agree it would be beneficial if we were able to stop terrorists from communicating. But the hard political truth of the digital era is that’s never going to be possible. It really is putting the proverbial finger in the dam. (There are even startups working on encryption that’s futureproofed against quantum computers — and we don’t even have quantum computers yet.)

Another hard political truth is that effective counter terrorism policy requires spending money on physical, on-the-ground resources — putting more agents on the ground, within local communities, where they can gain trust and gather intelligence. (Not to mention having a foreign policy that seeks to promote global stability, rather than generating the kind of regional instability that feeds extremism by waging illegal wars, for instance, or selling arms to regimes known to support the spread of extremist religious ideologies.)

Yet, in the U.K. at least, the opposite is happening — police force budgets are being slashed. Meanwhile domestic spy agencies are now being promised more staff, yet spooks’ time is increasingly taken up with remote analysis of data, rather than on the ground intelligence work. The U.K. government’s draft new surveillance laws aim to cement mass surveillance as the officially sanctioned counter terror modus operandi, and will further increase the noise-to-signal ratio with additional data capture measures, such as mandating that ISPs retain data on the websites every citizen in the country has visited for the past year. Truly the opposite of a targeted intelligence strategy.

The draft Investigatory Powers Bill also has some distinctly ambiguous wording when it comes to encryption — suggesting the U.K. government is still seeking to legislate a general ability that companies be able to decrypt communications. Ergo, to outlaw end-to-end encryption. Yes, we’re back here again. You’d be forgiven for thinking politicians lacked a long-term memory.

Effective encryption might be a politically convenient scapegoat to kick around in the wake of a terror attack — given it can be used to detract attention from big picture geopolitical failures of governments. And from immediate on the ground intelligence failures — whether those are due to poor political direction, or a lack of resources, or bad decision-making/prioritization by overstretched intelligence agency staff. Pointing the finger of blame at technology companies’ use of encryption is a trivial diversion tactic to detract from wider political and intelligence failures with much more complex origins.

(On the intelligence failures point, questions certainly need to be asked, given that French and Belgian intelligence agencies apparently knew about the jihadi backgrounds of perpetrators of the Paris attacks. Yet weren’t, apparently, targeting them closely enough to prevent Saturday’s attack. And all this despite France having hugely draconian counter-terrorism digital surveillance laws…)

But seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.

Mandating vulnerabilities be built into digital communications opens up an even worse prospect: new avenues for terrorists and criminals to exploit. As officials are busy spinning the notion that terrorism is all-but only possible because of the rise of robust encryption, consider this: if the public is deprived of its digital privacy — with terrorism applied as the justification to strip out the robust safeguard of strong encryption — then individuals become more vulnerable to acts of terrorism, given their communications cannot be safeguarded from terrorists. Or criminals. Or fraudsters. Or anyone incentivized by malevolent intent.

If you want to speculate on fearful possibilities, think about terrorists being able to target individuals at will via legally-required-to-be insecure digital services. If you think terror tactics are scary right now, think about terrorists having the potential to single out, track and terminate anyone at will based on whatever twisted justification fits their warped ideology — perhaps after that person expressed views they do not approve of in an online forum.

In a world of guaranteed insecure digital services it’s a far more straightforward matter for a terrorist to hack into communications to obtain the identity of a person they deem a target, and to use other similarly perforated technology services to triangulate and track someone’s location to a place where they can be made the latest victim of a new type of hyper-targeted, mass surveillance-enabled terrorism. Inherently insecure services could also be more easily compromised by terrorists to broadcast their own propaganda, or send out phishing scams, or otherwise divert attention en masse.

The only way to protect against these scenarios is to expand the reach of properly encrypted services. To champion the cause of safeguarding the public’s personal data and privacy, rather than working to undermine it — and undermining the individual freedoms the West claims to be so keen to defend in the process.

While, when it comes to counter terrorism strategy, what’s needed is more intelligent targeting, not more mass measures that treat everyone as a potential suspect and deluge security agencies in an endless churn of irrelevant noise. Even the robust end-to-end encryption that’s now being briefed against as a ‘terrorist-enabling evil’ by shadowy officials on both sides of the Atlantic can be compromised at the level of an individual device. There’s no guaranteed shortcut to achieve that. Nor should there be — that’s the point. It takes sophisticated, targeted work.

But blanket measures to compromise the security of the many in the hopes of catching out the savvy few are even less likely to succeed on the intelligence front. We have mass surveillance already, and we also have blood on the streets of Paris once again. Encryption is just a convenient scapegoat for wider policy failures of an industrial surveillance complex.

So let’s not be taken in by false flags flown by anonymous officials trying to mask bad political decision-making. And let’s redouble our efforts to fight bad policy which seeks to entrench a failed ideology of mass surveillance — instead of focusing intelligence resources where they are really needed; honing in on signals, not drowned out by noise.