Schlagwort-Archive: NSA

Beapy uses the NSA’s DoublePulsar and EternalBlue exploits plus Mimikatz to collect and use passwords to mine cryptocurrency in Coinhive’s wake

Two years after highly classified exploits built by the National Security Agency were stolen and published, hackers are still using the tools for nefarious reasons.

Security researchers at Symantec say they’ve seen a recent spike in a new malware, dubbed Beapy, which uses the leaked hacking tools to spread like wildfire across corporate networks to enslave computers into running mining code to generate cryptocurrency.

Beapy was first spotted in January but rocketed to more than 12,000 unique infections across 732 organizations since March, said Alan Neville, Symantec’s lead researcher on Beapy, in an email to TechCrunch. The malware almost exclusively targets enterprises, whose large fleets of computers can generate sizable sums of money when infected with cryptocurrency-mining malware.

The malware relies on someone in the company opening a malicious email. Once opened, the malware drops the NSA-developed DoublePulsar malware to create a persistent backdoor on the infected computer, and uses the NSA’s EternalBlue exploit to spread laterally throughout the network. These are the same exploits that helped spread the WannaCry ransomware in 2017. Once the computers on the network are backdoored, the Beapy malware is pulled from the hacker’s command and control server to infect each computer with the mining software.

Not only does Beapy use the NSA’s exploits to spread, it also uses Mimikatz, an open-source credential stealer, to collect and use passwords from infected computers to navigate its way across the network.
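Taken together, the two paragraphs above describe a four-stage pipeline. The sketch below is purely illustrative analyst shorthand (none of these names come from Beapy’s actual code); it simply models the chain as documented:

```python
# Illustrative model of the Beapy infection chain described above.
# All names here are hypothetical analyst notation, not Beapy's code.

INFECTION_CHAIN = [
    ("initial_access", "victim opens a malicious email"),
    ("persistence", "DoublePulsar backdoor installed on the host"),
    ("lateral_movement", "EternalBlue (SMBv1) exploits peers; "
                         "Mimikatz-harvested credentials as a fallback"),
    ("payload", "Beapy coinminer fetched from the command-and-control server"),
]

def describe_chain(chain):
    """Render the chain as numbered analyst notes."""
    return [f"{i}. {stage}: {detail}"
            for i, (stage, detail) in enumerate(chain, 1)]

for line in describe_chain(INFECTION_CHAIN):
    print(line)
```

The credential fallback in stage three is what lets Beapy keep spreading even on networks that have been patched against EternalBlue.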

According to the researchers, more than 80 percent of Beapy’s infections are in China.

Hijacking computers to mine for cryptocurrency — known as cryptojacking — has been on the decline in recent months, partially following the shutdown of Coinhive, a popular mining tool. Hackers are finding the rewards fluctuate greatly depending on the value of the cryptocurrency. But cryptojacking remains a more stable source of revenue than the hit-and-miss results of ransomware.

In September, some 919,000 computers were vulnerable to EternalBlue attacks — many of which were exploited for mining cryptocurrency. Today, that figure has risen to more than a million.

Typically cryptojackers exploit vulnerabilities in websites which, when opened in a user’s browser, use the computer’s processing power to generate cryptocurrency. But file-based cryptojacking is far more efficient and faster, allowing the hackers to make more money.

In a single month, file-based mining can generate up to $750,000, Symantec researchers estimate, compared to just $30,000 from a browser-based mining operation.

Cryptojacking might seem like a victimless crime, since no data is stolen and no files are encrypted, but Symantec says the mining campaigns can slow down computers and cause device degradation.


Alexa, do you work for the NSA? ;-)

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.

The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.

Photo: Amazon has offices in this Bucharest building. Photographer: Irina Vilcu/Bloomberg

Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.

“We take the security and privacy of our customers’ personal information seriously,” an Amazon spokesman said in an emailed statement. “We only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.

“We have strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system. Employees do not have direct access to information that can identify the person or account as part of this workflow. All information is treated with high confidentiality and we use multi-factor authentication to restrict access, service encryption and audits of our control environment to protect it.”

Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions.

In Alexa’s privacy settings, Amazon gives users the option of disabling the use of their voice recordings for the development of new features. The company says people who opt out of that program might still have their recordings analyzed by hand over the regular course of the review process. A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa reviewers don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.

The Intercept reported earlier this year that employees of Amazon-owned Ring manually identify vehicles and people in videos captured by the company’s doorbell cameras, an effort to better train the software to do that work itself.

“You don’t necessarily think of another human listening to what you’re telling your smart speaker in the intimacy of your home,” said Florian Schaub, a professor at the University of Michigan who has researched privacy issues related to smart speakers. “I think we’ve been conditioned to the [assumption] that these machines are just doing magic machine learning. But the fact is there is still manual processing involved.”

“Whether that’s a privacy concern or not depends on how cautious Amazon and other companies are in what type of information they have manually annotated, and how they present that information to someone,” he added.

When the Echo debuted in 2014, Amazon’s cylindrical smart speaker quickly popularized the use of voice software in the home. Before long, Alphabet Inc. launched its own version, called Google Home, followed by Apple Inc.’s HomePod. Various companies also sell their own devices in China. Globally, consumers bought 78 million smart speakers last year, according to researcher Canalys. Millions more use voice software to interact with digital assistants on their smartphones.

Alexa software is designed to continuously record snatches of audio, listening for a wake word. That’s “Alexa” by default, but people can change it to “Echo” or “computer.” When the wake word is detected, the light ring at the top of the Echo turns blue, indicating the device is recording and beaming a command to Amazon servers.
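That always-listening design can be sketched as a simple loop: keep only a short rolling buffer of audio, scan it for a wake word, and start streaming only once the wake word fires. This is a hypothetical illustration of the concept, not Amazon’s implementation (real devices match acoustic patterns rather than transcribed words):

```python
from collections import deque

# Wake words per the article: "Alexa" by default, switchable to "Echo"
# or "computer".
WAKE_WORDS = {"alexa", "echo", "computer"}

def wake_word_loop(frames, buffer_len=3):
    """Scan a stream of audio frames (simplified here to words) for a
    wake word. Before detection, only a short rolling buffer is kept
    and older audio is discarded; after detection, frames are
    'streamed' (here, collected) instead.
    """
    rolling = deque(maxlen=buffer_len)  # snatches of audio, constantly overwritten
    streamed = []                       # what gets beamed to the server
    awake = False
    for frame in frames:
        if awake:
            streamed.append(frame)      # light ring is blue: recording
        else:
            rolling.append(frame)
            if frame.lower() in WAKE_WORDS:
                awake = True            # wake word detected
    return streamed

print(wake_word_loop(["music", "Alexa", "play", "jazz"]))  # ['play', 'jazz']
```

The key property the sketch shows is that everything before the wake word stays in the short, constantly overwritten buffer and never leaves the device, while everything after it does.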

Photo: An Echo smart speaker inside an Amazon 4-star store in Berkeley, California. Photographer: Cayce Clifford/Bloomberg

Most modern speech-recognition systems rely on neural networks patterned on the human brain. The software learns as it goes, by spotting patterns amid vast amounts of data. The algorithms powering the Echo and other smart speakers use models of probability to make educated guesses. If someone asks Alexa if there’s a Greek place nearby, the algorithms know the user is probably looking for a restaurant, not a church or community center.

But sometimes Alexa gets it wrong—especially when grappling with new slang, regional colloquialisms or languages other than English. In French, avec sa, “with his” or “with her,” can confuse the software into thinking someone is using the Alexa wake word. Hecho, Spanish for a fact or deed, is sometimes misinterpreted as Echo. And so on. That’s why Amazon recruited human helpers to fill in the gaps missed by the algorithms.

Apple’s Siri also has human helpers, who work to gauge whether the digital assistant’s interpretation of requests lines up with what the person said. The recordings they review lack personally identifiable information and are stored for six months tied to a random identifier, according to an Apple security white paper. After that, the data is stripped of its random identification information but may be stored for longer periods to improve Siri’s voice recognition.

At Google, some reviewers can access some audio snippets from its Assistant to help train and improve the product, but it’s not associated with any personally identifiable information and the audio is distorted, the company says.

A recent Amazon job posting, seeking a quality assurance manager for Alexa Data Services in Bucharest, describes the role humans play: “Every day she [Alexa] listens to thousands of people talking to her about different topics and different languages, and she needs our help to make sense of it all.” The want ad continues: “This is big data handling like you’ve never seen it. We’re creating, labeling, curating and analyzing vast quantities of speech on a daily basis.”

Amazon’s review process for speech data begins when Alexa pulls a random, small sampling of customer voice recordings and sends the audio files to the far-flung employees and contractors, according to a person familiar with the program’s design.

Photo: The Echo Spot. Photographer: Daniel Berman/Bloomberg

Some Alexa reviewers are tasked with transcribing users’ commands, comparing the recordings to Alexa’s automated transcript, say, or annotating the interaction between user and machine. What did the person ask? Did Alexa provide an effective response?

Others note everything the speaker picks up, including background conversations—even when children are speaking. Sometimes listeners hear users discussing private details such as names or bank details; in such cases, they’re supposed to tick a dialog box denoting “critical data.” They then move on to the next audio file.

According to Amazon’s website, no audio is stored unless Echo detects the wake word or is activated by pressing a button. But sometimes Alexa appears to begin recording without any prompt at all, and the audio files start with a blaring television or unintelligible noise. Whether or not the activation is mistaken, the reviewers are required to transcribe it. One of the people said the auditors each transcribe as many as 100 recordings a day when Alexa receives no wake command or is triggered by accident.

In homes around the world, Echo owners frequently speculate about who might be listening, according to two of the reviewers. “Do you work for the NSA?” they ask. “Alexa, is someone else listening to us?”

— With assistance by Gerrit De Vynck, Mark Gurman, and Irina Vilcu

Source: https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio

Let’s Get Rid of the “Nothing to Hide, Nothing to Fear” Mentality

With Zuckerberg testifying to the US Congress over Facebook’s data privacy and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.

Suffice to say that neither of the above compare to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.

In 2013, Edward Snowden blew the whistle on the systematic US spy program he helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the detail of PRISM, an NSA program that collects internet communications data from US companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, in instances where corporate control wasn’t achievable, the NSA enticed third-party countries to clandestinely tap internet communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of all web activity carried out by anyone using the internet.

But this is just in the US, right? Policies like this wouldn’t be implemented in Europe.

Wrong, unfortunately.

GCHQ, the UK’s intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at the Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. A lot of these techniques have been made possible by the 2016 Investigatory Powers Act, which Snowden describes as the most “extreme surveillance in the history of western democracy”.

But despite all this, the age-old refrain “if you’ve got nothing to hide, you’ve got nothing to fear” often rings out in debates over privacy.

Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.

In drafting legislation for the Investigatory Powers Act, May said that such extremes were necessary to ensure “no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar”.

When levelled against the fear of terrorism and death, it’s easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted; privacy should and always will be a social norm, despite what Mark Zuckerberg said in 2010. Although I’m sure he may have a different answer now.

So you just read emails and look at cat memes online; why would you care about privacy?

In the same way we’re able to close our living-room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It’s a well-rehearsed idea that nowadays we’re more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we’re being watched, we alter our behaviour in line with what’s expected.

As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Our thinking aligns with the status quo, and we lose the boundless ability of the internet to search and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.

This draws obvious comparisons with Bentham’s Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: a central guard tower is surrounded by cells, and in the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell’s 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.

In reality, there’s actually a lot more at stake here.

In the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.

Kafka’s The Trial, in which a bureaucracy uses citizens’ information to make decisions about them but denies them the ability to participate in how their information is used, therefore seems a more apt comparison. The issue here is that corporations and, even more so, states have been allowed to comb our data and make decisions that affect us without our consent.

Maybe, as a member of a western democracy, you don’t think this matters. But what if you’re a member of a minority group in an oppressive regime? What if you’re arrested because a computer algorithm can’t separate humour from intent to harm?

On the other hand, maybe you trust the intentions of your government, but how much faith do you have in its ability to keep your data private? The recent hack of the SEC shows that even government systems aren’t safe from attackers. When a business database is breached, maybe your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you’ve lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control the physical clouds, he who controls the modern cloud will rule the world.

Perhaps you think that even this doesn’t matter: if it allows the government to protect us from those that intend to cause harm, then it’s worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.

Intelligence is the strategic collection of pertinent facts; bulk data collection cannot, therefore, be intelligent. As Bill Binney puts it, “bulk data kills people”, because technicians are so overwhelmed that they can’t isolate what’s useful. Data collection as it stands can only focus on retribution rather than reduction.

Granted, GDPR is a big step forward for individual consent but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable deterministic data anyway. The nothing to hide, nothing to fear mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our Governments and how much power we want to have in that relationship.

To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.

http://behindthebrowser.space/index.php/2018/04/22/nothing-to-fear-nothing-to-hide/

Macron, May, Merkel – weakening encryption and making messengers (WhatsApp) vulnerable leads to data security catastrophes

By weakening strong encryption through weakening software like the Android or iOS operating systems (subroutines, inlays, essentials) in order to enable mass surveillance, you, the leaders of Europe, risk the data security of thousands of European companies. Is it worth it?

Even Microsoft is now warning that the government practice of “stockpiling” software vulnerabilities so that they can be used as weapons is a misguided tactic that weakens security for everybody.

“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” the company said Sunday.

Why are you doing this? Hopefully not out of a need to give information in order to receive information from the USA?

Photo: French President Emmanuel Macron (L) talks with German Chancellor Angela Merkel (R) as US President Donald J. Trump (C) walks by, during the line-up for the group photo at the NATO summit in Brussels, Belgium, 25 May 2017. EPA/ARMANDO BABANI

You saw, recognised and understood WannaCry, which affected thousands of companies throughout Europe?

The vulnerability in Windows that WannaCry takes advantage of was discovered by the NSA for its surveillance toolkit. But word got out when a hacker group known as the Shadow Brokers dumped a bunch of leaked NSA information onto the Internet in April. Microsoft, however, had already issued a software update the month before; those that downloaded and installed the patch were protected from WannaCry, but many others lagged behind and became victims.

Securing Driverless Cars From Hackers Is Hard, according to Charlie Miller, Ex-NSA’s Tailored Access Operations Hacker

Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them

Two years ago, Charlie Miller and Chris Valasek pulled off a demonstration that shook the auto industry, remotely hacking a Jeep Cherokee via its internet connection to paralyze it on a highway. Since then, the two security researchers have been quietly working for Uber, helping the startup secure its experimental self-driving cars against exactly the sort of attack they proved was possible on a traditional one. Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from hackers is a very difficult problem. It’s time to get serious about solving it.

Last month, Miller left Uber for a position at Chinese competitor Didi, a startup that’s just now beginning its own autonomous ridesharing project. In his first post-Uber interview, Miller talked to WIRED about what he learned in those 19 months at the company—namely that driverless taxis pose a security challenge that goes well beyond even those faced by the rest of the connected car industry.

Miller couldn’t talk about any of the specifics of his research at Uber; he says he moved to Didi in part because the company has allowed him to speak more openly about car hacking. But he warns that before self-driving taxis can become a reality, the vehicles’ architects will need to consider everything from the vast array of automation in driverless cars that can be remotely hijacked, to the possibility that passengers themselves could use their physical access to sabotage an unmanned vehicle.

“Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse.”

At A Computer’s Mercy

In a series of experiments starting in 2013, Miller and Valasek showed that a hacker with either wired or over-the-internet access to a vehicle—including a Toyota Prius, Ford Escape, and a Jeep Cherokee—could disable or slam on a victim’s brakes, turn the steering wheel, or, in some cases, cause unintended acceleration. But to trigger almost all those attacks, Miller and Valasek had to exploit vehicles’ existing automated features. They used the Prius’ collision avoidance system to apply its brakes, and the Jeep’s cruise control feature to accelerate it. To turn the Jeep’s steering wheel, they tricked it into thinking it was parking itself—even if it was moving at 80 miles per hour.

Their car-hacking hijinks, in other words, were limited to the few functions a vehicle’s computer controls. In a driverless car, the computer controls everything. “In an autonomous vehicle, the computer can apply the brakes and turn the steering wheel any amount, at any speed,” Miller says. “The computers are even more in charge.”

An alert driver could also override many of the attacks Miller and Valasek demonstrated on traditional cars: Tap the brakes and that cruise control acceleration immediately ceases. Even the steering wheel attacks could be easily overcome if the driver wrests control of the wheel. When the passenger isn’t in the driver’s seat—or there is no steering wheel or brake pedal—no such manual override exists. “No matter what we did in the past, the human had a chance to control the car. But if you’re sitting in the backseat, that’s a whole different story,” says Miller. “You’re totally at the mercy of the vehicle.”

Hackers Take Rides, Too

A driverless car that’s used as a taxi, Miller points out, poses even more potential problems. In that situation, every passenger has to be considered a potential threat. Security researchers have shown that merely plugging an internet-connected gadget into a car’s OBD2 port—a ubiquitous outlet under its dashboard—can offer a remote attacker an entry point into the vehicle’s most sensitive systems. (Researchers at the University of California at San Diego showed in 2015 that they could take control of a Corvette’s brakes via a common OBD2 dongle distributed by insurance companies—including one that partnered with Uber.)

“There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network.”

Permanently plugging that port is illegal under federal regulations, Miller says. He suggests ridesharing companies that use driverless cars could cover it with tamper-evident tape. But even then, they might only be able to narrow down which passenger could have sabotaged a vehicle to a certain day or week. A more comprehensive fix would mean securing the vehicle’s software so that not even a malicious hacker with full physical access to its network would be able to hack it—a challenge Miller says only a few highly locked-down products like an iPhone or Chromebook can pass.

“It’s definitely a hard problem,” he says.

Deep Fixes

Miller argues that solving autonomous vehicles’ security flaws will require some fundamental changes to their security architecture. Their internet-connected computers, for instance, will need “codesigning,” a measure that ensures they only run trusted code signed with a certain cryptographic key. Today only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better internal segmentation and authentication, so that critical components don’t blindly follow commands from the OBD2 port. They need intrusion detection systems that can alert the driver—or rider—when something anomalous happens on the cars’ internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface,” any services that might accept malicious data sent over the internet.
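Code signing, the first measure Miller names, can be illustrated with a minimal sketch: the vehicle’s computer refuses to run any update whose signature fails to verify. For brevity the example below uses an HMAC with a hypothetical shared secret; a real deployment would use asymmetric signatures (e.g. Ed25519) so that the key stored on the car can verify updates but never sign them.

```python
import hashlib
import hmac

# Hypothetical shared signing key, for illustration only. A real system
# would keep a private signing key at the manufacturer and ship only the
# public verification key in the vehicle.
SIGNING_KEY = b"factory-secret"

def sign_firmware(image: bytes) -> bytes:
    """Manufacturer side: produce a signature over a firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def install_firmware(image: bytes, signature: bytes) -> bool:
    """Vehicle side: agree to run code only if its signature verifies."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"brake-controller v2.1"
sig = sign_firmware(firmware)
print(install_firmware(firmware, sig))     # True: trusted update accepted
print(install_firmware(b"tampered", sig))  # False: unsigned code rejected
```

The point of the sketch is the refusal path: even an attacker with full access to the OBD2 port cannot get the computer to execute code that wasn’t signed by the manufacturer.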

Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”

That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.

Car hacking, fortunately, remains largely a concern for the future: No car has yet been digitally hijacked in a documented, malicious case. But that means now’s the time to work on the problem, Miller says, before cars become more automated and make the problem far more real. “We have some time to build up these security measures and get them right before something happens,” says Miller. “And that’s why I’m doing this.”

https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/

WhatsApp spies on your encrypted messages

Exclusive: Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech’ and warn it could be exploited by government agencies

Photo: Research shows that WhatsApp can read messages due to the way the company has implemented its end-to-end encryption protocol. Photograph: Ritchie B Tongo/EPA

A security backdoor that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.

Facebook claims that no one can intercept WhatsApp messages, not even the company and its staff, ensuring privacy for its billion-plus users. But new research shows that the company could in fact read messages due to the way WhatsApp has implemented its end-to-end encryption protocol.

Privacy campaigners said the vulnerability is a “huge threat to freedom of speech” and warned it can be used by government agencies to snoop on users who believe their messages to be secure. WhatsApp has made privacy and security a primary selling point, and has become a go-to communications tool of activists, dissidents and diplomats.

WhatsApp’s end-to-end encryption relies on unique security keys, generated using the acclaimed Signal protocol developed by Open Whisper Systems, that are traded and verified between users to guarantee communications are secure and cannot be intercepted by a middleman. However, WhatsApp can force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and can make the sender re-encrypt messages with the new keys and resend any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been resent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users’ messages.

The security backdoor was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys.”

The backdoor is not inherent to the Signal protocol. Open Whisper Systems’ messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp’s implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.
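
The difference in behaviour described above can be sketched in a few lines. This is purely an illustrative sketch, not real Signal or WhatsApp source code; the function names and the toy `encrypt` stand-in are invented:

```python
# Illustrative sketch (invented names, toy cipher): how the two clients
# behave when a recipient's key changes while a message is undelivered.

def encrypt(message: str, key: str) -> str:
    """Toy stand-in for real end-to-end encryption."""
    return f"enc[{key}]:{message}"

def signal_style_send(message, old_key, current_key, notify):
    # Signal: if the key changed, delivery fails and the sender is
    # warned BEFORE anything is resent; resending needs user action.
    if current_key != old_key:
        notify("Safety number changed - message NOT resent")
        return None
    return encrypt(message, current_key)

def whatsapp_style_send(message, old_key, current_key, notify, warnings_on=False):
    # WhatsApp (as described in the article): silently re-encrypt with
    # the new key and resend; the sender sees a notice only if the
    # optional security-notifications setting is on, and only afterwards.
    ciphertext = encrypt(message, current_key)
    if current_key != old_key and warnings_on:
        notify("Security code changed (shown after resend)")
    return ciphertext

alerts = []
# The server forces a new key ("k2") while "hello" is still undelivered:
assert signal_style_send("hello", "k1", "k2", alerts.append) is None
assert whatsapp_style_send("hello", "k1", "k2", alerts.append) == "enc[k2]:hello"
# By default only the Signal-style client warned the sender.
```

The design difference is the order of operations: Signal blocks and notifies before any retransmission, while the WhatsApp behaviour described here resends first and notifies (optionally) second.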

Boelter reported the backdoor vulnerability to Facebook in April 2016, but was told that Facebook was aware of the issue, that it was “expected behaviour” and wasn’t being actively worked on. The Guardian has verified the backdoor still exists.

The WhatsApp vulnerability calls into question the privacy of messages sent across the service used around the world, including by people living in oppressive regimes.

Steffen Tor Jensen, head of information security and digital counter-surveillance at the European-Bahraini Organisation for Human Rights, verified Boelter’s findings. He said: “WhatsApp can effectively continue flipping the security keys when devices are offline and re-sending the message, without letting users know of the change till after it has been made, providing an extremely insecure platform.”

Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
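
Boelter’s whole-conversation scenario can be sketched as follows. This is a hypothetical illustration of the attack as described, with all class and key names invented: a server that withholds delivery receipts keeps an entire conversation “undelivered”, so one forced key change triggers automatic re-encryption of the whole backlog under a key the server controls.

```python
# Hypothetical sketch of the retransmission attack described above.
# All names are invented for illustration; "enc:" is a toy cipher marker.

class MaliciousServer:
    def __init__(self):
        self.undelivered = []  # messages for which no double tick was sent

    def receive(self, ciphertext):
        # Withhold the delivery receipt: the sender never sees the
        # double tick, so the client still treats the message as resendable.
        self.undelivered.append(ciphertext)

    def force_key_change(self, client):
        # Announce a new key; an auto-resending client re-encrypts every
        # still-undelivered message under the server-controlled key.
        return [client.resend(m, new_key="server_key") for m in self.undelivered]

class AutoResendClient:
    def resend(self, old_ciphertext, new_key):
        plaintext = old_ciphertext.removeprefix("enc:")  # toy "decryption"
        return f"enc[{new_key}]:{plaintext}"

server = MaliciousServer()
for msg in ["hi", "meet at 6", "bring the documents"]:
    server.receive(f"enc:{msg}")

transcript = server.force_key_change(AutoResendClient())
# The server now holds the whole conversation under a key it knows.
```

As the sketch shows, nothing limits the retransmission to a single message: every message lacking a delivery receipt is re-encrypted in one pass.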


Professor Kirstie Ball, co-director and founder of the Centre for Research into Information, Surveillance and Privacy, called the existence of a backdoor within WhatsApp’s encryption “a gold mine for security agencies” and “a huge betrayal of user trust”. She added: “It is a huge threat to freedom of speech, for it to be able to look at what you’re saying if it wants to. Consumers will say, I’ve got nothing to hide, but you don’t know what information is looked for and what connections are being made.”

In the UK, the recently passed Investigatory Powers Act allows the government to intercept bulk data of users held by private companies, without suspicion of criminal activity, similar to the activity of the US National Security Agency uncovered by the Snowden revelations. The government also has the power to force companies to “maintain technical capabilities” that allow data collection through hacking and interception, and requires companies to remove “electronic protection” from data. Intentional or not, WhatsApp’s backdoor to the end-to-end encryption could be used in such a way as to facilitate government interception.

Jim Killock, executive director of Open Rights Group, said: “If companies claim to offer end-to-end encryption, they should come clean if it is found to be compromised – whether through deliberately installed backdoors or security flaws. In the UK, the Investigatory Powers Act means that technical capability notices could be used to compel companies to introduce flaws – which could leave people’s data vulnerable.”

A WhatsApp spokesperson told the Guardian: “Over 1 billion people use WhatsApp today because it is simple, fast, reliable and secure. At WhatsApp, we’ve always believed that people’s conversations should be secure and private. Last year, we gave all our users a better level of security by making every message, photo, video, file and call end-to-end encrypted by default. As we introduce features like end-to-end encryption, we focus on keeping the product simple and take into consideration how it’s used every day around the world.

“In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.”

Asked to comment specifically on whether Facebook/WhatsApp had accessed users’ messages, and whether it had done so at the request of government agencies or other third parties, the company directed the Guardian to its site that details aggregate data on government requests by country.

Concerns over the privacy of WhatsApp users have been repeatedly highlighted since Facebook acquired the company for $22bn in 2014. In August 2016, Facebook announced a change to the privacy policy governing WhatsApp that allowed the social network to merge data from WhatsApp users and Facebook, including phone numbers and app usage, for advertising and development purposes.

Facebook halted the use of the shared user data for advertising purposes in November after pressure in October from the pan-European data protection group, the Article 29 Working Party. The European Commission then filed charges against Facebook for providing “misleading” information in the run-up to the social network’s acquisition of the messaging service WhatsApp, following its data-sharing change.

https://www.theguardian.com/technology/2017/jan/13/whatsapp-backdoor-allows-snooping-on-encrypted-messages

Obama gives CIA, FBI, DEA and 13 other agencies warrantless access to raw NSA surveillance data on American citizens


Further Reading:

In its final days, the Obama administration has expanded the power of the National Security Agency to share globally intercepted personal communications with the government’s 16 other intelligence agencies before applying privacy protections.

The change means that far more officials will be searching through raw data.

Previously, the N.S.A. filtered information before sharing intercepted communications with another agency, like the C.I.A. or the intelligence branches of the F.B.I. and the Drug Enforcement Administration. The N.S.A.’s analysts passed on only information they deemed pertinent, screening out the identities of innocent people and irrelevant personal information.

Now, other intelligence agencies will be able to search directly through raw repositories of communications intercepted by the N.S.A. and then apply such rules for “minimizing” privacy intrusions.
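
The change is essentially one of ordering: minimization used to happen before data left the NSA, and now it happens only after other agencies have searched the raw repository. A minimal sketch of that order-of-operations difference, with invented field names and toy records:

```python
# Sketch of the policy change described above (all fields invented):
# old flow minimizes BEFORE sharing; new flow searches raw data first.

RAW_INTERCEPTS = [
    {"target": "foreign_official", "us_person": False, "text": "..."},
    {"target": "bystander",        "us_person": True,  "text": "..."},
]

def minimize(records):
    # Screen out innocent US persons / irrelevant personal information.
    return [r for r in records if not r["us_person"]]

# Old flow: a receiving agency only ever sees minimized data.
shared_old = minimize(RAW_INTERCEPTS)

# New flow: the agency queries the raw repository directly and applies
# minimization rules only to what it later disseminates.
def agency_search(raw, predicate):
    return [r for r in raw if predicate(r)]

hits = agency_search(RAW_INTERCEPTS, lambda r: True)  # analysts see raw data
shared_new = minimize(hits)                           # minimized afterwards

# Same final output, but far more officials saw the unminimized records.
assert len(hits) > len(shared_old)
```

The end result of both flows can be identical; the privacy concern is about who sees the records before minimization is applied.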

“Rather than dramatically expanding government access to so much personal data, we need much stronger rules to protect the privacy of Americans,” said Patrick Toomey, a lawyer with the American Civil Liberties Union. “Seventeen different government agencies shouldn’t be rooting through Americans’ emails with family members, friends and colleagues, all without ever obtaining a warrant.”

“This development is very troubling for Americans’ privacy,” said John Napier Tye, a former state department official turned surveillance whistleblower. “Most people don’t realize this, but even our purely domestic email and text messages are often stored on servers outside the United States. And the NSA has written extremely permissive rules for itself to collect data outside US borders.

“So in operations overseas, the NSA is scooping up a lot of purely domestic communications. And now, with these new rules, many different federal agencies can search and read the domestic communications of normal Americans, without any warrant or oversight from Congress or the courts.”

The new rules mean that NSA officials are no longer required to filter out information about innocent people whose identities have been scooped up before passing the intercepted communications to officials from other agencies, who will now be able to search through raw caches of data.

“This raises serious concerns that agencies that have responsibilities such as prosecuting domestic crimes, regulating our financial policy, and enforcing our immigration laws will now have access to a wealth of personal information that could be misused,” said Neema Singh Guliani of the American Civil Liberties Union. “Congress needs to take action to regulate and provide oversight over these activities.”

https://www.theguardian.com/world/2017/jan/12/obama-us-intelligence-greater-access-warrantless-data-foreign-targets

Privacy advocates’ concerns center on loopholes in the rules that allow agencies like the FBI and DEA to search the NSA’s collected data for purposes such as investigating an “agent of a foreign power.” Any evidence of illegal behavior that a searcher stumbles on can be used in a criminal prosecution. That means the rule change, according to Nate Cardozo, an attorney with the Electronic Frontier Foundation, introduces new possibilities for law enforcement agencies like the DEA and FBI to carry out what’s known as “parallel construction.” That maneuver involves secretly using the NSA’s intelligence to identify or track a criminal suspect, and then fabricating a plausible trail of evidence to present to a court as an after-the-fact explanation of the investigation’s origin. The technique was the subject of an ACLU lawsuit against the Office of the Director of National Intelligence in 2012, and resulted in the Justice Department admitting to repeatedly using the technique to hide the NSA’s involvement in criminal investigations.

“It used to be that if NSA itself saw the evidence of a crime, they could give a tip to the FBI, and the FBI would engage in parallel construction,” says Cardozo. “Now FBI will be able to get into the raw data themselves and do what they will with it.”

https://www.wired.com/2017/01/just-time-trump-nsa-loosens-privacy-rules/