Category archive: Privacy

Macron, May, Merkel – weakening encryption and making messengers (WhatsApp) vulnerable leads to data security catastrophes

By weakening strong encryption, that is, by weakening software such as the Android or iOS operating systems (their subroutines, libraries, and core components) in order to enable mass surveillance, you, the leaders of Europe, put the data security of thousands of European companies at risk. Is it worth it?

Even Microsoft is now warning that the government practice of “stockpiling” software vulnerabilities so that they can be used as weapons is a misguided tactic that weakens security for everybody.

“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” the company said Sunday.

Why are you doing this? Hopefully not out of an obligation to hand over information in order to receive intelligence from the USA in return?

[Photo: French President Emmanuel Macron (L) talks with German Chancellor Angela Merkel (R) as US President Donald J. Trump (C) walks by, during the line-up for the group photo at the NATO summit in Brussels, Belgium, 25 May 2017. EPA/ARMANDO BABANI]

Did you see, recognise, and understand WannaCry, which affected thousands of companies throughout Europe?

The vulnerability in Windows that WannaCry takes advantage of was discovered by the NSA for its surveillance toolkit. But word got out when a hacker group known as the Shadow Brokers dumped a bunch of leaked NSA information onto the Internet in April. Microsoft, however, had already issued a software update the month before; those who downloaded and installed the patch were protected from WannaCry, but many others lagged behind and became victims.


More Android phones are using encryption and lock screen security than ever before

Many people have a false sense of just how secure the private data on their devices is — that is, if they’re thinking about it at all. Your average smartphone user just wants to access the apps and people they care about, and not worry about security.

That’s why it was extremely encouraging to hear some of the security metrics announced at Google I/O 2017. For devices running Android Nougat, roughly 80% of users are running them fully encrypted. At the same time, about 70% of Nougat devices are using a secure lock screen of some form.

[Charts: Android encryption adoption; Android lock screen adoption]

That 80% encryption number isn’t amazingly surprising when you remember that Nougat has full-device encryption turned on by default, but that number also includes devices that were upgraded from Marshmallow, which didn’t have default encryption. Devices running on Marshmallow have a device encryption rate of just 25%, though, so this is a massive improvement. And the best part about Google’s insistence on default encryption is that eventually older devices will be replaced by those running Nougat or later out of the box, meaning this encryption rate could get very close to 100%.

The default settings are immensely important.

Full-device encryption is particularly effective when paired with a secure lock screen, and the 70% adoption shown in Google’s metrics definitely needs some work. It’s a small increase from the roughly 60% secure lock screen rate of Marshmallow phones but a decent jump from the sub-50% rate of devices running Lollipop. The most interesting aspect of these numbers to my eyes is that having a fingerprint sensor on the device doesn’t signal a very large increase in adoption — perhaps just a five percentage point jump. On the one hand it’s great to see people using secure lock screens even when they don’t have something as convenient as a fingerprint sensor, but then again I’d expect the simplicity of that sensor to help adoption more than these numbers show.

The trend is heading in the right direction in both of these metrics, and that’s a great sign despite the fact that secure lock screens show a slower growth rate. The closer we get both of these numbers to 100%, the better.

http://www.androidcentral.com/more-android-phones-are-using-encryption-and-lock-screen-security-eve

Don’t wanna Cry? Use Linux 

Don’t wanna Cry? Use Linux. Life is too short to reboot. 

So far, over 213,000 computers across 99 countries around the world have been infected, and the infection count is still rising even hours after the kill switch was triggered by the 22-year-old British security researcher behind the Twitter handle ‘MalwareTech.’
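For context, the kill switch itself is trivially simple: the worm tries to contact a hard-coded, unregistered domain and stops spreading if the connection succeeds. A minimal Python sketch of the idea; the domain below is a made-up placeholder, not the real one:

```python
import socket

# Stand-in for WannaCry's hard-coded kill-switch domain; the real one is a
# long pseudo-random string that MalwareTech registered, which made this
# check start succeeding worldwide and halted the worm.
KILL_SWITCH_DOMAIN = "pseudorandom-killswitch-string.example"

def kill_switch_triggered(timeout: float = 5.0) -> bool:
    """Return True if the kill-switch domain answers on port 80."""
    try:
        socket.create_connection((KILL_SWITCH_DOMAIN, 80), timeout=timeout).close()
        return True   # domain is live: the worm exits instead of spreading
    except OSError:
        return False  # domain unreachable: the worm keeps spreading

if __name__ == "__main__":
    print("kill switch triggered:", kill_switch_triggered())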

For those unaware, WannaCry is an insanely fast-spreading piece of ransomware that leverages a Windows SMB exploit to remotely target computers running unpatched or unsupported versions of Windows.

So far, the criminals behind the WannaCry ransomware have received nearly 100 payments from victims, totalling 15 Bitcoins, equal to roughly USD 26,090.


Once it has infected a machine, WannaCry also scans for other vulnerable computers connected to the same network, as well as random hosts on the wider Internet, to spread quickly.
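Schematically, that target selection could look like the Python sketch below. It only tests whether TCP port 445 (SMB) accepts connections, which is the precondition for the EternalBlue exploit; the subnet and host counts are illustrative assumptions, and no exploitation happens here.

```python
import random
import socket

def random_ipv4() -> str:
    # A real worm would skip reserved ranges; this sketch does not bother.
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def smb_reachable(host: str, timeout: float = 0.5) -> bool:
    """Return True if TCP/445 (SMB) accepts a connection."""
    try:
        socket.create_connection((host, 445), timeout=timeout).close()
        return True
    except OSError:
        return False

# WannaCry-style target selection: the local subnet first, then random
# hosts on the wider Internet.
local_hosts = [f"192.168.1.{i}" for i in range(1, 255)]
random_hosts = [random_ipv4() for _ in range(10)]
reachable = [h for h in local_hosts + random_hosts if smb_reachable(h)]
print(f"{len(reachable)} hosts expose SMB")
```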

The SMB exploit currently being used by WannaCry has been identified as EternalBlue, part of a collection of hacking tools allegedly created by the NSA and subsequently dumped by a hacking group calling itself “The Shadow Brokers” over a month ago.

“If NSA had privately disclosed the flaw used to attack hospitals when they *found* it, not when they lost it, this may not have happened,” NSA whistleblower Edward Snowden says.

http://thehackernews.com/2017/05/wannacry-ransomware-cyber-attack.html

Securing Driverless Cars From Hackers Is Hard, according to Charlie Miller, ex-hacker on the NSA’s Tailored Access Operations team

Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them

Two years ago, Charlie Miller and Chris Valasek pulled off a demonstration that shook the auto industry, remotely hacking a Jeep Cherokee via its internet connection to paralyze it on a highway. Since then, the two security researchers have been quietly working for Uber, helping the startup secure its experimental self-driving cars against exactly the sort of attack they proved was possible on a traditional one. Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from hackers is a very difficult problem. It’s time to get serious about solving it.

Last month, Miller left Uber for a position at Chinese competitor Didi, a startup that’s just now beginning its own autonomous ridesharing project. In his first post-Uber interview, Miller talked to WIRED about what he learned in those 19 months at the company—namely that driverless taxis pose a security challenge that goes well beyond even those faced by the rest of the connected car industry.

Miller couldn’t talk about any of the specifics of his research at Uber; he says he moved to Didi in part because the company has allowed him to speak more openly about car hacking. But he warns that before self-driving taxis can become a reality, the vehicles’ architects will need to consider everything from the vast array of automation in driverless cars that can be remotely hijacked, to the possibility that passengers themselves could use their physical access to sabotage an unmanned vehicle.

“Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse.”

At A Computer’s Mercy

In a series of experiments starting in 2013, Miller and Valasek showed that a hacker with either wired or over-the-internet access to a vehicle—including a Toyota Prius, Ford Escape, and a Jeep Cherokee—could disable or slam on a victim’s brakes, turn the steering wheel, or, in some cases, cause unintended acceleration. But to trigger almost all those attacks, Miller and Valasek had to exploit vehicles’ existing automated features. They used the Prius’ collision avoidance system to apply its brakes, and the Jeep’s cruise control feature to accelerate it. To turn the Jeep’s steering wheel, they tricked it into thinking it was parking itself—even if it was moving at 80 miles per hour.

Their car-hacking hijinks, in other words, were limited to the few functions a vehicle’s computer controls. In a driverless car, the computer controls everything. “In an autonomous vehicle, the computer can apply the brakes and turn the steering wheel any amount, at any speed,” Miller says. “The computers are even more in charge.”

An alert driver could also override many of the attacks Miller and Valasek demonstrated on traditional cars: Tap the brakes and that cruise control acceleration immediately ceases. Even the steering wheel attacks could be easily overcome if the driver wrests control of the wheel. When the passenger isn’t in the driver’s seat—or there is no steering wheel or brake pedal—no such manual override exists. “No matter what we did in the past, the human had a chance to control the car. But if you’re sitting in the backseat, that’s a whole different story,” says Miller. “You’re totally at the mercy of the vehicle.”

Hackers Take Rides, Too

A driverless car that’s used as a taxi, Miller points out, poses even more potential problems. In that situation, every passenger has to be considered a potential threat. Security researchers have shown that merely plugging an internet-connected gadget into a car’s OBD2 port—a ubiquitous outlet under its dashboard—can offer a remote attacker an entry point into the vehicle’s most sensitive systems. (Researchers at the University of California at San Diego showed in 2015 that they could take control of a Corvette’s brakes via a common OBD2 dongle distributed by insurance companies—including one that partnered with Uber.)

“There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network.”

Permanently plugging that port is illegal under federal regulations, Miller says. He suggests ridesharing companies that use driverless cars could cover it with tamper-evident tape. But even then, they might only be able to narrow down which passenger could have sabotaged a vehicle to a certain day or week. A more comprehensive fix would mean securing the vehicle’s software so that not even a malicious hacker with full physical access to its network would be able to hack it—a challenge Miller says only a few highly locked-down products like an iPhone or Chromebook can pass.

“It’s definitely a hard problem,” he says.

Deep Fixes

Miller argues that solving autonomous vehicles’ security flaws will require some fundamental changes to their security architecture. Their internet-connected computers, for instance, will need “codesigning,” a measure that ensures they only run trusted code signed with a certain cryptographic key. Today only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better internal segmentation and authentication, so that critical components don’t blindly follow commands from the OBD2 port. They need intrusion detection systems that can alert the driver—or rider—when something anomalous happens on the cars’ internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface,” any services that might accept malicious data sent over the internet.
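As a rough illustration of the code-signing measure: the vehicle ships with only a public key and refuses any update whose signature does not verify. The sketch below uses Ed25519 from the Python cryptography package; the key handling and update format are illustrative assumptions, not any manufacturer’s actual scheme.

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Manufacturer side: sign the firmware image with a private key that never
# leaves the build infrastructure.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"...firmware image bytes..."
signature = vendor_key.sign(firmware)

# Vehicle side: only the public key is baked into the car's computer.
trusted_key = vendor_key.public_key()

def install_update(image: bytes, sig: bytes) -> bool:
    """Flash the update only if its signature verifies against the baked-in key."""
    try:
        trusted_key.verify(sig, image)  # raises InvalidSignature if tampered
        return True
    except InvalidSignature:
        return False

assert install_update(firmware, signature)                # genuine: accepted
assert not install_update(firmware + b"\x00", signature)  # tampered: rejected
```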

Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”

That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.

Car hacking, fortunately, remains largely a concern for the future: No car has yet been digitally hijacked in a documented, malicious case. But that means now’s the time to work on the problem, Miller says, before cars become more automated and make the problem far more real. “We have some time to build up these security measures and get them right before something happens,” says Miller. “And that’s why I’m doing this.”

https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/

Delete Signal’s texts, or the app itself, and virtually no trace of the conversation remains.

Delete Signal’s texts, or the app itself, and virtually no trace of the conversation remains. “The messages are pretty much gone.”

Suing to See the Feds’ Encrypted Messages? Good Luck

The recent rise of end-to-end encrypted messaging apps has given billions of people access to strong surveillance protections. But as one federal watchdog group may soon discover, it also creates a transparency conundrum: Delete the conversation from those two ends, and there may be no record left.

The conservative group Judicial Watch is suing the Environmental Protection Agency under the Freedom of Information Act, seeking to compel the EPA to hand over any employee communications sent via Signal, the encrypted messaging and calling app. In its public statement about the lawsuit, Judicial Watch points to reports that EPA staffers have used Signal to communicate secretly, in the face of an adversarial Trump administration.

But encryption and forensics experts say Judicial Watch may have picked a tough fight. Delete Signal’s texts, or the app itself, and virtually no trace of the conversation remains. “The messages are pretty much gone,” says Johns Hopkins cryptographer Matthew Green, who has closely followed the development of secure messaging tools. “You can’t prove something was there when there’s nothing there.”

End-to-Dead-End

Signal, like other end-to-end encryption apps, protects messages such that only the people participating in a conversation can read them. No outside observer—not even the Signal server that the messages route through—can sneak a look. Delete the messages from the devices of the two Signal communicants, and no other unencrypted copy exists.
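That property is easy to demonstrate with the generic NaCl “box” construction, sketched below in Python with the PyNaCl library. Signal’s actual Double Ratchet protocol is far more sophisticated; this is only the underlying idea.

```python
# Requires: pip install pynacl
from nacl.public import Box, PrivateKey

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key with her own private key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Only Bob (holding his private key) can decrypt.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"

# A relay server that sees only `ciphertext` and neither private key learns
# nothing; once both endpoints delete the plaintext, no readable copy remains.
```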

In fact, Signal’s own server doesn’t keep a record of even the encrypted versions of those communications. Last October, Signal’s developers at the non-profit Open Whisper Systems revealed that a grand jury subpoena had yielded practically no useful data. “The only information we can produce in response to a request like this is the date and time a user registered with Signal and the last date of a user’s connectivity to the Signal service,” Open Whisper Systems wrote at the time. (That’s the last time they opened the app, not sent or received a message.)

Even seizing and examining the phones of EPA employees likely won’t help if users have deleted their messages or the full app, Green says. They could even do so on autopilot. Six months ago, Signal added a Snapchat-like feature to allow automated deletion of a conversation from both users’ phones after a certain amount of time. Forensic analyst Jonathan Zdziarski, who now works as an Apple security engineer, wrote in a blog post last year that after Signal messages are deleted, the app “leaves virtually nothing, so there’s nothing to worry about. No messy cleanup.” (Open Whisper Systems declined to comment on the Judicial Watch FOIA request, or how exactly it deletes messages.)

Still, despite its best sterilization efforts, even Signal might leave some forensic trace of deleted messages on phones, says Green. And other less-secure ephemeral messaging apps like Confide, which has also become popular among government staffers, likely leave more fingerprints behind. But Green argues that recovering deleted messages from even sloppier apps would take deeper digging than FOIA requests typically compel—so long as users are careful to delete messages on both sides of the conversation and any cloud backups. “We’re talking about expensive, detailed forensic analysis,” says Green. “It’s a lot more work than you’d expect from someone carrying out FOIA requests.”

For the Records

Deleting records of government business from government-issued devices is—let’s be clear—illegal. That smartphone scrubbing, says Georgetown Law professor David Vladeck, would blatantly violate the Federal Records Act. “It’s no different from taking records home and burning them,” says Vladeck. “They’re not your records, they’re the federal government’s, and you’re not supposed to do that.”

Judicial Watch, for its part, acknowledges that it may be tough to dig up deleted Signal communications. But another element of its FOIA request asks for any EPA information about whether it has approved Signal for use by agency staffers. “They can’t use these apps to thwart the Federal Records Act just because they don’t like Donald Trump,” says Judicial Watch president Tom Fitton. “This serves also as an educational moment for any government employees, that using the app to conduct government business to ensure the deletion of records is against the law, and against record-keeping policies in almost every agency.”

Fitton hopes the lawsuit will at least compel the EPA to prevent employees from installing Signal or similar apps on government-issued phones. “The agency is obligated to ensure their employees are following the rules so that records subject to FOIA are preserved,” he says. “If they’re not doing that, they could be answerable to the courts.”

Georgetown’s Vladeck says that even evidence employees have used Signal at all should be troubling, and might warrant a deeper investigation. “I would be very concerned if employees were using an app designed to leave no trace. That’s smoke, if not a fire, and it’s deeply problematic,” he says.

But Johns Hopkins’ Green counters that FOIA has never been an all-seeing eye into government agencies. And he points out that sending a Signal message to an EPA colleague isn’t so different from simply walking into their office and closing the door. “These ephemeral communications apps give us a way to have those face-to-face conversations electronically and in a secure way,” says Green. “It’s a way to communicate without being on the record. And people need that.”

https://www.wired.com/2017/04/suing-see-feds-encrypted-messages-good-luck/

The CIA Leak Exposes Tech’s Vulnerable Future

Source: https://www.wired.com/2017/03/cia-leak-exposes-techs-vulnerable-future/

How WhatsApp spies on Your Messages – WhatsApp Retransmission Vulnerability

According to Tobias Boelter (tobias@boelter.it)

Download the Slides from Tobias here: Whatsapp Slides from Tobias Boelter

Setting: Three phones. Phone A is Alice’s phone. Phone B is Bob’s phone. Phone C is the attacker’s phone.

Alice starts communicating with Bob and, being a good human, of course meets with Bob in person so that they can verify each other’s identities, i.e. that the key exchange was not compromised.

Remember, Alice encrypts her messages with the public key she has received from Bob. But this key is sent through the WhatsApp servers, so she cannot know for sure that it is actually Bob’s key. That’s why they use a secure channel (the physical one) to verify it.
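In practice, that out-of-band verification usually means comparing key fingerprints. A minimal Python sketch of the idea (Signal’s “safety numbers” are a more elaborate version of this; the key bytes below are placeholders):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into 4-character blocks, like the verification screens in apps.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Placeholder bytes standing in for the key Alice's client received.
key_alice_received = b"...Bob's public key as delivered by the server..."
print(fingerprint(key_alice_received))
# Bob reads the fingerprint of his *actual* key aloud; if the strings match,
# the server did not substitute a key in transit.
```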

Now, Alice sends a message to Bob. And then another message. But this time this message does not get delivered. For example because Bob is offline, or the WhatsApp server just does not forward the message.


Now the attacker comes in. He registers Bob’s phone number with the WhatsApp server (by attacking the vulnerable GSM network, by putting WhatsApp under pressure, or by being WhatsApp itself).

Alice’s WhatsApp client will now automatically, without Alice’s interaction, re-encrypt the second message with the attacker’s key and send it to the attacker, who receives it.


Only after the fact is a warning displayed to Alice (and only if she has explicitly chosen to see security warnings in her settings).
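The difference between this behavior and Signal’s (discussed below) boils down to one decision: resend automatically, or block until the new key is verified. A small Python model of that logic follows; all names are illustrative stand-ins, not real WhatsApp or Signal client code.

```python
class KeyChangeError(Exception):
    """Raised by the Signal-style client when the recipient's key changes."""

def encrypt(message: str, key: str) -> str:
    return f"enc[{key}]({message})"  # placeholder for real encryption

def on_recipient_key_change(pending, new_key, block_on_key_change):
    """Called when the server announces a new key for the recipient while
    `pending` messages are still queued for delivery."""
    if block_on_key_change:
        # Signal-style: refuse to resend until the user verifies the new key.
        raise KeyChangeError("key changed; verify the recipient first")
    sent = [encrypt(m, new_key) for m in pending]  # WhatsApp-style: silent resend
    print("warning: security code changed")        # shown only *after* delivery
    return sent

# A WhatsApp-style client hands the attacker the re-encrypted backlog:
print(on_recipient_key_change(["Offline message"], "attacker_key", False))
```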


Conclusion

Proprietary closed-source crypto software is the wrong path. After all, this potentially malicious code handles all our decrypted messages. Next time the FBI will not ask Apple but WhatsApp to ship a version of its code that sends all decrypted messages directly to the FBI.

Signal is better

Signal is doing it right. Alice’s second message (“Offline message”) was never sent to the attacker.


Signal is also open source and experimenting with reproducible builds. Have a look at it.

Update (May 31, 2016)

Facebook responded to my white-hat report:

“[…] We were previously aware of the issue and might change it in the future, but for now it’s not something we’re actively working on changing. […]”

https://tobi.rocks/2016/04/whats-app-retransmission-vulnerability/

Download the Presentation here: Whatsapp Slides from Tobias Boelter