Archiv der Kategorie: Privacy

Android’s trust problem

Illustration by William Joel / The Verge

Published today, a two-year study of Android security updates has revealed a distressing gap between the software patches Android companies claim to have on their devices and the ones they actually have. Your phone’s manufacturer may be lying to you about the security of your Android device. In fact, it appears that almost all of them do.

Coming at the end of a week dominated by Mark Zuckerberg’s congressional hearings and an ongoing Facebook privacy probe, this news might seem of lesser importance, but it goes to the same issue that has drawn lawmakers’ scrutiny to Facebook: the matter of trust. Facebook is the least-trusted big US tech company, and Android might just be the operating system equivalent of it: used by 2 billion people around the world, tolerated more than loved, and susceptible to major lapses in user privacy and security.

The gap between Android and its nemesis, Apple’s iOS, has always boiled down to trust. Unlike Google, Apple doesn’t make its money by tracking the behavior of its users, and unlike the vast and varied Android ecosystem, there are only ever a couple of iPhone models, each of which is updated with regularity and over a long period of time. Owning an iPhone, you can be confident that you’re among Apple’s priority users (even if Apple faces its own cohort of critics accusing it of planned obsolescence), whereas with an Android device, as evidenced today, you can’t even be sure that the security bulletins and updates you’re getting are truthful.

Android is perceived as untrustworthy in large part because it is. Besides the matter of security misrepresentations, here are some of the other major issues and villains plaguing the platform:

Version updates are slow, if they arrive at all. I’ve been covering Android since its earliest Cupcake days, and in the near-decade that’s passed, there’s never been a moment of contentment about the speed of OS updates. Things seemed to be getting even worse late last year when the November batch of new devices came loaded with 2016’s Android Nougat. Android Oreo is now nearly eight months old — meaning we’re closer to the launch of the next version of Android than the present one — and LG is still preparing to roll out that software for its 2017 flagship LG G6.

Promises about Android device updates are as ephemeral as Snapchat messages. Before it became the world’s biggest smartphone vendor, Samsung was notorious for reneging on Android upgrade promises. Sony’s Xperia Z3 infamously fell foul of an incompatibility between its Snapdragon processor and Google’s Android Nougat requirements, leaving it prematurely stuck without major OS updates. Whenever you have so many loud voices involved — carriers and chip suppliers along with Google and device manufacturers — the outcome of their collaboration is prone to becoming exactly as haphazard and unpredictable as Android software upgrades have become.

Google is obviously aware of the situation, and it’s pushing its Android One initiative to give people reassurances when buying an Android phone. Android One guarantees OS updates for at least two years and security updates for at least three years. But, as with most things Android, Android One is only available on a few devices, most of which are of the budget variety. You won’t find the big global names of Samsung, Huawei, and LG supporting it.

Some Android OEMs snoop on you. This is an ecosystem problem rather than something rooted in the operating system itself, but it still tarnishes Android’s public reputation. Android phone manufacturers habitually load their devices with bloatware (stuff you really don’t want or need on your phone), and some have even taken to preloading spyware. Blu’s devices were yanked from Amazon for exactly that: selling phones that were vulnerable to remote takeover and could be exploited to clandestinely record the user’s text messages and call records. OnePlus also got in trouble for an overly inquisitive user analytics program, which beamed personally identifiable information back to the company’s HQ without explicit user consent.

Huawei is perhaps the most famous example of a potentially conflicted Android phone manufacturer, with US spy agencies openly urging Americans to avoid Huawei phones for their own security. No hard evidence of Huawei doing anything improper has yet been presented; however, the US is not the only country to express concern about the company’s relationship with the Chinese government — and mistrust is based as much on smoke as it is on actual fire.

Android remains vulnerable, thanks in part to Google’s permissiveness. It’s noteworthy that, when Facebook’s data breach became public and people started looking into what data Facebook had on them, only their Android calls and messages had been collected. Why not the iPhone? Because Apple’s walled-garden philosophy makes it much harder, practically impossible, for a user to inadvertently give consent to privacy-eroding apps like Facebook’s Messenger to dig into their devices. Your data is simply better protected on iOS, and even though Android has taken significant steps forward in making app permissions more granular and specific, it’s still comparatively easy to mislead users about what data an app is obtaining and for what purposes.

Android hardware development is chaotic and unreliable. For many, the blistering, sometimes chaotic pace of change in Android devices is part of the ecosystem’s charm. It’s entertaining to watch companies try all sorts of zany and unlikely designs, with only the best of them surviving more than a few months. But the downside of all this speed is lack of attention being paid to small details and long-term sustainability.

LG made a huge promotional push two years ago around its modular G5 flagship, which was meant to usher in a new accessory ecosystem and elevate the flexibility of LG Android devices to new heights. Within six months, that modular project was abandoned, leaving anyone who bought modular LG accessories — on the expectation of multigenerational support — high and dry. And speaking of dryness, Sony recently got itself in trouble for overpromising by calling its Xperia phones “waterproof.”

Samsung’s Galaxy Note 7 is the best and starkest example of the dire consequences that can result from a hurried and excessively ambitious hardware development cycle. The Note 7 had a fatal battery flaw that led many people’s shiny new Samsung smartphones to spontaneously catch fire. Compare that to the iPhone’s pace of usually incremental changes, implemented at predictable intervals and with excruciating fastidiousness.


Besides pledging to deliver OS updates that never come, claiming to have delivered security updates that never arrived, and taking liberties with your personal data, Android OEMs also have a tendency to exaggerate what their phones can actually do. They collaborate on little, so despite pouring great effort into developing their own Android software experiences, they also feed the old, steadfast complaint of a fragmented ecosystem.

The problem of trust with Android, much like the problem of trust in Facebook, is grounded in reality. It doesn’t matter that not all Android device makers engage in shady privacy invasion or overreaching marketing claims. The perception, like the Android brand, is collective.

https://www.theverge.com/2018/4/13/17233122/android-software-patch-trust-problem


World celebrates, cyber-snoops cry as TLS 1.3 internet crypto approved

 

Image Credits: kinsta.com

Forward-secrecy protocol comes with the 28th draft

A much-needed update to internet security has finally passed at the Internet Engineering Task Force (IETF), after four years and 28 drafts.

Internet engineers meeting in London, England, approved the updated TLS 1.3 protocol despite a wave of last-minute concerns that it could cause networking nightmares.

TLS 1.3 won unanimous approval (well, one “no objection” amid the yeses), paving the way for its widespread implementation and use in software and products from Oracle’s Java to Google’s Chrome browser.

The new protocol aims to comprehensively thwart any attempts by the NSA and other eavesdroppers to decrypt intercepted HTTPS connections and other encrypted network packets. TLS 1.3 should also speed up secure communications thanks to its streamlined approach.

The critical nature of the protocol, however, has meant that progress has been slow and, on occasion, controversial. This time last year, Google paused its plan to support the new protocol in Chrome when an IT schools administrator in Maryland reported that a third of the 50,000 Chromebooks he managed bricked themselves after being updated to use the tech.

Most recently, banks and businesses complained that, thanks to the way the new protocol does security, they will be cut off from being able to inspect and analyze TLS 1.3 encrypted traffic flowing through their networks, and so potentially be at greater risk from attack.

Unfortunately, that self-same ability to decrypt secure traffic on your own network can also be potentially used by third parties to grab and decrypt communications.

An effort to effectively insert a backdoor into the protocol was met with disdain and some anger by internet engineers, many of whom pointed out that it will still be possible to introduce middleware to monitor and analyze internal network traffic.

Nope

The backdoor proposal did not move forward, meaning the internet as a whole will become more secure and faster, while banks and similar outfits will have to do a little extra work to accommodate and inspect TLS 1.3 connections as required.

At the heart of the change – and the complaints – are two key elements: forward secrecy, and ephemeral encryption keys.

TLS – standing for Transport Layer Security – basically works by creating a secure connection between a client and a server – your laptop, for example, and a company’s website. All this is done before any real information is shared – like credit card details or personal information.

Under TLS 1.2 this is a fairly lengthy process that can take as much as half-a-second:

  • The client says hi to the server and offers a range of strong encryption systems it can work with
  • The server says hi back, explains which encryption system it will use and sends an encryption key
  • The client takes that key and uses it to encrypt and send back a random series of letters
  • Together they use this exchange to create two new keys: a master key and a session key – the master key being stronger; the session key weaker.
  • The client then says which encryption system it plans to use for the weaker, session key – which allows data to be sent much faster because it doesn’t have to be processed as much
  • The server acknowledges that system will be used, and then the two start sharing the actual information that the whole exchange is about

TLS 1.3 speeds that whole process up by bundling several steps together:

  • The client says hi, here’s the systems I plan to use
  • The server gets back saying hi, ok let’s use them, here’s my key, we should be good to go
  • The client responds saying, yep that all looks good, here are the session keys
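The shortened exchange above can be pinned down in code. As a minimal sketch using Python’s standard `ssl` module (assuming Python 3.7+ linked against OpenSSL 1.1.1 or newer), a client can refuse anything older than TLS 1.3, so the one-round-trip handshake is the only one on offer:

```python
import ssl

# A minimal sketch: build a client-side context that only ever offers
# the TLS 1.3 handshake described above.  Assumes Python 3.7+ linked
# against OpenSSL 1.1.1 or newer.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# A server that cannot speak TLS 1.3 now fails the handshake outright
# instead of silently falling back to an older protocol version.
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

The design choice matters: with the minimum version pinned, a downgrade to an older, weaker protocol simply cannot be negotiated.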

As well as being faster, TLS 1.3 is much more secure because it ditches many of the older encryption algorithms that TLS 1.2 supports, in which people have managed to find holes over the years. Effectively, the older crypto-systems potentially allowed miscreants to figure out what previous keys had been used (called “non-forward secrecy”) and so decrypt previous conversations.

A little less conversation

For example, snoopers could, under TLS 1.2, force the exchange to use older and weaker encryption algorithms that they knew how to crack.

People using TLS 1.3 will only be able to use more recent systems that are much harder to crack – at least for now. Any effort to force the conversation to use a weaker 1.2 system will be detected and flagged as a problem.

Another very important advantage of TLS 1.3 – but also one that some security experts are concerned about – is called “0-RTT Resumption”, which effectively allows the client and server to remember if they have spoken before, and so forego all the checks, using previous keys to start talking immediately.

That will make connections much faster, but the concern, of course, is that someone malicious could get hold of the “0-RTT Resumption” information and pose as one of the parties. Internet engineers are, however, less concerned about this security risk – which would require getting access to a machine – than about the TLS 1.2 weaknesses that allowed people to hijack and listen in on a conversation.
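The trade-off can be made concrete with a deliberately simplified toy sketch (this is illustrative only, not the real TLS 1.3 pre-shared-key mechanism): a client that remembers a previous session can derive an encryption key for its very first message, while anyone who steals that cached secret could do the same.

```python
import hashlib
import hmac

# A toy sketch of the idea behind 0-RTT resumption; NOT the real
# TLS 1.3 pre-shared-key mechanism.  After a full handshake the
# client stores a resumption secret; on the next visit it can derive
# a key for "early data" in its very first message, skipping the
# usual round trips.
session_cache = {}

def full_handshake(host):
    # Pretend full handshake: agree on and cache a resumption secret.
    secret = hashlib.sha256(b"resumption-secret-for-" + host.encode()).digest()
    session_cache[host] = secret
    return secret

def zero_rtt_key(host):
    # Early-data key, available only if we have spoken before.
    secret = session_cache.get(host)
    if secret is None:
        return None  # first contact: a full handshake is required
    return hmac.new(secret, b"early data", hashlib.sha256).digest()

full_handshake("example.com")
print(zero_rtt_key("example.com") is not None)  # True: 0-RTT possible
print(zero_rtt_key("unknown.org"))              # None: full handshake needed
```

The sketch also shows why the cached secret is sensitive: whoever holds `session_cache` can mint valid early-data keys.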

In short, it’s a win-win but will require people to put in some effort to make it all work properly.

The big losers will be criminals and security services who will be shut out of secure communications – at least until they figure out a way to crack this new protocol. At which point the IETF will start on TLS 1.4. ®

Source: theregister.co.uk

 

 

An Overview of TLS 1.3 – Faster and More Secure

Updated on March 25, 2018

It has been over eight years since the last encryption protocol update, but the new TLS 1.3 has now been finalized as of March 21st, 2018. The exciting part for the WordPress community and customers here at Kinsta is that TLS 1.3 includes a lot of security and performance improvements. With the HTTP/2 protocol update in late 2015, and now TLS 1.3 in 2018, encrypted connections are now more secure and faster than ever. Read more below about the changes coming with TLS 1.3 and how it can benefit you as a WordPress site owner.

“TLS 1.3: Faster, Safer, Better, Everything.” 👍 — Filippo Valsorda

What is TLS?

TLS stands for Transport Layer Security and is the successor to SSL (Secure Sockets Layer). However, both terms are commonly thrown around online, and you might see them both referred to as simply SSL. TLS provides secure communication between web browsers and servers. The connection itself is secure because symmetric cryptography is used to encrypt the data transmitted. The keys are uniquely generated for each connection and are based on a shared secret negotiated at the beginning of the session, also known as a TLS handshake. Many IP-based protocols, such as HTTPS, SMTP, POP3, and FTP, support TLS to encrypt data.

Web browsers rely on an SSL certificate to verify that a site’s identity has been digitally signed by a trusted certificate authority. Technically these are also known as TLS certificates, but most SSL providers stick with the term “SSL certificates” as this is generally better known. SSL/TLS certificates provide the magic behind what many people simply know as the HTTPS they see in their browser’s address bar.
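The “keys uniquely generated for each connection” part can be illustrated with a toy sketch in Python. This is HMAC-based and illustrative only, not the actual TLS key schedule: mixing the shared secret with fresh random values from both sides yields session keys that differ on every connection.

```python
import hashlib
import hmac
import secrets

# Toy illustration only; not the real TLS key schedule.  Mixing the
# handshake's shared secret with fresh randomness from both peers
# gives every connection its own unique symmetric session key.
def derive_session_key(shared_secret, client_random, server_random):
    return hmac.new(shared_secret, client_random + server_random,
                    hashlib.sha256).digest()

shared_secret = b"negotiated at the start of the session"

# Two connections using the same shared secret but fresh random values:
key_one = derive_session_key(shared_secret,
                             secrets.token_bytes(32), secrets.token_bytes(32))
key_two = derive_session_key(shared_secret,
                             secrets.token_bytes(32), secrets.token_bytes(32))

print(key_one != key_two)  # True: every connection gets a unique key
```

Because the derivation is deterministic, both peers compute the same key from the same inputs, yet an eavesdropper without the shared secret cannot.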


TLS 1.3 vs TLS 1.2

The Internet Engineering Task Force (IETF) is the group in charge of defining the TLS protocol, which has gone through many iterations. The previous version, TLS 1.2, was defined in RFC 5246 and has been in use for the past eight years by the majority of all web browsers. As of March 21st, 2018, TLS 1.3 has now been finalized, after going through 28 drafts.

Companies such as Cloudflare are already making TLS 1.3 available to their customers. Filippo Valsorda gave a great talk (see presentation below) on the differences between TLS 1.2 and TLS 1.3. In short, the major benefits of TLS 1.3 over TLS 1.2 are faster speeds and improved security.

Speed Benefits of TLS 1.3

TLS and encrypted connections have always added a slight overhead when it comes to web performance. HTTP/2 definitely helped with this problem, but TLS 1.3 helps speed up encrypted connections even more with features such as TLS false start and Zero Round Trip Time (0-RTT).

To put it simply, with TLS 1.2 two round-trips were needed to complete the TLS handshake. With 1.3, only one round-trip is required, which in turn cuts the encryption latency in half. This helps those encrypted connections feel just a little bit snappier than before.

TLS 1.3 handshake performance

Another advantage of TLS 1.3 is that, in a sense, it remembers! On sites you have previously visited, you can now send data in your first message to the server. This is called “zero round trip time” (0-RTT). And yes, it also results in improved load times.

Improved Security With TLS 1.3

A big problem with TLS 1.2 is that it’s often not configured properly, which leaves websites vulnerable to attacks. TLS 1.3 removes obsolete and insecure features from TLS 1.2, including the following:

  • SHA-1
  • RC4
  • DES
  • 3DES
  • AES-CBC
  • MD5
  • Arbitrary Diffie-Hellman groups — CVE-2016-0701
  • EXPORT-strength ciphers – Responsible for FREAK and LogJam
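One way to see this cleanup from the client side is to inspect what a default Python `ssl` context is willing to negotiate. This is a sketch assuming Python 3.6+ linked against a modern OpenSSL (the exact cipher list varies by build); the legacy algorithms listed above should be absent:

```python
import ssl

# A quick check (assumes Python 3.6+ with a modern OpenSSL; exact
# cipher lists vary by build): the legacy algorithms that TLS 1.3
# drops should not appear in a default client context's offerings.
ctx = ssl.create_default_context()
names = [cipher["name"] for cipher in ctx.get_ciphers()]

for legacy in ("RC4", "MD5", "DES"):
    # "DES" also matches 3DES suites such as DES-CBC3-SHA
    assert not any(legacy in name for name in names), legacy

print("no legacy ciphers offered")
```

On older OpenSSL builds some of these suites may still appear, which is exactly the misconfiguration risk the article describes.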

Because the protocol has been simplified, administrators and developers are less likely to misconfigure it. Jessie Victors, a security consultant specializing in privacy-enhancing systems and applied cryptography, stated:

I am excited for the upcoming standard. I think we will see far fewer vulnerabilities and we will be able to trust TLS far more than we have in the past.

Google is also raising the bar: it has started warning users in Search Console that it is moving to TLS 1.2, as TLS 1.0 is no longer safe, and has set a final deadline of March 2018.

TLS 1.3 Browser Support

With Chrome 63, TLS 1.3 is enabled for outgoing connections. Support for TLS 1.3 was added back in Chrome 56 and is also supported by Chrome for Android.

TLS 1.3 is enabled by default in Firefox 52 and above (including Quantum). They are retaining an insecure fallback to TLS 1.2 until they know more about server tolerance and the 1.3 handshake.

TLS 1.3 browser support

That being said, some SSL test services on the Internet don’t support TLS 1.3 yet, and neither do other browsers such as IE, Microsoft Edge, Opera, or Safari. It will take a couple more months for those browsers to catch up; most of the remaining implementations are in development at the moment.

Cloudflare has an excellent article on why TLS 1.3 isn’t in browsers yet.

Summary

Just like with HTTP/2, TLS 1.3 is another exciting protocol update that we can expect to benefit from for years to come. Not only will encrypted (HTTPS) connections become faster, but they will also be more secure. Here’s to moving the web forward!

Source: https://kinsta.com/blog/tls-1-3/

Secure your Privacy – HERE’S WHY YOU SHOULD USE SIGNAL

Source: https://www.wired.com/story/ditch-all-those-other-messaging-apps-heres-why-you-should-use-signal/

STOP ME IF you’ve heard this before. You text a friend to finalize plans, anxiously awaiting their reply, only to get a message from them on Snapchat to say your latest story was hilarious. So, you move the conversation over to Snapchat, decide to meet up at 10:30, but then you close the app and can’t remember if you agreed on meeting at Hannegan’s or that poppin’ new brewery downtown. You can’t go back and look at the message since Snapchat messages have a short shelf life, so you send a text, but your friend has already proven to be an unreliable texter. You’d be lucky if they got back to you by midnight.

All of this illustrates a plain truth. There are just too many messaging apps. As conversations can bounce between Snapchat, iMessage, Skype, Instagram, Twitter, and Hangouts/Allo or whatever Google’s latest attempt at messaging is, they’re rendered confusing and unsearchable. We could stick to SMS, but it’s pretty limited compared to other options, and it has some security holes. Rather than just chugging along with a dozen chat apps, letting your notifications pile up, it’s time to pick one messaging app and get all of your friends on board. That way, everyone can just pick up their phones and shoot a message to anyone without hesitation.

Here comes the easy part. There’s one messaging app we should all be using: Signal. It has strong encryption, it’s free, it works on every mobile platform, and the developers are committed to keeping it simple and fast by not mucking up the experience with ads, web-tracking, stickers, or animated poop emoji.

Tales From the Crypto

Signal looks and works a lot like other basic messaging apps, so it’s easy to get started. It’s especially convenient if you have friends and family overseas because, like iMessage and WhatsApp, Signal lets you sidestep expensive international SMS fees. It also supports voice and video calls, so you can cut out Skype and FaceTime. Sure, you don’t get fancy stickers or games like some of the competition, but you can still send pictures, videos, and documents. It’s available on iOS, Android, and desktop.

But plenty of apps have all that stuff. The thing that actually makes Signal superior is that it’s easy to ensure that the contents of every chat remain private and unreadable by anyone else. As long as both parties are using the app to message each other, every single message sent with Signal is encrypted. Also, the encryption Signal uses is available under an open-source license, so experts have had the chance to test and poke at the app to make sure it stays as secure as intended.

If you’re super concerned about messages being read by the wrong eyes, Signal lets you force individual conversations to delete themselves after a designated amount of time. Signal’s security doesn’t stop at texts. All of your calls are encrypted, so nobody can listen in. Even if you have nothing to hide, it’s nice to know that your private life is kept, you know, private.

What About WhatsApp

Yes, this list of features sounds a lot like WhatsApp. It’s true, the Facebook-owned messaging app has over a billion users, offers most of the same features, and even employs Signal’s encryption to keep chats private. But WhatsApp raises a few concerns that Signal doesn’t. First, it’s owned by Facebook, a company whose primary interest is in collecting information about you to sell you ads. That alone may steer away those who feel Facebook already knows too much about us. Even though the contents of your WhatsApp messages are encrypted, Facebook can still extract metadata from your habits, like who you’re talking to and how frequently.

Still, if you use WhatsApp, chances are you already know a lot of other people who are using it. Getting all of them to switch to Signal is highly unlikely. And you know, that’s OK—WhatsApp really is the next-best option to Signal. The encryption is just as strong, and while it isn’t as cleanly stripped of extraneous features as Signal, that massive user base makes it easy to reach almost anyone in your contact list.

Chat Heads

While we’re talking about Facebook, it’s worth noting that the company’s Messenger app isn’t the safest place to keep your conversations. Aside from all the clutter inside the app, the two biggest issues with Facebook Messenger are that you have to encrypt conversations individually by flipping on the “Secret Conversations” option (good luck remembering to do that), and that anyone with a Facebook profile can just search for your name and send you a message. (Yikes!) There are too many variables in the app, and a lot of the security is out of your hands. iMessage may seem like a solid remedy to all of these woes, but it’s tucked behind Apple’s walled iOS garden, so you’re bound to leave out your closest friends who use Android devices. And if you ever switch platforms, say bye-bye to your chat history.

Signal isn’t going to win a lot of fans among those who’ve grown used to the more novel features inside their chat apps. There are no stickers, and no Animoji. Still, as privacy issues come to the fore in the minds of users, and as mobile messaging options proliferate, and as notifications pile up, everyone will be searching for a path to sanity. It’s easy to invite people to Signal. Once you’re using it, just tap the “invite” button inside the chat window, and your friend will be sent a link to download the app. Even stubborn people who only send texts can get into it—Signal can be set as your phone’s default SMS client, so the pain involved in the switch is minimal.

So let’s make a pact right now. Let’s all switch to Signal, keep our messages private, and finally put an end to the untenable multi-app shuffle that’s gone on far too long.

Macron, May, Merkel – weakening encryption and making messengers (whatsapp) vulnerable leads to data security catastrophes

By weakening strong encryption – undermining software such as the Android or iOS operating systems (subroutines, libraries, essentials) – in order to enable mass surveillance, you, the leaders of Europe, risk the data security of thousands of European companies. Is it worth it?

Even Microsoft is now warning that the government practice of “stockpiling” software vulnerabilities so that they can be used as weapons is a misguided tactic that weakens security for everybody.

“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” the company said Sunday.

Why are you doing this? Hopefully not out of a need to hand over information in order to receive intelligence from the USA?

epa05989737 French President Emmanuel Macron (L) talks with German Chancellor Angela Merkel (R) as US President Donald J. Trump (C) walks by, during a line up for the group photo at the NATO summit in Brussels, Belgium, 25 May 2017. NATO countries’ heads of state and government gather in Brussels for a one-day meeting. EPA/ARMANDO BABANI

You saw, recognised, and understood WannaCry, which affected thousands of companies throughout Europe?

The vulnerability in Windows that WannaCry takes advantage of was discovered by the NSA for its surveillance toolkit. But word got out when a hacker group known as the Shadow Brokers dumped a bunch of leaked NSA information onto the Internet in April. Microsoft, however, had already issued a software update the month before; those that downloaded and installed the patch were protected from WannaCry, but many others lagged behind and became victims.

More Android phones are using encryption and lock screen security than ever before

Many often have a false sense of just how secure their private data is on their devices — that is, if they’re thinking about it at all. Your average smartphone user just wants to access the apps and people they care about, and not worry about security.

That’s why it was extremely encouraging to hear some of the security metrics announced at Google I/O 2017. For devices running Android Nougat, roughly 80% of users are running them fully encrypted. At the same time, about 70% of Nougat devices are using a secure lock screen of some form.


That 80% encryption number isn’t amazingly surprising when you remember that Nougat has full-device encryption turned on by default, but that number also includes devices that were upgraded from Marshmallow, which didn’t have default encryption. Devices running on Marshmallow have a device encryption rate of just 25%, though, so this is a massive improvement. And the best part about Google’s insistence on default encryption is that eventually older devices will be replaced by those running Nougat or later out of the box, meaning this encryption rate could get very close to 100%.

The default settings are immensely important.

Full-device encryption is particularly effective when paired with a secure lock screen, and the 70% adoption Google’s metrics show in this regard definitely needs some work. It’s a small increase from the roughly 60% secure lock screen rate of Marshmallow phones, but a decent jump from the sub-50% rate of devices running Lollipop. The most interesting aspect of these numbers, to my eyes, is that having a fingerprint sensor on the device doesn’t signal a very large increase in adoption — perhaps just a five percentage point jump. On one hand it’s great to see people using secure lock screens even when they don’t have something as convenient as a fingerprint sensor, but then again I’d expect the simplicity of that sensor to help adoption more than these numbers show.

The trend is heading in the right direction in both of these metrics, and that’s a great sign despite the fact that secure lock screens show a slower growth rate. The closer we get both of these numbers to 100%, the better.

http://www.androidcentral.com/more-android-phones-are-using-encryption-and-lock-screen-security-eve

Don’t wanna Cry? Use Linux 

Don’t wanna Cry? Use Linux. Life is too short to reboot. 

So far, over 213,000 computers across 99 countries around the world have been infected, and the infection is still rising even hours after the kill switch was triggered by the 22-year-old British security researcher behind the Twitter handle ‘MalwareTech’.

For those unaware, WannaCry is an insanely fast-spreading ransomware that leverages a Windows SMB exploit to remotely target computers running unpatched or unsupported versions of Windows.

So far, the criminals behind the WannaCry ransomware have received nearly 100 payments from victims, totalling 15 Bitcoins, equal to USD $26,090.


Once infected, WannaCry also scans for other vulnerable computers connected to the same network, as well scans random hosts on the wider Internet, to spread quickly.

The SMB exploit currently being used by WannaCry has been identified as EternalBlue, one of a collection of hacking tools allegedly created by the NSA and subsequently dumped by a hacking group calling itself “The Shadow Brokers” over a month ago.

“If NSA had privately disclosed the flaw used to attack hospitals when they *found* it, not when they lost it, this may not have happened,” NSA whistleblower Edward Snowden says.

http://thehackernews.com/2017/05/wannacry-ransomware-cyber-attack.html

Securing Driverless Cars From Hackers Is Hard, according to Charlie Miller, Ex-NSA’s Tailored Access Operations Hacker

Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them

Two years ago, Charlie Miller and Chris Valasek pulled off a demonstration that shook the auto industry, remotely hacking a Jeep Cherokee via its internet connection to paralyze it on a highway. Since then, the two security researchers have been quietly working for Uber, helping the startup secure its experimental self-driving cars against exactly the sort of attack they proved was possible on a traditional one. Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from hackers is a very difficult problem. It’s time to get serious about solving it.

Last month, Miller left Uber for a position at Chinese competitor Didi, a startup that’s just now beginning its own autonomous ridesharing project. In his first post-Uber interview, Miller talked to WIRED about what he learned in those 19 months at the company—namely that driverless taxis pose a security challenge that goes well beyond even those faced by the rest of the connected car industry.

Miller couldn’t talk about any of the specifics of his research at Uber; he says he moved to Didi in part because the company has allowed him to speak more openly about car hacking. But he warns that before self-driving taxis can become a reality, the vehicles’ architects will need to consider everything from the vast array of automation in driverless cars that can be remotely hijacked, to the possibility that passengers themselves could use their physical access to sabotage an unmanned vehicle.

“Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse.”

At A Computer’s Mercy

In a series of experiments starting in 2013, Miller and Valasek showed that a hacker with either wired or over-the-internet access to a vehicle—including a Toyota Prius, Ford Escape, and a Jeep Cherokee—could disable or slam on a victim’s brakes, turn the steering wheel, or, in some cases, cause unintended acceleration. But to trigger almost all those attacks, Miller and Valasek had to exploit vehicles’ existing automated features. They used the Prius’ collision avoidance system to apply its brakes, and the Jeep’s cruise control feature to accelerate it. To turn the Jeep’s steering wheel, they tricked it into thinking it was parking itself—even if it was moving at 80 miles per hour.
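Attacks like these work because the CAN bus that links a car's components carries no sender authentication: any node that can put a frame on the bus with the right message ID is obeyed. As a minimal illustration, here is how a classic CAN frame is laid out in Linux's SocketCAN wire format; the arbitration ID below is invented, since the real IDs are model-specific and had to be reverse-engineered by Miller and Valasek for each vehicle:

```python
import struct

def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in Linux SocketCAN's 16-byte layout:
    32-bit CAN ID, 8-bit data length, 3 padding bytes, 8 data bytes."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id, len(data), data)

# A frame with a made-up arbitration ID and payload, purely for
# illustration. On a real bus it would be written to a raw CAN socket,
# and any ECU listening for that ID would act on it, no questions asked.
frame = build_can_frame(0x244, bytes([0x01, 0xFF]))
assert len(frame) == 16
```

The absence of any signature or sender field in that 16-byte structure is the whole problem: receivers cannot tell a legitimate parking-assist command from a spoofed one.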

Their car-hacking hijinks, in other words, were limited to the few functions a vehicle’s computer controls. In a driverless car, the computer controls everything. “In an autonomous vehicle, the computer can apply the brakes and turn the steering wheel any amount, at any speed,” Miller says. “The computers are even more in charge.”

An alert driver could also override many of the attacks Miller and Valasek demonstrated on traditional cars: Tap the brakes and that cruise control acceleration immediately ceases. Even the steering wheel attacks could be easily overcome if the driver wrests control of the wheel. When the passenger isn’t in the driver’s seat—or there is no steering wheel or brake pedal—no such manual override exists. “No matter what we did in the past, the human had a chance to control the car. But if you’re sitting in the backseat, that’s a whole different story,” says Miller. “You’re totally at the mercy of the vehicle.”

Hackers Take Rides, Too

A driverless car that’s used as a taxi, Miller points out, poses even more potential problems. In that situation, every passenger has to be considered a potential threat. Security researchers have shown that merely plugging an internet-connected gadget into a car’s OBD2 port—a ubiquitous outlet under its dashboard—can offer a remote attacker an entry point into the vehicle’s most sensitive systems. (Researchers at the University of California, San Diego showed in 2015 that they could take control of a Corvette’s brakes via a common OBD2 dongle distributed by insurance companies—including one that partnered with Uber.)

“There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network.”

Permanently plugging that port is illegal under federal regulations, Miller says. He suggests ridesharing companies that use driverless cars could cover it with tamper-evident tape. But even then, they might only be able to narrow down which passenger could have sabotaged a vehicle to a certain day or week. A more comprehensive fix would mean securing the vehicle’s software so that not even a malicious hacker with full physical access to its network would be able to hack it—a challenge Miller says only a few highly locked-down products like an iPhone or Chromebook can pass.

“It’s definitely a hard problem,” he says.

Deep Fixes

Miller argues that solving autonomous vehicles’ security flaws will require fundamental changes to their security architecture. Their internet-connected computers, for instance, will need code signing, a measure that ensures they run only trusted software signed with a manufacturer’s cryptographic key; so far, only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better segmentation and authentication, so that critical components don’t blindly follow commands arriving from the OBD2 port. They need intrusion detection systems that can alert the driver—or rider—when something anomalous happens on the cars’ internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface”: the set of services that accept data over the internet and could be fed malicious input.
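The segmentation idea can be sketched in a few lines: a gateway node sits between the diagnostic segment (where an OBD2 dongle lives) and the drivetrain segment, and forwards only an allow-listed set of message IDs, logging everything it drops as fodder for an intrusion detection system. This is a toy model, not any manufacturer's design; the allowed IDs are the standard OBD-II diagnostic request identifiers, while the blocked one is invented:

```python
# Standard OBD-II diagnostic request IDs: 0x7DF (functional broadcast)
# and 0x7E0 (physical request to the first ECU).
DIAG_ALLOWED = {0x7DF, 0x7E0}
LOG = []  # dropped frames here would feed an intrusion-detection alert

def gateway_forward(source: str, can_id: int, data: bytes) -> bool:
    """Return True if a frame from `source` is forwarded to the drivetrain bus."""
    if source == "obd2" and can_id not in DIAG_ALLOWED:
        LOG.append(("dropped", can_id))
        return False
    LOG.append(("forwarded", can_id))
    return True

# A legitimate diagnostic query passes; a spoofed control frame does not.
assert gateway_forward("obd2", 0x7DF, b"\x02\x01\x00") is True
assert gateway_forward("obd2", 0x244, b"\x01") is False
```

The hard part, as Miller notes, is not this filtering logic but deciding, for every ECU on a real vehicle, which messages from which segments are ever legitimate.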

Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”

That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.

Car hacking, fortunately, remains largely a concern for the future: No car has yet been digitally hijacked in a documented, malicious case. But that means now’s the time to work on the problem, Miller says, before cars become more automated and make the problem far more real. “We have some time to build up these security measures and get them right before something happens,” says Miller. “And that’s why I’m doing this.”

https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/