Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them

Two years ago, Charlie Miller and Chris Valasek pulled off a demonstration that shook the auto industry, remotely hacking a Jeep Cherokee via its internet connection to paralyze it on a highway. Since then, the two security researchers have been quietly working for Uber, helping the startup secure its experimental self-driving cars against exactly the sort of attack they proved was possible on a traditional one. Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from hackers is a very difficult problem. It’s time to get serious about solving it.

Last month, Miller left Uber for a position at Chinese competitor Didi, a startup that’s just now beginning its own autonomous ridesharing project. In his first post-Uber interview, Miller talked to WIRED about what he learned in those 19 months at the company—namely that driverless taxis pose a security challenge that goes well beyond even those faced by the rest of the connected car industry.

Miller couldn’t talk about any of the specifics of his research at Uber; he says he moved to Didi in part because the company has allowed him to speak more openly about car hacking. But he warns that before self-driving taxis can become a reality, the vehicles’ architects will need to consider everything from the vast array of automation in driverless cars that can be remotely hijacked, to the possibility that passengers themselves could use their physical access to sabotage an unmanned vehicle.

“Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse.”

At A Computer’s Mercy

In a series of experiments starting in 2013, Miller and Valasek showed that a hacker with either wired or over-the-internet access to a vehicle—including a Toyota Prius, Ford Escape, and a Jeep Cherokee—could disable or slam on a victim’s brakes, turn the steering wheel, or, in some cases, cause unintended acceleration. But to trigger almost all those attacks, Miller and Valasek had to exploit vehicles’ existing automated features. They used the Prius’ collision avoidance system to apply its brakes, and the Jeep’s cruise control feature to accelerate it. To turn the Jeep’s steering wheel, they tricked it into thinking it was parking itself—even if it was moving at 80 miles per hour.

Their car-hacking hijinks, in other words, were limited to the few functions a vehicle’s computer controls. In a driverless car, the computer controls everything. “In an autonomous vehicle, the computer can apply the brakes and turn the steering wheel any amount, at any speed,” Miller says. “The computers are even more in charge.”

An alert driver could also override many of the attacks Miller and Valasek demonstrated on traditional cars: Tap the brakes and that cruise control acceleration immediately ceases. Even the steering wheel attacks could be easily overcome if the driver wrests control of the wheel. When the passenger isn’t in the driver’s seat—or there is no steering wheel or brake pedal—no such manual override exists. “No matter what we did in the past, the human had a chance to control the car. But if you’re sitting in the backseat, that’s a whole different story,” says Miller. “You’re totally at the mercy of the vehicle.”

Hackers Take Rides, Too

A driverless car that’s used as a taxi, Miller points out, poses even more potential problems. In that situation, every passenger has to be considered a potential threat. Security researchers have shown that merely plugging an internet-connected gadget into a car’s OBD2 port—a ubiquitous outlet under its dashboard—can offer a remote attacker an entry point into the vehicle’s most sensitive systems. (Researchers at the University of California at San Diego showed in 2015 that they could take control of a Corvette’s brakes via a common OBD2 dongle distributed by insurance companies—including one that partnered with Uber.)

“There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network.”

Permanently plugging that port is illegal under federal regulations, Miller says. He suggests ridesharing companies that use driverless cars could cover it with tamper-evident tape. But even then, they might only be able to narrow down which passenger could have sabotaged a vehicle to a certain day or week. A more comprehensive fix would mean securing the vehicle’s software so that not even a malicious hacker with full physical access to its network would be able to hack it—a challenge Miller says only a few highly locked-down products like an iPhone or Chromebook can pass.

“It’s definitely a hard problem,” he says.

Deep Fixes

Miller argues that solving autonomous vehicles’ security flaws will require some fundamental changes to their security architecture. Their internet-connected computers, for instance, will need “code signing,” a measure that ensures they only run trusted code signed with a certain cryptographic key. Today only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better internal segmentation and authentication, so that critical components don’t blindly follow commands from the OBD2 port. They need intrusion detection systems that can alert the driver—or rider—when something anomalous happens on the cars’ internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface”: the set of services that might accept malicious data sent over the internet.
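To make the code-signing idea concrete, here is a minimal sketch in Python using the third-party cryptography library. The Ed25519 key choice, the function names, and the update flow are illustrative assumptions for this article, not a description of any automaker’s actual update mechanism.

```python
# A minimal sketch of the code-signing idea Miller describes: a vehicle
# computer refuses to run a firmware image unless it verifies against a
# public key baked into the unit. All names here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In a real vehicle the private key would stay with the manufacturer's
# build pipeline; only the public key ships in the car. We generate both
# here so the sketch runs end to end.
signing_key = ed25519.Ed25519PrivateKey.generate()
ecu_trusted_key = signing_key.public_key()

firmware_image = b"hypothetical firmware blob"
signature = signing_key.sign(firmware_image)  # produced at build time

def install_firmware(image: bytes, sig: bytes) -> bool:
    """Accept an update only if it verifies against the baked-in key."""
    try:
        ecu_trusted_key.verify(sig, image)
    except InvalidSignature:
        return False  # reject unsigned or tampered code
    return True

assert install_firmware(firmware_image, signature)                 # trusted update
assert not install_firmware(firmware_image + b"\x00", signature)   # tampered image
```

The same idea extends to the other measures Miller lists: a component that authenticates commands or flags anomalous traffic is, at bottom, refusing to act on messages it cannot verify.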

Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”

That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.

Car hacking, fortunately, remains largely a concern for the future: No car has yet been digitally hijacked in a documented, malicious case. But that means now’s the time to work on the problem, Miller says, before cars become more automated and make the problem far more real. “We have some time to build up these security measures and get them right before something happens,” says Miller. “And that’s why I’m doing this.”

https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/
