Source: http://www.wired.com/2015/10/googles-lame-demo-shows-us-far-robo-car-come/
Killing the Driver
Google has been developing this technology for six years, and is taking a distinctly different approach than everyone else. Conventional automakers are rolling out features piecemeal, over the course of many years, starting with active safety features like automatic braking and lane departure warnings.
Google doesn’t give a hoot about anything less than a completely autonomous vehicle, one that reduces “driving” to little more than getting in, typing in a destination, and enjoying the ride. It wants a consumer-ready product within four years.
The Silicon Valley juggernaut is making rapid progress. Its fleet of modified Lexus SUVs and prototypes has racked up 1.2 million autonomous miles on public roads, and covers 10,000 more each week. Most of that has been done in Mountain View, and Google expanded its testing to Austin last summer.
It’s unclear how this technology will reach consumers, but Google is more likely to sell its software than manufacture its own cars. At the very least, it won’t sell this dinky prototype to the public.
Predicting the Future
As the Google car moves, its laser, camera, and radar systems constantly scan the environment around it, 360 degrees and up to 200 yards away.
“We look at the world around us, and we detect objects in the scene, we categorize them as different types,” says Dmitri Dolgov, the project’s chief engineer. The car knows the difference between people, cyclists, cars, trucks, ambulances, cones, and more. Based on those categories and its surroundings, it anticipates what they’re likely to do.
Making those predictions is likely the most crucial work the team is doing, and it’s based on the huge amount of time the cars have spent dealing with the real world. Anything one car sees is shared with every other car, and nothing is forgotten. From that data, the team builds probabilistic models for the cars to follow.
“All the miles we’ve driven and all the data that we’ve collected allowed us to build very accurate models of how different types of objects behave,” Dolgov says. “We know what to expect from pedestrians, from cyclists, from cars.”
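The article doesn’t describe Google’s actual models, but the idea of fleet-wide, per-category behavior statistics can be sketched roughly like this. Every name and number below is invented for illustration: a shared model tallies observed behaviors by object category and predicts the most likely one.

```python
from collections import Counter

class BehaviorModel:
    """Hypothetical sketch of a per-category behavior model.

    Loosely illustrates the kind of probabilistic prediction described
    above: observations from every car feed one shared model, and the
    model predicts what each type of object is likely to do next.
    """

    def __init__(self):
        # category -> Counter of observed behaviors
        self.observations = {}

    def record(self, category, behavior):
        """Log one observed behavior for an object category."""
        self.observations.setdefault(category, Counter())[behavior] += 1

    def predict(self, category):
        """Return (most likely behavior, probability), or None if unseen."""
        counts = self.observations.get(category)
        if not counts:
            return None  # never-seen category: no confident prediction
        behavior, n = counts.most_common(1)[0]
        return behavior, n / sum(counts.values())

# Fleet-wide sharing: observations from many cars accumulate in one model.
model = BehaviorModel()
for _ in range(90):
    model.record("cyclist", "continue straight")
for _ in range(10):
    model.record("cyclist", "turn left")

print(model.predict("cyclist"))  # ('continue straight', 0.9)
```

The point of the sketch is the data flow Dolgov describes: nothing any car sees is forgotten, so the per-category distributions sharpen as the fleet’s mileage grows.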
Those are the key learnings the test drive on the rooftop parking lot was meant to show off. If I may anthropomorphize: The car spotted a person on foot walking near its route and figured, “You’re probably going to jaywalk.” It saw a car coming up quickly from the left and thought, “There’s a good chance you’re going to keep going and cut me off.” When the cyclist in front put his left arm out, the car understood that as a turn signal.
This is how good human drivers think. And the cars have the added advantage of better vision, quicker processing times, and the inability to get distracted, or tired, or drunk, or angry.
Detecting Anomalies
The great challenge of making a car with no steering wheel for a human to grab is that the car must be able to handle every situation it encounters. Google acknowledges there’s no way to anticipate and model for every situation. So the team created what it calls “anomaly detection.”
If the cars see behavior or an object they can’t categorize, “they understand their own limitations,” Dolgov says. “They understand that there’s something really crazy going on and they might not be able to make really good, confident predictions about the future. So they take a very conservative approach.”
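That fallback logic can be sketched as a simple rule, with the caveat that the categories, threshold, and function names below are all invented for illustration, not Google’s implementation: when the car can’t assign a known category with enough confidence, it abandons prediction and defaults to conservative behavior.

```python
# Hypothetical sketch of the "anomaly detection" fallback described above.
# Categories, threshold, and action names are invented for illustration.

KNOWN_CATEGORIES = {"pedestrian", "cyclist", "car", "truck", "ambulance", "cone"}
CONFIDENCE_THRESHOLD = 0.8

def choose_action(category, confidence):
    """Pick a driving action given a detection and its classifier confidence."""
    if category not in KNOWN_CATEGORIES or confidence < CONFIDENCE_THRESHOLD:
        # Something the models can't confidently predict:
        # slow down and let the situation play out.
        return "slow_and_yield"
    return "proceed_with_prediction"

print(choose_action("cyclist", 0.95))            # proceed_with_prediction
print(choose_action("unrecognized_object", 0.6)) # slow_and_yield
```

The design choice the sketch highlights is that the system treats low confidence itself as a signal: rather than forcing every observation into a known category, it recognizes its own limits and trades speed for caution.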
One of Google’s cars once encountered a woman in a wheelchair, armed with a broom, chasing a turkey. Seriously. Unsurprisingly, this was a first for the car. So the car did what a good human driver would have done. It slowed down, Dolgov says, and let the situation play out. Then it went along its way. Unlike a human, though, it did not make a video and post it on Instagram.