Tag Archives: autonomous car

Ford Says It’ll Have a Fleet of Fully Autonomous Cars in Just 5 Years

Ford Fusion Autonomous Research Vehicles Use LiDAR Sensor Technology

The social dilemma of autonomous vehicles

Here is the full MIT report, published in Science magazine: http://science.sciencemag.org/content/352/6293/1573.full

Ethical question leaves potential buyers torn over self-driving cars, study says

Faced with two deadly options, the public want driverless vehicles to crash rather than hurt pedestrians – unless the vehicle in question is theirs

A self-driving Lexus SUV, operated by Google, after colliding with a public bus in Mountain View, California, in February 2016. Photograph: AP

In catch-22 traffic emergencies where there are only two deadly options, people generally want a self-driving vehicle to, for example, avoid a group of pedestrians and instead slam itself and its passengers into a wall, a new study says. But they would rather not be travelling in a car designed to do that.

The findings of the study, released on Thursday in the journal Science, highlight just how difficult it may be for auto companies to market those cars to a public that tends to contradict itself.

“People want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs,” Iyad Rahwan, a co-author of the study and a professor at MIT, said. “And car makers who offer such cars will sell more cars, but if everybody thinks this way then we end up in a world in which every car will look after its own passenger’s safety … and society as a whole is worse off.”

Through a series of online surveys, the authors found that people generally approve of cars that sacrifice their passengers for the greater good, such as sparing a group of pedestrians, and would like others to buy those cars, but they themselves would prefer to ride in a car that protects its passengers at all costs.

Several people working on bringing self-driving cars to market said that while the philosophical and ethical question over the two programming options is important to consider, real-life situations would be far more complex.

Brian Lathrop, a cognitive scientist who works on Volkswagen’s self-driving cars project, stressed that in real life there are likelihoods and contingencies that the academic example leaves out.

“You have to make a decision that the occupant in the vehicle is always going to be safer than the pedestrians, because they’re in a 3,000lb steel cage with all the other safety features,” said Lathrop, who was not involved in the new study.

So in a situation in which a car needs to, say, slam into a tree to avoid hitting a group of pedestrians, “obviously, you would choose to program it to go into the tree,” he said.

A spokesman for Google, whose self-driving car technology is generally seen as being the furthest along, suggested that asking about hypothetical scenarios might ignore the more important question of how to avoid deadly situations in the first place.

The problem seems to be how to get people to trust cars to consistently do the right thing if we’re not even sure we want them to do what we think is the right thing.

The study’s authors argue that since self-driving cars are expected to drastically reduce traffic fatalities, a delay in adopting the new technology could itself be deadly. Regulations requiring self-driving cars to sacrifice their passengers could move things forward, they write. But, in another catch-22, forcing the self-sacrificing programming could actually delay widespread adoption by consumers.

Susan Anderson, an ethicist at the University of Connecticut, and her husband and research partner, Michael Anderson, a computer science professor at the University of Hartford, believe the cars will be able to make the right call.

“We do believe that properly programmed machines are likely to make decisions that are more ethically justifiable than humans,” they said in an email. “Also, properly programmed self-driving cars should have information that humans may not readily have,” including precise stopping distance, whether it is better to swerve or brake, and the likely degree of harm.
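
As a rough illustration of one such piece of information, stopping distance can be estimated from speed, reaction time, and achievable deceleration using the standard kinematic formula. This is only a sketch; the reaction-time and deceleration values below are illustrative assumptions, not figures from the study.

```python
def stopping_distance_m(speed_mps: float,
                        reaction_time_s: float = 0.2,
                        deceleration_mps2: float = 7.0) -> float:
    """Estimate total stopping distance in meters as
    reaction distance + braking distance = v * t + v**2 / (2 * a).
    The reaction time and deceleration are illustrative guesses; a
    self-driving car would plug in its own measured values."""
    reaction_distance = speed_mps * reaction_time_s
    braking_distance = speed_mps ** 2 / (2 * deceleration_mps2)
    return reaction_distance + braking_distance

# e.g. at 30 m/s (about 67 mph): 6 m of reaction distance plus ~64 m of braking
print(round(stopping_distance_m(30.0), 1))   # ~70.3
```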

How to get those cars “properly programmed”? The Andersons, who were not involved in the study, suggest having the cars learn from or be given “general ethical principles from applied ethicists”.

https://www.theguardian.com/technology/2016/jun/23/self-driving-car-safety-study-pedestrian-crashes

BMW builds World’s First Autonomous Self-Driving Drifting Car

The Ultimate Drifting Machine. BMW 2 Series M235i

Enhanced safety and precision at the vehicle’s limits with highly automated driving

Source: http://www.wired.com/2014/01/bmw-builds-self-drifting-car/

BMW is showing off a modified 2-Series Coupe and 6-Series Gran Coupe that can race around a track at the limits of adhesion, and slide around corners like a throttle-happy Formula Drift ace.

Both cars are outfitted with a LIDAR system, 360-degree radar, ultrasonic sensors, and cameras that track the environment. Partnered with the electronic braking, throttle, and steering control that’s standard on all new BMWs, the prototypes can run through a high-speed slalom, perform precise lane changes, and slide around corners, without any driver intervention.

The Robot Car of Tomorrow May Just Be Programmed to Hit You

Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require deliberate and systematic discrimination against, say, large vehicles as preferred collision targets. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

Is This a Realistic Problem?

Some road accidents are unavoidable, and even autonomous cars can’t escape that fate. A deer might dart out in front of you, or the car in the next lane might suddenly swerve into you. Short of defying physics, a crash is imminent. An autonomous or robot car, though, could make things better.

While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we’re even aware of danger. It can make split-second choices to optimize crashes–that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases.

In constructing the edge cases here, we are not trying to simulate actual conditions in the real world. These scenarios would be very rare, if realistic at all, but nonetheless they illuminate hidden or latent problems in normal cases. From the above scenario, we can see that crash-avoidance algorithms can be biased in troubling ways, and this is also at least a background concern any time we make a value judgment that one thing is better to sacrifice than another thing.

In previous years, robot cars have been largely quarantined to highway or freeway environments. This is a relatively simple setting, in that drivers don’t need to worry so much about pedestrians and the countless surprises of city driving. But Google recently announced that it has taken the next step and is testing its automated car on exactly those city streets. As their operating environment becomes more dynamic and dangerous, robot cars will confront harder choices, be it running into objects or even people.

Ethics Is About More Than Harm

The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much higher statistical odds that the biker without a helmet would die, and killing someone is surely one of the outcomes auto manufacturers most want to avoid.

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person is much less responsible, for not wearing a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists not to wear helmets, in order to avoid standing out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety, such as Volvo and Mercedes-Benz, may suffer if customers want to avoid being the robot car’s target of choice.

The Role of Moral Luck

An elegant solution to these vexing dilemmas is to simply not make a deliberate choice. We could design an autonomous car to make certain decisions through a random-number generator. That is, if it’s ethically problematic to choose which one of two things to crash into–a large SUV versus a compact car, or a motorcyclist with a helmet versus one without, and so on–then why make a calculated choice at all?

A robot car’s programming could generate a random number: if it is odd, the car takes one path; if it is even, it takes the other. This avoids the possible charge that the car’s programming is discriminatory against large SUVs, responsible motorcyclists, or anything else.
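
As a minimal sketch of that idea (the option names are hypothetical, and the article does not prescribe any implementation), the tie-break could look like this:

```python
import random

def choose_crash_path(option_a: str, option_b: str) -> str:
    """Pick between two unavoidable crash options by the parity of a random draw,
    as described above: an odd number selects one path, an even number the other."""
    return option_a if random.randint(0, 999_999) % 2 == 1 else option_b

# Hypothetical usage; neither target is deliberately favored.
path = choose_crash_path("swerve toward the SUV", "swerve toward the compact car")
print(path)
```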

This randomness also doesn’t seem to introduce anything new into our world: luck is all around us, both good and bad. A random decision also better mimics human driving, insofar as split-second emergency reactions can be unpredictable and are not based on reason, since there’s usually not enough time to apply much human reason.

Yet, the random-number engine may be inadequate for at least a few reasons. First, it is not obviously a benefit to mimic human driving, since a key reason for creating autonomous cars in the first place is that they should be able to make better decisions than we do. Human error, distracted driving, drunk driving, and so on are responsible for 90 percent or more of car accidents today, and 32,000-plus people die on U.S. roads every year.

Second, while human drivers may be forgiven for making a poor split-second reaction–for instance, crashing into a Pinto that’s prone to explode, instead of a more stable object–robot cars won’t enjoy that freedom. Programmers have all the time in the world to get it right. It’s the difference between premeditated murder and involuntary manslaughter.

Third, for the foreseeable future, what matters isn’t just arriving at the “right” answers to difficult ethical dilemmas, as nice as that would be. It’s also about being thoughtful about your decisions and being able to defend them – it’s about showing your moral math. In ethics, the process of thinking through a problem is as important as the result. Making decisions randomly evades that responsibility: instead of being thoughtful, such decisions are thoughtless, and that may be worse than reflexive human judgments that lead to bad outcomes.

Can We Know Too Much?

A less drastic solution would be to hide certain information that might enable inappropriate discrimination – a “veil of ignorance”, so to speak. As it applies to the above scenarios, this could mean not ascertaining the make or model of other vehicles, or the presence of helmets and other safety equipment, even if technology could give us that information, for example through vehicle-to-vehicle communications. If we did that, there would be no basis for bias.
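
A minimal sketch of how such a veil of ignorance might be approximated in software, with hypothetical attribute names invented for illustration: potentially discriminatory fields are simply stripped from each detected object before any crash-optimization logic sees them.

```python
# Hypothetical attribute names for an object the car's sensors have detected.
DISCRIMINATORY_FIELDS = {"make", "model", "helmet_worn", "brand_safety_rating"}

def apply_veil_of_ignorance(detected_object: dict) -> dict:
    """Return a copy of the detected object's description with attributes that
    could enable targeting (vehicle make/model, helmet use, etc.) stripped out,
    so that crash-optimization logic cannot condition on them."""
    return {key: value
            for key, value in detected_object.items()
            if key not in DISCRIMINATORY_FIELDS}

# Example: only position, speed, and size survive the filter.
obstacle = {"position": (12.0, -3.5), "speed_mps": 8.0, "length_m": 4.4,
            "make": "Volvo", "model": "XC90", "helmet_worn": None}
print(apply_veil_of_ignorance(obstacle))
```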

Not using that information in crash-optimization calculations may not be enough. To be in the ethical clear, autonomous cars may need to not collect that information at all. If they possess the information, and using it could have minimized harm or saved a life, there could be legal liability for failing to use it. Imagine a similar public outrage if a national intelligence agency had credible information about a terrorist plot but failed to use it to prevent the attack.

A problem with this approach, however, is that auto manufacturers and insurers will want to collect as much data as technically possible, to better understand robot-car crashes and for other purposes, such as novel forms of in-car advertising. So it’s unclear whether voluntarily turning a blind eye to key information is realistic, given the strong temptation to gather as much data as technology will allow.

So, Now What?

In future autonomous cars, crash-avoidance features alone won’t be enough. Sometimes an accident will be unavoidable as a matter of physics, for myriad reasons–such as insufficient time to press the brakes, technology errors, misaligned sensors, bad weather, and just pure bad luck. Therefore, robot cars will also need to have crash-optimization strategies.

To optimize crashes, programmers would need to design cost-functions–algorithms that assign and calculate the expected costs of various possible options, selecting the one with the lowest cost–that potentially determine who gets to live and who gets to die. And this is fundamentally an ethics problem, one that demands care and transparency in reasoning.
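
A bare-bones sketch of what such a cost-function might look like. The option names, probabilities, and weights below are invented for illustration; a real system would rely on far richer models, and, as argued above, the weights themselves are ethical choices that need to be defended.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    description: str
    p_harm_occupants: float    # estimated probability of serious harm to the car's occupants
    p_harm_others: float       # estimated probability of serious harm to other people
    property_damage_usd: float

def expected_cost(option: CrashOption,
                  w_occupants: float = 1.0,
                  w_others: float = 1.0,
                  w_property: float = 1e-6) -> float:
    """Weighted expected cost of one option. The weights encode exactly the
    ethical trade-offs discussed above; choosing them is the hard part."""
    return (w_occupants * option.p_harm_occupants
            + w_others * option.p_harm_others
            + w_property * option.property_damage_usd)

def choose_option(options: list[CrashOption]) -> CrashOption:
    """Select the option with the lowest expected cost."""
    return min(options, key=expected_cost)

# Example with invented numbers: the car picks whichever option scores lower.
options = [
    CrashOption("swerve into the SUV", 0.30, 0.10, 25_000),
    CrashOption("swerve into the compact car", 0.20, 0.40, 15_000),
]
print(choose_option(options).description)
```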

It doesn’t matter much that these are rare scenarios. Often, the rare scenarios are the most important ones, making for breathless headlines. In the U.S., a traffic fatality occurs about once every 100 million vehicle-miles traveled. That means you could drive for more than 100 lifetimes and never be involved in a fatal crash. Yet these rare events are exactly what we’re trying to avoid by developing autonomous cars, as Chris Gerdes at Stanford’s School of Engineering reminds us.
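
A rough back-of-the-envelope check of that claim, assuming an average driver covers on the order of 13,500 miles per year over roughly 65 years behind the wheel (approximations not taken from the article):

```python
fatality_interval_miles = 100_000_000  # ~1 U.S. traffic fatality per 100 million vehicle-miles
miles_per_year = 13_500                # rough average annual mileage per driver (assumption)
driving_years = 65                     # rough length of a driving lifetime (assumption)

lifetime_miles = miles_per_year * driving_years          # 877,500 miles
print(round(fatality_interval_miles / lifetime_miles))   # ~114 lifetimes
```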

Again, the above scenarios are not meant to simulate real-world conditions; they are thought experiments – something like scientific experiments – meant to simplify the issues in order to isolate and study certain variables. In this case, the variable is the role of ethics, specifically discrimination and justice, in crash-optimization strategies more broadly.

The larger challenge, though, isn’t just thinking through ethical dilemmas. It’s also about setting accurate expectations with users and the general public, who might otherwise find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry might lean towards will not be satisfying to everyone.

Ethics and expectations are challenges common to all automotive manufacturers and tier-one suppliers who want to play in this emerging field, not just particular companies. As the first step toward solving these challenges, creating an open discussion about ethics and autonomous cars can help raise public and industry awareness of the issues, defusing outrage (and therefore large lawsuits) when bad luck or fate crashes into us.

Source: http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you