
Tech Trends to Watch

It’s not just about robots. These seven other technologies will transform the future of work.

Advances in robotics and artificial intelligence aren’t the only tech trends reshaping the future of work. Rather, they are among the most visible of a confluence of powerful overlapping developments that strengthen, reinforce, and accelerate each other. The combination of these forces has led analysts to speak of a new era in the evolution of the global economy. Below, a primer on seven other new technologies driving that transition:


Digitization

One of the most remarkable and durable predictions about the pace of technological change in the modern era is Moore’s Law, the observation that the number of transistors on an integrated circuit doubles approximately every two years. Moore’s Law gets its name from Gordon Moore, co-founder of Intel, who first articulated the idea in 1965. Initially, Moore projected that the number of transistors packed onto a silicon wafer would double annually for at least another decade. In 1975, he revised his estimate to doubling every two years and guessed it might hold a decade longer. In fact, Moore’s rule of thumb has held true for more than five decades and is used to guide long-term planning throughout the industrialized world. The latest Intel processor contains about 1.75 billion transistors, compared with the 2,300 on the first microchip Intel sold commercially back in 1971.
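A quick back-of-the-envelope check on those figures (the 2016 endpoint and the arithmetic are mine, not Intel’s) shows how close the implied doubling period comes to Moore’s revised two-year estimate:

```python
from math import log2

# Rough sanity check on the figures above: 2,300 transistors in 1971
# versus roughly 1.75 billion today (taking "today" as 2016).
transistors_1971 = 2_300
transistors_now = 1_750_000_000
years = 2016 - 1971

doublings = log2(transistors_now / transistors_1971)             # ~19.5 doublings
print(f"implied doubling period: {years / doublings:.1f} years")  # ~2.3
```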

Many experts think the physics of metal-oxide semiconductor technology will make it impractical to keep shrinking transistors after around 2020. But even at a slower rate, the implications of such extraordinary gains in our ability to process and store data are far-reaching. If the invention of the microchip was the key technological breakthrough that unleashed the “Third Industrial Revolution”—destroying jobs in a slew of sectors including media, retail, financial, and legal services—then the unrelenting exponential advance of computing power has facilitated other profound new technological developments that now define the Fourth Industrial Revolution.


The Internet of Things

Smaller, faster transistors have made it possible for us to embed sensors and actuators in almost every imaginable object—not just computers, but also machines, hand-held gadgets, home appliances, cars, roads, product packaging, clothing, even humans themselves. Advances in mobile and wireless technologies have made it possible for all those “things” to exchange data with each other, creating, in effect, an “Internet of Things.” This network of digitally enabled things has grown at such a staggering pace that, in data terms, it dwarfs the Internet we use to connect with each other. Cisco predicts that by 2020, the number of connected things will exceed 50 billion—the equivalent of six objects for every human on the planet.

The real significance of the Internet of Things lies not in the profusion of data-gathering sensors but in the fact that these sensors can be connected, and that we can evaluate and act on the data collected via this new digital infrastructure in real time no matter what source it comes from or form it assumes. Suddenly every aspect of our lives can be made “smart.”
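As a concrete (and entirely hypothetical) example of how one of those “things” might report its readings over the network, here is a minimal sketch using the paho-mqtt 1.x client for Python; the broker address, topic, and sensor values are all invented:

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER = "broker.example.com"  # hypothetical MQTT broker address

client = mqtt.Client()
client.connect(BROKER, 1883)

# A smart thermostat publishing a reading every five seconds; any number
# of other "things" can subscribe to this topic and react in real time.
for _ in range(3):
    reading = {"device": "thermostat-42", "temp_c": round(random.gauss(21, 0.5), 2)}
    client.publish("home/livingroom/temperature", json.dumps(reading))
    time.sleep(5)
```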


Big Data

Being able to collect loads of data and knowing how to analyze and interpret it are very different propositions. Today, more data crosses the Internet every second than was stored in the entire Internet just 20 years ago. Large companies generate data in petabytes—a quadrillion bytes, or the equivalent of 20 million filing cabinets’ worth of text. Gartner, a technology consultancy, defines Big Data in terms of “three Vs”: volume, velocity, and variety.
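The filing-cabinet comparison roughly checks out under some plausible assumptions of my own about how much text a cabinet holds:

```python
# A quadrillion bytes, divided by a guessed ~50 MB of plain text per
# four-drawer cabinet (2 KB per page, 25,000 pages per cabinet).
PETABYTE = 10**15
bytes_per_cabinet = 2_000 * 25_000
print(f"{PETABYTE / bytes_per_cabinet:,.0f} cabinets")  # 20,000,000
```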

As the Economist put it, “Today we have more information than ever. But the importance of all that information extends beyond simply being able to do more, or know more, than we already do. The quantitative shift leads to a qualitative shift. Having more data allows us to do new things that weren’t possible before. In other words: More is not just more. More is new. More is better. More is different.”

Big Data will not only provide valuable new insights into consumer behavior, but will also change the way we work in all sorts of ways. It could change the hiring process, for example, and many employers are already using sensors and software to monitor employee performance and, indeed, their every move. In theory, Big Data can also work the other way, enabling prospective employees to ferret out employers who treat their workers badly. But my guess, for what it’s worth, is that Big Data will help tilt the balance of power decisively in favor of companies at the expense of workers.


Cloud Computing

What we have come to call “the cloud” is made up of networks of data centers that deliver services over the Internet. Unlike stand-alone computers, whose performance depends on the speed of their processor chips, computers connected to the cloud can be made more powerful without changes to their hardware. The Economist has called the shift to the cloud “the biggest upheaval in the IT industry since smaller, networked machines dethroned mainframe computers in the early 1990s.”

This shift will only accelerate as Moore’s Law comes to an end. Firms will upgrade their own computers and servers less often and rely instead on continuous improvement of services by cloud providers.

The clear leader in cloud computing is Amazon, which launched a separate cloud business, Amazon Web Services, in 2006. Today AWS boasts more than a million customers and offers myriad services, including encryption, data storage, and machine learning. Other players include Google, Microsoft, Alibaba, Baidu, and Tencent. These firms look well positioned to disrupt traditional sellers of hardware and software. For small businesses, meanwhile, being able to purchase computing power, storage capacity, and applications as needed from the cloud will help lower costs, boost efficiency, and make it easier to deliver results quickly.
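For a sense of how low that barrier now is, here is a minimal sketch using AWS’s boto3 SDK for Python; the bucket and file names are invented, and it assumes an AWS account with credentials already configured:

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# One-time setup: rent storage with a single API call, no servers to buy.
# (The bucket name is hypothetical and must be globally unique.)
s3.create_bucket(Bucket="example-smallbiz-archive")

# Pay only for what is actually stored and transferred.
s3.upload_file("q3_sales.csv", "example-smallbiz-archive", "reports/q3_sales.csv")
```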


Self-driving Vehicles

Google surprised everyone with its 2010 announcement that it had developed a fleet of seven “self-piloting” Toyota Prius hybrids capable of navigating public roadways and sensing and reacting to changes in the environment around them. Today, the idea of “autonomous vehicles” no longer feels like sci-fi fantasy. Audi, BMW, GM, Nissan, Toyota, and Volvo have all announced plans to unveil autonomous vehicles by 2020. Some experts estimate that by that year there could be as many as 10 million self-driving vehicles on the road.

The death of a Tesla driver using “autopilot” technology this past May marked the first fatality for self-driving cars. It has raised questions about the safety of autonomous vehicles and, at the very least, highlighted the need for a new legal framework to sort out questions of liability. Still, governments have strong incentives to encourage the adoption of self-driving vehicles because of their potential to ease urban congestion and drastically reduce public spending on roads, highways, and parking places. KPMG predicts that all the technological and regulatory components necessary for widespread adoption of autonomous vehicles could fall into place as early as 2025. The employment implications of that shift are huge: according to data from the U.S. Census Bureau, truck, delivery, or tractor driver is the most common occupational category in 29 of the 50 American states.


The Platform Economy

The widespread use of autonomous vehicles will have an even greater impact when paired with services like Uber and Lyft, which create online platforms for independent workers to contract out specific services to individual customers. KPMG estimates that combining autonomous vehicles in Uber or Lyft-like arrangements could reduce the number of cars in operation by as much as 90 percent.

The power of online marketplaces is not limited to the transportation sector. Online task brokers like TaskRabbit, Fiverr, USource, and Amazon’s Mechanical Turk have given rise to a new model of work that has been called the “gig economy,” the “platform economy,” or the “sharing economy.” Such platforms create a new marketplace for work by unbundling jobs into discrete tasks and connecting sellers directly with consumers. They make it possible to exchange not just services, but also assets and physical goods, as in the case of Airbnb, eBay, and Alibaba.
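As a toy illustration of that unbundling-and-matching idea (not any real platform’s API; all names and fees below are invented):

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    skill: str
    fee: float

# A "job" unbundled into discrete tasks, each sold separately.
job = [
    Task("assemble bookshelf", skill="handywork", fee=40.0),
    Task("write product blurb", skill="copywriting", fee=25.0),
]

# The platform's role: connect each task directly to an independent worker.
workers_by_skill = {"handywork": ["alice"], "copywriting": ["bob", "carol"]}

for task in job:
    worker = workers_by_skill[task.skill][0]
    print(f"{worker} <- {task.description} (${task.fee:.2f})")
```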

A recent study by the JPMorgan Chase Institute found that, as of September 2015, nearly 1 percent of U.S. adults earned income via the gig economy—up from just 0.1 percent of adults in 2012.

Many experts extol the virtues of the gig economy, pointing to the gig workers’ freedom to choose their hours and work from home. But these more flexible arrangements have a dark side. In many economies, particularly the U.S., employers shoulder the burden of providing health insurance, compensation for injury on the job, and retirement benefits. Freelancers have to take care of all those things on their own. While some highly talented stars will thrive as independent contractors, on balance, the gig economy, like advances in robotics, AI, and Big Data, gives employers the upper hand.


3D Printing

3D printing, sometimes called “additive manufacturing,” is often mentioned among the technologies that will change the way we work. Proponents predict that in the not-too-distant future, 3D printers will be able to manufacture everything from auto parts to shoes to human organs. Some think 3D printing will lead to wholesale “reshoring” of manufacturing from low-wage economies like China back to advanced economies in the West, and might ultimately eliminate millions of manufacturing jobs.

But the range of products that can be produced cost-effectively with 3D printers remains relatively limited. 3D-printed parts aren’t as strong as traditionally manufactured parts. Generally speaking, you can only print in plastic, and the plastic required for 3D printing is expensive—meaning that it makes little sense to use the technology to produce large items on a mass scale. The programming and computer modeling necessary to print unique items are time-consuming and expensive. Count me among the skeptics. Still, even if the impact falls short of the rhetoric, 3D printing is another new technology that seems more likely to eliminate jobs than create new ones.

Other Tech Trends to Watch


The social dilemma of autonomous vehicles

Here is the full MIT report, published in Science: http://science.sciencemag.org/content/352/6293/1573.full

Ethical question leaves potential buyers torn over self-driving cars, study says

Faced with two deadly options, the public want driverless vehicles to crash rather than hurt pedestrians – unless the vehicle in question is theirs

A self-driving Lexus SUV, operated by Google, after colliding with a public bus in Mountain View, California, in February 2016. Photograph: AP

In catch-22 traffic emergencies where there are only two deadly options, people generally want a self-driving vehicle to, for example, avoid a group of pedestrians and instead slam itself and its passengers into a wall, a new study says. But they would rather not be travelling in a car designed to do that.

The findings of the study, released on Thursday in the journal Science, highlight just how difficult it may be for auto companies to market those cars to a public that tends to contradict itself.

“People want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs,” Iyad Rahwan, a co-author of the study and a professor at MIT, said. “And car makers who offer such cars will sell more cars, but if everybody thinks this way then we end up in a world in which every car will look after its own passenger’s safety … and society as a whole is worse off.”

Through a series of online surveys, the authors found that people generally approve of cars that sacrifice their passengers for the greater good, such as sparing a group of pedestrians, and would like others to buy those cars, but they themselves would prefer to ride in a car that protects its passengers at all costs.

Several people working on bringing self-driving cars to market said that while the philosophical and ethical question over the two programming options is important to consider, real-life situations would be far more complex.

Brian Lathrop, a cognitive scientist who works on Volkswagen’s self-driving cars project, stressed that in real life there are likelihoods and contingencies that the academic example leaves out.

“You have to make a decision that the occupant in the vehicle is always going to be safer than the pedestrians, because they’re in a 3,000lb steel cage with all the other safety features,” said Lathrop, who was not involved in the new study.

So in a situation in which a car needs to, say, slam into a tree to avoid hitting a group of pedestrians, “obviously, you would choose to program it to go into the tree,” he said.

A spokesman for Google, whose self-driving car technology is generally seen as being the furthest along, suggested that asking about hypothetical scenarios might ignore the more important question of how to avoid deadly situations in the first place.

The problem seems to be how to get people to trust cars to consistently do the right thing if we’re not even sure we want them to do what we think is the right thing.

The study’s authors argue that since self-driving cars are expected to drastically reduce traffic fatalities, a delay in adopting the new technology could itself be deadly. Regulations requiring self-driving cars to sacrifice their passengers could move things forward, they write. But, in another catch-22, forcing the self-sacrificing programming could actually delay widespread adoption by consumers.

Susan Anderson, an ethicist at the University of Connecticut, and her husband and research partner, Michael Anderson, a computer science professor at the University of Hartford, believe the cars will be able to make the right call.

“We do believe that properly programmed machines are likely to make decisions that are more ethically justifiable than humans,” they said in an email. “Also, properly programmed self-driving cars should have information that humans may not readily have,” including precise stopping distance, whether to swerve or brake, or the likelihood of degree of harm.

How to get those cars “properly programmed”? The Andersons, who were not involved in the study, suggest having the cars learn from or be given “general ethical principles from applied ethicists”.

https://www.theguardian.com/technology/2016/jun/23/self-driving-car-safety-study-pedestrian-crashes

Tesla’s Self-Driving Car

Tesla CEO Elon Musk has made a bold prediction: Tesla Motors will have a self-driving car within two years.

“I think we have all the pieces,” Musk told Fortune, “and it’s just about refining those pieces, putting them in place, and making sure they work across a huge number of environments — and then we’re done. It’s a much easier problem than people think it is.”

Although Musk’s comments to Fortune came Monday, The Street pegged a rise in Tesla’s shares to the comments on Tuesday. The ambitious timeframe appeared to be offering support to the stock again today, with shares trading up $1.47, or 0.64 percent, at $231.42 around 7:18 a.m. PST.

Musk’s driverless-car comments may have been overshadowed initially by the achievement of SpaceX on Monday night in landing a rocket during a commercial mission for the first time. Musk is also CEO of SpaceX.

This is the most aggressive timeline Musk has mentioned. While Musk claims the problem is easier than people think it is, he doesn’t think the tech is so accessible that any hacker could create a self-driving car. Musk took the opportunity to call out hacker George Hotz, who claimed via a Bloomberg article last week that he had developed self-driving car technology that could compete with Tesla’s. Musk said he wasn’t buying it.

“But it’s not like George Hotz, a one-guy-and-three-months problem,” Musk said to Fortune. “You know, it’s more like, thousands of people for two years.”

The company went so far as to post a statement last week about Hotz’s achievement.

“We think it is extremely unlikely that a single person or even a small company that lacks extensive engineering validation capability will be able to produce an autonomous driving system that can be deployed to production vehicles,” the company stated. “It may work as a limited demo on a known stretch of road — Tesla had such a system two years ago — but then requires enormous resources to debug over millions of miles of widely differing roads.”

While Tesla is unconcerned about Hotz, the company’s new timeline may have other autonomous car developers hitting the accelerator. Tech companies like Google and Apple, in addition to automakers such as Volvo and General Motors, are all competing to be among the first to offer some form of self-driving tech. Many believe the early 2020s would be a realistic timeframe for the public to begin engaging with self-driving cars.

Just yesterday, it was reported that Google and Ford will enter into a joint venture to build self-driving vehicles with Google’s technology, according to Yahoo Autos, citing sources familiar with the plans. The official announcement is expected to come during the Consumer Electronics Show in January, but there is no manufacturing timeline.

But even if Tesla moves quickly on self-driving cars, are consumers ready for them? The Palo Alto-based carmaker’s recent Firmware 7.1 Autopilot update includes restrictions on self-driving features. The update only allows its Autosteer feature to engage when the Model S is traveling below the posted speed limit. The update came shortly after it was reported that drivers were involved in dangerous activities while the Autopilot features were engaged.

Source: http://www.bizjournals.com/sanjose/news/2015/12/23/elon-musks-bold-new-timeline-for-driverless-cars.html?ana=yahoo

Predicting the Future

Source: http://www.wired.com/2015/10/googles-lame-demo-shows-us-far-robo-car-come/

Killing the Driver

Google has been developing this technology for six years, and is taking a distinctly different approach than everyone else. Conventional automakers are rolling out features piecemeal, over the course of many years, starting with active safety features like automatic braking and lane departure warnings.

Google doesn’t give a hoot about anything less than a completely autonomous vehicle, one that reduces “driving” to little more than getting in, typing in a destination, and enjoying the ride. It wants a consumer-ready product in four years.

The Silicon Valley juggernaut is making rapid progress. Its fleet of modified Lexus SUVs and prototypes has racked up 1.2 million autonomous miles on public roads, and covers 10,000 more each week. Most of that has been done in Mountain View, and Google expanded its testing to Austin last summer.

It’s unclear how this technology will reach consumers, but Google is more likely to sell its software than manufacture its own cars. At the very least, it won’t sell this dinky prototype to the public.

Predicting the Future

As the Google car moves, its laser, camera, and radar systems constantly scan the environment around it, 360 degrees and up to 200 yards away.

“We look at the world around us, and we detect objects in the scene, we categorize them as different types,” says Dmitri Dolgov, the project’s chief engineer. The car knows the difference between people, cyclists, cars, trucks, ambulances, cones, and more. Based on those categories and its surroundings, it anticipates what they’re likely to do.

Making those predictions is likely the most crucial work the team is doing, and it’s based on the huge amount of time the cars have spent dealing with the real world. Anything one car sees is shared with every other car, and nothing is forgotten. From that data, the team builds probabilistic models for the cars to follow.

“All the miles we’ve driven and all the data that we’ve collected allowed us to build very accurate models of how different types of objects behave,” Dolgov says. “We know what to expect from pedestrians, from cyclists, from cars.”

Those are the key learnings the test drive on the roof parking lot was meant to show off. If I may anthropomorphize: The car spotted a person on foot walking near its route and figured, “You’re probably going to jaywalk.” It saw a car coming up quickly from the left and thought, “There’s a good chance you’re going to keep going and cut me off.” When the cyclist in front put his left arm out, the car understood that as a turn signal.

This is how good human drivers think. And the cars have the added advantage of better vision, quicker processing times, and the inability to get distracted, or tired, or drunk, or angry.
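To make the idea concrete, here is a crude sketch of what such a learned behavior model might look like; the classes, actions, and probabilities are all invented for illustration, and Google has not published its actual models:

```python
import random

# Per-class action frequencies, as might be estimated from logged miles.
BEHAVIOR_MODEL = {
    "pedestrian": {"keep_walking": 0.80, "jaywalk": 0.15, "stop": 0.05},
    "cyclist":    {"continue": 0.70, "turn_left": 0.20, "turn_right": 0.10},
    "car":        {"continue": 0.90, "cut_in": 0.07, "brake_hard": 0.03},
}

def predict_action(object_class: str) -> str:
    """Sample a plausible next action for a detected object."""
    actions = BEHAVIOR_MODEL[object_class]
    return random.choices(list(actions), weights=list(actions.values()))[0]

print(predict_action("cyclist"))  # e.g. "continue"
```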

Detecting Anomalies

The great challenge of making a car without a steering wheel that a human can grab is that the car must be able to handle every situation it encounters. Google acknowledges there’s no way to anticipate and model for every situation. So the team created what it calls “anomaly detection.”

If the cars see behavior or an object they can’t categorize, “they understand their own limitations,” Dolgov says. “They understand that there’s something really crazy going on and they might not be able to make really good, confident predictions about the future. So they take a very conservative approach.”

One of Google’s cars once encountered a woman in a wheelchair, armed with a broom, chasing a turkey. Seriously. Unsurprisingly, this was a first for the car. So the car did what a good human driver would have done. It slowed down, Dolgov says, and let the situation play out. Then it went along its way. Unlike a human, though, it did not make a video and post it on Instagram.
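In rough pseudocode terms, anomaly detection might reduce to a confidence threshold like the one below; this is my guess at the shape of the logic, not Google’s actual implementation:

```python
CONFIDENCE_THRESHOLD = 0.6  # invented value

def plan(class_scores: dict) -> str:
    """Fall back to conservative behavior when no object class is
    recognized confidently, instead of trusting a shaky prediction."""
    best_class = max(class_scores, key=class_scores.get)
    if class_scores[best_class] < CONFIDENCE_THRESHOLD:
        return "anomaly: slow down and let the situation play out"
    return f"predict behavior for {best_class} and proceed"

# A turkey-chasing wheelchair user matches no known class well:
print(plan({"pedestrian": 0.35, "cyclist": 0.20, "car": 0.10}))
```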

Accident Causes of the Google Self-Driving Car

Source: https://medium.com/backchannel/the-view-from-the-front-seat-of-the-google-self-driving-car-46fc9f3e6088


The View from the Front Seat of the Google Self-Driving Car

After 1.7 million miles we’ve learned a lot — not just about our system but how humans drive, too.

About 33,000 people die on America’s roads every year. That’s why so much of the enthusiasm for self-driving cars has focused on their potential to reduce accident rates. As we continue to work toward our vision of fully self-driving vehicles that can take anyone from point A to point B at the push of a button, we’re thinking a lot about how to measure our progress and our impact on road safety.

One of the most important things we need to understand in order to judge our cars’ safety performance is “baseline” accident activity on typical suburban streets. Quite simply, because many incidents never make it into official statistics, we need to find out how often we can expect to get hit by other drivers. Even when our software and sensors can detect a sticky situation and take action earlier and faster than an alert human driver, sometimes we won’t be able to overcome the realities of speed and distance; sometimes we’ll get hit just waiting for a light to change. And that’s important context for communities with self-driving cars on their streets; although we wish we could avoid all accidents, some will be unavoidable.

The most common accidents our cars are likely to experience in typical day to day street driving — light damage, no injuries — aren’t well understood because they’re not reported to police. Yet according to National Highway Traffic Safety Administration (NHTSA) data, these incidents account for 55% of all crashes. It’s hard to know what’s really going on out on the streets unless you’re doing miles and miles of driving every day. And that’s exactly what we’ve been doing with our fleet of 20+ self-driving vehicles and team of safety drivers, who’ve driven 1.7 million miles (manually and autonomously combined). The cars have self-driven nearly a million of those miles, and we’re now averaging around 10,000 self-driven miles a week (a bit less than a typical American driver logs in a year), mostly on city streets.

In the spirit of helping all of us be safer drivers, we wanted to share a few patterns we’ve seen. A lot of this won’t be a surprise, especially if you already know that driver error causes 94% of crashes.

If you spend enough time on the road, accidents will happen whether you’re in a car or a self-driving car. Over the 6 years since we started the project, we’ve been involved in 11 minor accidents (light damage, no injuries) during those 1.7 million miles of autonomous and manual driving with our safety drivers behind the wheel, and not once was the self-driving car the cause of the accident.

Rear-end crashes are the most frequent accidents in America, and often there’s little the driver in front can do to avoid getting hit; we’ve been hit from behind seven times, mainly at traffic lights but also on the freeway. We’ve also been side-swiped a couple of times and hit by a car rolling through a stop sign. And as you might expect, we see more accidents per mile driven on city streets than on freeways; we were hit 8 times in many fewer miles of city driving. All the crazy experiences we’ve had on the road have been really valuable for our project. We have a detailed review process and try to learn something from each incident, even if it hasn’t been our fault.
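The rate implied by those numbers is easy to compute, though it is rough, since it lumps manual and autonomous miles together just as the post does:

```python
# Rough rate implied by the post's own numbers (manual + autonomous miles).
accidents = 11
miles = 1_700_000
rate = accidents / (miles / 1_000_000)
print(f"{rate:.1f} minor accidents per million miles")  # ~6.5
```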

Not only are we developing a good understanding of minor accident rates on suburban streets, we’ve also identified patterns of driver behavior (lane-drifting, red-light running) that are leading indicators of significant collisions. Those behaviors don’t ever show up in official statistics, but they create dangerous situations for everyone around them.

Lots of people aren’t paying attention to the road. In any given daylight moment in America, there are 660,000 people behind the wheel who are checking their devices instead of watching the road. Our safety drivers routinely see people weaving in and out of their lanes; we’ve spotted people reading books, and even one playing a trumpet. A self-driving car has people beat on this dimension of road safety: with 360-degree visibility and 100% attention out in all directions at all times, our newest sensors can keep track of other vehicles, cyclists, and pedestrians out to a distance of nearly two football fields.

Intersections can be scary places. Over the last several years, 21% of the fatalities and about 50% of the serious injuries on U.S. roads have involved intersections. And the injuries are usually to pedestrians and other drivers, not the driver running the red light. This is why we’ve programmed our cars to pause briefly after a light turns green before proceeding into the intersection — that’s often when someone will barrel impatiently or distractedly through the intersection.

In this case, a cyclist (the light blue box) got a late start across the intersection and narrowly avoided getting hit by a car making a left turn (the purple box entering the intersection) who didn’t see him and had started to move when the light turned green. Our car predicted the cyclist’s behavior (the red path) and did not start moving until the cyclist was safely across the intersection.
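The pause-after-green policy itself is simple enough to sketch; the pause length below is invented, since the post only says “briefly”:

```python
import time

GREEN_LIGHT_PAUSE_S = 1.5  # invented; the post only says "briefly"

def on_light_turned_green(intersection_is_clear) -> str:
    """Wait out late red-light runners before entering the intersection."""
    time.sleep(GREEN_LIGHT_PAUSE_S)      # the deliberate pause
    while not intersection_is_clear():   # e.g. the late-starting cyclist
        time.sleep(0.1)
    return "proceed"

# Toy usage: a crossing cyclist clears the box on the third check.
checks = iter([False, False, True])
print(on_light_turned_green(lambda: next(checks)))
```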

Turns can be trouble. We see people turning onto, and then driving on, the wrong side of the road a lot — particularly at night, it’s common for people to overshoot or undershoot the median.

In this image you can see not one, but two cars (the two purple boxes on the left of the green path are the cars you can see in the photo) coming toward us on the wrong side of the median; this happened at night on one of Mountain View’s busiest boulevards.

Other times, drivers do very silly things when they realize they’re about to miss their turn.

A car (the purple box touching the green rectangles with an exclamation mark over it) decided to make a right turn from the lane to our left, cutting sharply across our path. The green rectangles, which we call a “fence,” indicate our car is going to slow down to avoid the car making this crazy turn.

And other times, cars seem to behave as if we’re not there. In the image below, a car in the leftmost turn lane (the purple box with a red fence through it) took the turn wide and cut off our car. In this case, the red fence indicates our car is stopping and avoiding the other vehicle.

These experiences (and countless others) have only reinforced for us the challenges we all face on our roads today. We’ll continue to drive thousands of miles so we can all better understand the all too common incidents that cause many of us to dislike day to day driving — and we’ll continue to work hard on developing a self-driving car that can shoulder this burden for us.

Chris Urmson is director of Google’s self-driving car program.

Delphi’s Self-Driving Car

Do you know Delphi (formerly Delphi Packard)? It is one of the world’s biggest automotive suppliers, much like Magna (formerly Magna Steyr).

Here is a great story that outlines why the next five years in automotive engineering will dramatically change the whole picture of how we see cars, and what the next big thing in automated driving will be.


“Google gets most of the attention when it comes to self-driving cars. And when it isn’t getting all the love, people focus on the efforts of premier automakers like Audi and Tesla. But the autonomous vehicle that makes human driving a quaint pastime may well come from an auto industry stalwart many people have never heard of: Delphi.

Delphi is one of the world’s largest automotive suppliers and has been working with automakers almost as long as there have been automakers. And it’s got a solid history of innovation. Among other things, it built the first electric starter in 1911, the first in-dash car radio in 1936, and the first integrated radio-navi system in 1994. Now it’s built a self-driving car, but it won’t be sold to the public. This robo-car, based on an Audi, is a shopping catalog for automakers. The car contains every element needed to build a truly autonomous system, elements Delphi will happily sell.

In other words, it’s an off-the-shelf autonomous system that could help automakers catch up with Google.

The Jump Forward

Delphi has a long history in passive safety systems—things like airbag deployment electronics—and began the progression to active safety systems that strive to prevent rather than merely mitigate crashes. Delphi got in the game in 1999, when Jaguar used Delphi’s radar system in the adaptive cruise control first offered on the 2000 XKR. Today, Delphi offers a range of active safety systems, from automatic emergency braking to blind spot detection to autonomous lane keeping.


Until now, those systems have operated independently of one another. Delphi wanted to make them work together. “The reality of automated driving is already here,” says John Absmeier, director of Delphi’s R&D lab in Silicon Valley. “It’s just been labeled mostly as active safety or advanced driver assistance. But really, when you take that one step further and marry it with some intelligent software, then you make automation. And you make cars that can navigate themselves.”

That marriage has come through a partnership with Ottomatika, a company spun out of Carnegie Mellon’s autonomous vehicle research efforts to commercialize its technology. Delphi provides the organs—the sensors and software for controlling the car. Ottomatika adds a central brain and nervous system—the control algorithm to bring all the data from sensors into one place and tell the car what to do. The result is Delphi’s Automated Driving System, a name so boring you’ve likely already forgotten it.
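Here is a toy sketch of that division of labor, with sensor feeds (the organs) flowing into a single control algorithm (the brain); every name and the decision rule are invented for illustration, since neither Delphi nor Ottomatika has published theirs:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # "radar", "lidar", or "camera"
    distance_m: float
    bearing_deg: float

def central_brain(detections: list) -> str:
    """Fuse all sensor reports into one decision about what the car does."""
    nearest = min(detections, key=lambda d: d.distance_m)
    if nearest.distance_m < 20:
        return "brake"
    if abs(nearest.bearing_deg) < 10 and nearest.distance_m < 80:
        return "slow and keep lane"
    return "cruise"

frame = [
    Detection("radar", 65.0, 2.0),
    Detection("lidar", 64.2, 2.4),     # agrees with the radar: same object
    Detection("camera", 140.0, -30.0),
]
print(central_brain(frame))  # -> "slow and keep lane"
```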

Work Like a Robot, Drive Like a Nun

The name is lame (even if the unintended acronym, DADS, is pretty funny), but at least Delphi had the sense to pack the tech into a 2014 Audi SQ5, which it chose simply because it’s “really cool,” Absmeier says. (The company changes up its showcase vehicles; earlier this year it rolled into CES with a Tesla Model S and Fiat 500.) At first glance, the car seems stock, but it’s actually covered in high-tech sensors.

A camera in the windshield looks for lane lines, road signs, and traffic lights. Delphi slapped a midrange radar, with a range of about 80 meters, on each corner. There’s another at the front and a sixth on the rear. That’s in addition to the long-range radars on the front and back, which look 180 meters ahead and behind. They’re all hidden behind the bodywork, but the LIDAR units on each corner need a clear view. So Delphi put them behind acrylic windows. “We tried to make it look pretty,” Absmeier says. The Audi designer who styled the SQ5 might consider the changed look an affront, but he’s probably not as annoyed as the Lexus employee who sees Google sticking a spinning LIDAR on the roof of the RX450h like a police siren.

To give the computer command of the SUV, engineers tapped into the electronic throttle control and steering, and added an actuator to control the brakes. The interior is essentially as it appears in an Audi showroom but for the addition of an autonomous mode button, which you twist to turn on and push to turn off.


Riding in the SQ5 in autonomous mode felt like being driven around by a nun (or at least like the former nun whose car I’ve traveled in a few times). It’s super conservative, accelerating slowly and braking early. No speeding, even on the highway to match the speed of traffic. (It’s likely this was the first time I was in a car that followed the speed limit on a highway off ramp.) It doesn’t turn right on red, which subjects the test drivers to honking and the occasional middle finger from annoyed humans. These are settings Delphi’s engineers could easily change, but for now they’re playing it safe. Very safe.

The emphasis on caution aside, the car drives remarkably well, even adjusting its position within its lane when neighboring cars get a bit close. In a 30-minute drive that included side roads, main thoroughfares, and Highway 101, the system faltered just twice. Accelerating after a light turned green, the car suddenly hit the brakes, apparently spooked by a car approaching quickly from the right. Pulling into Delphi’s parking lot, it hit a speed bump without slowing down. (Obstacles that are close to the ground, like speed bumps and curbs, are among the hardest things for the car’s sensors to pick up, Absmeier says.) The human in the driver’s seat, Delphi systems engineer Tory Smith, took the controls just once, to make a quick lane change the car was too timid to execute. That kind of caution is what Delphi wants. “If everything’s working, it should be boring,” Absmeier says. “We want boring.”

The Modular Approach

Google is taking a “moonshot” approach, aiming to put a fully autonomous car on the market within five years. Delphi, despite having developed an impressive system, is more circumspect about the prospect of eliminating the role of humans in the operation of a motor vehicle. “There’s a lot of romantic speculation—hype—around in the industry about that now,” says Delphi CTO Jeffrey Owens. “I don’t know when we’ll get there, or if we’ll ever get there.”

And while Delphi likes the idea of one day selling a drop-in autonomous system, Absmeier says that’s not really the point of this project. “The platform enables us to build out all those different components that are required to make an automated driving system in a car, and OEMs can either take the whole package or they can say I want that algorithm and that sensor and that controller, or whatever it is that they need.”

A flexible system is the smart play, Owens says, because automakers aren’t yet sure exactly what they want to offer. “They don’t know what path they’re gonna go down. They don’t know what governments are going to require, they don’t know what governments are going to not allow. They don’t know what consumers will pay for … They don’t know what insurance companies will incentivize and what they don’t care about. They don’t know what will help them in JD Power and what will hurt them in JD Power.”

That means that whether an automaker is shopping for systems to put in a luxury or bargain car, high volume or low, to meet regulations in the US or China, it can pick and choose the elements of Delphi’s system that it needs. And that’s good for Delphi, which is already in discussions with customers to sell elements from the self-driving platform in the next two years.”

Source: http://www.wired.com/2014/11/delphi-automated-driving-system/

Google’s latest chapter for the self-driving car: mastering city street driving

Jaywalking pedestrians. Cars lurching out of hidden driveways. Double-parked delivery trucks blocking your lane and your view. At a busy time of day, a typical city street can leave even experienced drivers sweaty-palmed and irritable. We all dream of a world in which city centers are freed of congestion from cars circling for parking and have fewer intersections made dangerous by distracted drivers. That’s why over the last year we’ve shifted the focus of the Google self-driving car project onto mastering city street driving.

Since our last update, we’ve logged thousands of miles on the streets of our hometown of Mountain View, Calif. A mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area. We’ve improved our software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted.

Here’s a video showing how our vehicle navigates some common scenarios near the Googleplex:


As it turns out, what looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer. As we’ve encountered thousands of different situations, we’ve built software models of what to expect, from the likely (a car stopping at a red light) to the unlikely (blowing through it). We still have lots of problems to solve, including teaching the car to drive more streets in Mountain View before we tackle another town, but thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously.

Our vehicles have now logged nearly 700,000 autonomous miles, and with every passing mile we’re growing more optimistic that we’re heading toward an achievable goal—a vehicle that operates fully without human intervention.