The Robot Car of Tomorrow May Just Be Programmed to Hit You

Image: U.S. DOT

Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require deliberate and systematic discrimination against, say, large vehicles as preferred collision targets. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

Is This a Realistic Problem?

Some road accidents are unavoidable, and even autonomous cars can’t escape that fate. A deer might dart out in front of you, or the car in the next lane might suddenly swerve into you. Short of defying physics, a crash is imminent. An autonomous or robot car, though, could make things better.

While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we’re even aware of danger. It can make split-second choices to optimize crashes–that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases.

In constructing the edge cases here, we are not trying to simulate actual conditions in the real world. These scenarios would be very rare, if realistic at all, but they nonetheless illuminate hidden or latent problems in normal cases. From the above scenario, we can see that crash-optimization algorithms can be biased in troubling ways, and this is at least a background concern any time we make a value judgment that one thing is better to sacrifice than another.

In previous years, robot cars have been confined largely to highway or freeway environments. This is a relatively simple setting, in that drivers don’t need to worry so much about pedestrians and the countless surprises of city driving. But Google recently announced that it has taken the next step: testing its automated car on city streets. As their operating environment becomes more dynamic and dangerous, robot cars will confront harder choices, be it running into objects or even people.

Ethics Is About More Than Harm

The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much higher statistical odds that the biker without a helmet would die, and killing someone is surely the outcome auto manufacturers most desperately want to avoid.

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person behaved less responsibly by not wearing a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists to not wear helmets, in order to not stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety may suffer, such as Volvo and Mercedes Benz, if customers want to avoid being the robot car’s target of choice.

The Role of Moral Luck

An elegant solution to these vexing dilemmas is to simply not make a deliberate choice. We could design an autonomous car to make certain decisions through a random-number generator. That is, if it’s ethically problematic to choose which one of two things to crash into–a large SUV versus a compact car, or a motorcyclist with a helmet versus one without, and so on–then why make a calculated choice at all?

A robot car’s programming could generate a random number; and if it is an odd number, the car will take one path, and if it is an even number, the car will take the other path. This avoids the possible charge that the car’s programming is discriminatory against large SUVs, responsible motorcyclists, or anything else.
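As a sketch of how simple such a design would be (a hypothetical illustration; the path names are invented and nothing here comes from any actual vehicle software), the whole "decision" reduces to a coin flip:

```python
import random

def choose_path(paths):
    """Pick a crash trajectory uniformly at random, so that no class of
    object (large SUV, helmeted rider, and so on) is systematically favored."""
    return random.choice(paths)

# Hypothetical usage: two unavoidable-collision options.
print(choose_path(["swerve_left", "swerve_right"]))
```

The point of the sketch is that randomness removes the targeting logic entirely: there is no criterion left for an aggrieved owner to object to.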

This randomness also doesn’t seem to introduce anything new into our world: luck is all around us, both good and bad. A random decision also better mimics human driving, insofar as split-second emergency reactions are unpredictable and largely unreasoned, since there’s usually no time to deliberate.

Yet, the random-number engine may be inadequate for at least a few reasons. First, it is not obviously a benefit to mimic human driving, since a key reason for creating autonomous cars in the first place is that they should be able to make better decisions than we do. Human error, distracted driving, drunk driving, and so on are responsible for 90 percent or more of car accidents today, and 32,000-plus people die on U.S. roads every year.

Second, while human drivers may be forgiven for making a poor split-second reaction–for instance, crashing into a Pinto that’s prone to explode, instead of a more stable object–robot cars won’t enjoy that freedom. Programmers have all the time in the world to get it right. It’s the difference between premeditated murder and involuntary manslaughter.

Third, for the foreseeable future, what’s important isn’t just arriving at the “right” answers to difficult ethical dilemmas, as nice as that would be. It’s also about being thoughtful about your decisions and able to defend them–about showing your moral math. In ethics, the process of thinking through a problem is as important as the result. Making decisions randomly evades that responsibility. Instead of thoughtful, the decisions are thoughtless, and this may be worse than reflexive human judgments that lead to bad outcomes.

Can We Know Too Much?

A less drastic solution would be to hide certain information that might enable inappropriate discrimination–a “veil of ignorance”, so to speak. As it applies to the above scenarios, this could mean not ascertaining the make or model of other vehicles, or the presence of helmets and other safety equipment, even if technology could let us, such as vehicle-to-vehicle communications. If we did that, there would be no basis for bias.

Not using that information in crash-optimization calculations may not be enough. To be in the ethical clear, autonomous cars may need to not collect that information at all. If they possess the information, and using it could have minimized harm or saved a life, there could be legal liability for failing to use it. Imagine a similar public outrage if a national intelligence agency had credible information about a terrorist plot but failed to use it to prevent the attack.

A problem with this approach, however, is that auto manufacturers and insurers will want to collect as much data as technically possible, to better understand robot-car crashes and for other purposes, such as novel forms of in-car advertising. So it’s unclear whether voluntarily turning a blind eye to key information is realistic, given the strong temptation to gather as much data as technology will allow.

So, Now What?

In future autonomous cars, crash-avoidance features alone won’t be enough. Sometimes an accident will be unavoidable as a matter of physics, for myriad reasons–such as insufficient time to press the brakes, technology errors, misaligned sensors, bad weather, and just pure bad luck. Therefore, robot cars will also need to have crash-optimization strategies.

To optimize crashes, programmers would need to design cost-functions–algorithms that assign and calculate the expected costs of various possible options, selecting the one with the lowest cost–that potentially determine who gets to live and who gets to die. And this is fundamentally an ethics problem, one that demands care and transparency in reasoning.
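In the abstract, a cost function of the kind described here can be sketched in a few lines (a minimal illustration only; the maneuver names, probabilities, and harm values are invented, not drawn from any real system):

```python
def expected_cost(option):
    """Expected cost = sum over possible outcomes of probability * harm.
    The (probability, harm) pairs below are invented for illustration."""
    return sum(p * harm for p, harm in option["outcomes"])

def least_cost_option(options):
    """Select the maneuver with the lowest expected cost -- the step that,
    in the article's terms, decides who bears the harm of the crash."""
    return min(options, key=expected_cost)

# Hypothetical maneuvers, each with (probability, harm) outcome pairs.
options = [
    {"name": "swerve_left",  "outcomes": [(0.9, 10), (0.1, 100)]},  # cost 19
    {"name": "swerve_right", "outcomes": [(0.5, 10), (0.5, 100)]},  # cost 55
]
print(least_cost_option(options)["name"])  # → swerve_left
```

The ethics problem lives entirely in the numbers: whoever assigns the harm values for "hit the helmeted rider" versus "hit the unhelmeted rider" has made the moral choice, however neutral the arithmetic looks.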

It doesn’t matter much that these are rare scenarios. Often, the rare scenarios are the most important ones, making for breathless headlines. In the U.S., a traffic fatality occurs about once every 100 million vehicle-miles traveled. That means you could drive for more than 100 lifetimes and never be involved in a fatal crash. Yet these rare events are exactly what we’re trying to avoid by developing autonomous cars, as Chris Gerdes at Stanford’s School of Engineering reminds us.
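The "100 lifetimes" figure checks out as back-of-the-envelope arithmetic (the per-driver mileage and driving-years figures below are assumptions of mine, not from the article):

```python
# One traffic fatality per ~100 million vehicle-miles (the article's figure).
fatality_interval_miles = 100_000_000

miles_per_year = 13_000   # assumed typical annual U.S. driver mileage
driving_years = 75        # assumed driving years in a lifetime

lifetime_miles = miles_per_year * driving_years          # 975,000 miles
lifetimes = fatality_interval_miles / lifetime_miles
print(round(lifetimes))   # → 103
```

So at roughly a million miles driven per lifetime, the fatality interval spans about a hundred lifetimes of driving.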

Again, the above scenarios are not meant to simulate real-world conditions; they’re thought experiments–something like scientific experiments–meant to simplify the issues in order to isolate and study certain variables. Here, the variable is the role of ethics, specifically discrimination and justice, in crash-optimization strategies more broadly.

The larger challenge, though, isn’t just thinking through ethical dilemmas. It’s also about setting accurate expectations with users and the general public, who might otherwise find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry leans toward will not satisfy everyone.

Ethics and expectations are challenges common to all automotive manufacturers and tier-one suppliers who want to play in this emerging field, not just particular companies. As the first step toward solving these challenges, creating an open discussion about ethics and autonomous cars can help raise public and industry awareness of the issues, defusing outrage (and therefore large lawsuits) when bad luck or fate crashes into us.

Source: http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you

Auto Correct

Has the self-driving car at last arrived?

by Burkhard Bilger

The Google car knows every turn. It never gets drowsy or distracted, or wonders who has the right-of-way.

Illustration by Harry Campbell.

Human beings make terrible drivers. They talk on the phone and run red lights, signal to the left and turn to the right. They drink too much beer and plow into trees or veer into traffic as they swat at their kids. They have blind spots, leg cramps, seizures, and heart attacks. They rubberneck, hotdog, and take pity on turtles, cause fender benders, pileups, and head-on collisions. They nod off at the wheel, wrestle with maps, fiddle with knobs, have marital spats, take the curve too late, take the curve too hard, spill coffee in their laps, and flip over their cars. Of the ten million accidents that Americans are in every year, nine and a half million are their own damn fault.

A case in point: The driver in the lane to my right. He’s twisted halfway around in his seat, taking a picture of the Lexus that I’m riding in with an engineer named Anthony Levandowski. Both cars are heading south on Highway 880 in Oakland, going more than seventy miles an hour, yet the man takes his time. He holds his phone up to the window with both hands until the car is framed just so. Then he snaps the picture, checks it onscreen, and taps out a lengthy text message with his thumbs. By the time he puts his hands back on the wheel and glances up at the road, half a minute has passed.

Levandowski shakes his head. He’s used to this sort of thing. His Lexus is what you might call a custom model. It’s surmounted by a spinning laser turret and knobbed with cameras, radar, antennas, and G.P.S. It looks a little like an ice-cream truck, lightly weaponized for inner-city work. Levandowski used to tell people that the car was designed to chase tornadoes or to track mosquitoes, or that he belonged to an élite team of ghost hunters. But nowadays the vehicle is clearly marked: “Self-Driving Car.”

Every week for the past year and a half, Levandowski has taken the Lexus on the same slightly surreal commute. He leaves his house in Berkeley at around eight o’clock, waves goodbye to his fiancée and their son, and drives to his office in Mountain View, forty-three miles away. The ride takes him over surface streets and freeways, old salt flats and pine-green foothills, across the gusty blue of San Francisco Bay, and down into the heart of Silicon Valley. In rush-hour traffic, it can take two hours, but Levandowski doesn’t mind. He thinks of it as research. While other drivers are gawking at him, he is observing them: recording their maneuvers in his car’s sensor logs, analyzing traffic flow, and flagging any problems for future review. The only tiresome part is when there’s roadwork or an accident ahead and the Lexus insists that he take the wheel. A chime sounds, pleasant yet insistent, then a warning appears on his dashboard screen: “In one mile, prepare to resume manual control.”

Levandowski is an engineer at Google X, the company’s semi-secret lab for experimental technology. He turned thirty-three last March but still has the spindly build and nerdy good nature of the kids in my high-school science club. He wears black frame glasses and oversized neon sneakers, has a long, loping stride—he’s six feet seven—and is given to excitable talk on fantastical themes. Cybernetic dolphins! Self-harvesting farms! Like a lot of his colleagues in Mountain View, Levandowski is equal parts idealist and voracious capitalist. He wants to fix the world and make a fortune doing it. He comes by these impulses honestly: his mother is a French diplomat, his father an American businessman. Although Levandowski spent most of his childhood in Brussels, his English has no accent aside from a certain absence of inflection—the bright, electric chatter of a processor in overdrive. “My fiancée is a dancer in her soul,” he told me. “I’m a robot.”

What separates Levandowski from the nerds I knew is this: his wacky ideas tend to come true. “I only do cool shit,” he says. As a freshman at Berkeley, he launched an intranet service out of his basement that earned him fifty thousand dollars a year. As a sophomore, he won a national robotics competition with a machine made out of Legos that could sort Monopoly money—a fair analogy for what he’s been doing for Google lately. He was one of the principal architects of Street View and the Google Maps database, but those were just warmups. “The Wright Brothers era is over,” Levandowski assured me, as the Lexus took us across the Dumbarton Bridge. “This is more like Charles Lindbergh’s plane. And we’re trying to make it as robust and reliable as a 747.”

Not everyone finds this prospect appealing. As a commercial for the Dodge Charger put it two years ago, “Hands-free driving, cars that park themselves, an unmanned car driven by a search-engine company? We’ve seen that movie. It ends with robots harvesting our bodies for energy.” Levandowski understands the sentiment. He just has more faith in robots than most of us do. “People think that we’re going to pry the steering wheel from their cold, dead hands,” he told me, but they have it exactly wrong. Someday soon, he believes, a self-driving car will save your life.

The Google car is an old-fashioned sort of science fiction: this year’s model of last century’s make. It belongs to the gleaming, chrome-plated age of jet packs and rocket ships, transporter beams and cities beneath the sea, of a predicted future still well beyond our technology. In 1939, at the World’s Fair in New York, visitors stood in lines up to two miles long to see the General Motors Futurama exhibit. Inside, a conveyor belt carried them high above a miniature landscape, spread out beneath a glass dome. Its suburbs and skyscrapers were laced together by superhighways full of radio-guided cars. “Does it seem strange? Unbelievable?” the announcer asked. “Remember, this is the world of 1960.”

Not quite. Skyscrapers and superhighways made the deadline, but driverless cars still putter along in prototype. Human beings, as it turns out, aren’t easy to improve upon. For every accident they cause, they avoid a thousand others. They can weave through tight traffic and anticipate danger, gauge distance, direction, pace, and momentum. Americans drive nearly three trillion miles a year, I was told by Ron Medford, a former deputy administrator of the National Highway Traffic Safety Administration who now works for Google. It’s no wonder that we have thirty-two thousand fatalities along the way, he said. It’s a wonder the number is so low.

Levandowski keeps a collection of vintage illustrations and newsreels on his laptop, just to remind him of all the failed schemes and fizzled technologies of the past. When he showed them to me one night at his house, his face wore a crooked grin, like a father watching his son strike out in Little League. From 1957: A sedan cruises down a highway, guided by circuits in the road, while a family plays dominoes inside. “No traffic jam . . . no collisions . . . no driver fatigue.” From 1977: Engineers huddle around a driverless Ford on a test track. “Cars like this one may be on the nation’s roads by the year 2000!” Levandowski shook his head. “We didn’t come up with this idea,” he said. “We just got lucky that the computers and sensors were ready for us.”

Almost from the beginning, the field divided into two rival camps: smart roads and smart cars. General Motors pioneered the first approach in the late nineteen-fifties. Its Firebird III concept car—shaped like a jet fighter, with titanium tail fins and a glass-bubble cockpit—was designed to run on a test track embedded with an electrical cable, like the slot on a toy speedway. As the car passed over the cable, a receiver in its front end picked up a radio signal and followed it around the curve. Engineers at Berkeley later went a step further: they spiked the track with magnets, alternating their polarity in binary patterns to send messages to the car—“Slow down, sharp curve ahead.” Systems like these were fairly simple and reliable, but they had a chicken-and-egg problem. To be useful, they had to be built on a large scale; to be built on a large scale, they had to be useful. “We don’t have the money to fix potholes,” Levandowski says. “Why would we invest in putting wires in the road?”

Smart cars were more flexible but also more complex. They needed sensors to guide them, computers to steer them, digital maps to follow. In the nineteen-eighties, a German engineer named Ernst Dickmanns, at the Bundeswehr University in Munich, equipped a Mercedes van with video cameras and processors, then programmed it to follow lane lines. Soon it was steering itself around a track. By 1995, Dickmanns’s car was able to drive on the Autobahn from Munich to Odense, Denmark, going up to a hundred miles at a stretch without assistance. Surely the driverless age was at hand! Not yet. Smart cars were just clever enough to get drivers into trouble. The highways and test tracks they navigated were strictly controlled environments. The instant more variables were added—a pedestrian, say, or a traffic cop—their programming faltered. Ninety-eight per cent of driving is just following the dotted line. It’s the other two per cent that matters.

“There was no way, before 2000, to make something interesting,” the roboticist Sebastian Thrun told me. “The sensors weren’t there, the computers weren’t there, and the mapping wasn’t there. Radar was a device on a hilltop that cost two hundred million dollars. It wasn’t something you could buy at Radio Shack.” Thrun, who is forty-six, is the founder of the Google Car project. A wunderkind from the west German city of Solingen, he programmed his first driving simulator at the age of twelve. Slender and tan, with clear blue eyes and a smooth, seemingly boneless gait, he looks as if he just stepped off a dance floor in Ibiza. And yet, like Levandowski, he has a gift for seeing things through a machine’s eyes—for intuiting the logic by which it might apprehend the world.

When Thrun first arrived in the United States, in 1995, he took a job at the country’s leading center for driverless-car research: Carnegie Mellon University. He went on to build robots that explored mines in Virginia, guided visitors through the Smithsonian, and chatted with patients at a nursing home. What he didn’t build was driverless cars. Funding for private research in the field had dried up by then. And though Congress had set a goal that a third of all ground combat vehicles be autonomous by 2015, little had come of the effort. Every so often, Thrun recalls, military contractors, funded by the Defense Advanced Research Projects Agency, would roll out their latest prototype. “The demonstrations I saw mostly ended in crashes and breakdowns in the first half mile,” he told me. “DARPA was funding people who weren’t solving the problem. But they couldn’t tell if it was the technology or the people. So they did this crazy thing, which was really visionary.”

They held a race.

The first DARPA Grand Challenge took place in the Mojave Desert on March 13, 2004. It offered a million-dollar prize for what seemed like a simple task: build a car that can drive a hundred and forty-two miles without human intervention. Ernst Dickmanns’s car had gone similar distances on the Autobahn, but always with a driver in the seat to take over in the tricky stretches. The cars in the Grand Challenge would be empty, and the road would be rough: from Barstow, California, to Primm, Nevada. Instead of smooth curves and long straightaways, it had rocky climbs and hairpin turns; instead of road signs and lane lines, G.P.S. waypoints. “Today, we could do it in a few hours,” Thrun told me. “But at the time it felt like going to the moon in sneakers.”

Levandowski first heard about it from his mother. She’d seen a notice for the race when it was announced online, in 2002, and recalled that her son used to play with remote-control cars as a boy, crashing them into things on his bedroom floor. Was this so different? Levandowski was now a student at Berkeley, in the industrial-engineering department. When he wasn’t studying or rowing crew or winning Lego competitions, he was casting about for cool new shit to build—for a profit, if possible. “If he’s making money, it’s his confirmation that he’s creating value,” his friend Randy Miller told me. “I remember, when we were in college, we were at his house one day, and he told me that he’d rented out his bedroom. He’d put up a wall in his living room and was sleeping on a couch in one half, next to a big server tower that he’d built. I said, ‘Anthony, what the hell are you doing? You’ve got plenty of money. Why don’t you get your own place?’ And he said, ‘No. Until I can move to a stateroom on a 747, I want to live like this.’ ”

DARPA’s rules were vague on the subject of vehicles: anything that could drive itself would do. So Levandowski made a bold decision. He would build the world’s first autonomous motorcycle. This seemed like a stroke of genius at the time. (Miller says that it came to them in a hot tub in Tahoe, which sounds about right.) Good engineering is all about gaming the system, Levandowski says—about sidestepping obstacles rather than trying to run over them. His favorite example is from a robotics contest at M.I.T. in 1991. Tasked with building a machine that could shoot the most Ping-Pong balls into a tube, the students came up with dozens of ingenious contraptions. The winner, though, was infuriatingly simple: it had a mechanical arm reach over, drop a ball into the tube, then cover it up so that no others could get in. It won the contest in a single move. The motorcycle could be like that, Levandowski thought: quicker off the mark than a car and more maneuverable. It could slip through tighter barriers and drive just as fast. Also, it was a good way to get back at his mother, who’d never let him ride motorcycles as a kid. “Fine,” he thought. “I’ll just make one that rides itself.”

The flaw in this plan was obvious: a motorcycle can’t stand up on its own. It needs a rider to balance it—or else a complex, computer-controlled system of shafts and motors to adjust its position every hundredth of a second. “Before you can drive ten feet you have to do a year of engineering,” Levandowski says. The other racers had no such problem. They also had substantial academic and corporate backing: the Carnegie Mellon team was working with General Motors, Caltech with Northrop Grumman, Ohio State with Oshkosh trucking. When Levandowski went to the Berkeley faculty with his idea, the reaction was, at best, bemused disbelief. His adviser, Ken Goldberg, told him frankly that he had no chance of winning. “Anthony is probably the most creative undergraduate I’ve encountered in twenty years,” he told me. “But this was a very great stretch.”

Levandowski was unfazed. Over the next two years, he made more than two hundred cold calls to potential sponsors. He gradually scraped together thirty thousand dollars from Raytheon, Advanced Micro Devices, and others. (No motorcycle company was willing to put its name on the project.) Then he added a hundred thousand dollars of his own. In the meantime, he went about poaching the faculty’s graduate students. “He paid us in burritos,” Charles Smart, now a professor of mathematics at M.I.T., told me. “Always the same burritos. But I remember thinking, I hope he likes me and lets me work on this.” Levandowski had that effect on people. His mad enthusiasm for the project was matched only by his technical grasp of its challenges—and his willingness to go to any lengths to meet them. At one point, he offered Smart’s girlfriend and future wife five thousand dollars to break up with him until the project was done. “He was fairly serious,” Smart told me. “She hated the motorcycle project.”

There came a day when Goldberg realized that half his Ph.D. students had been working for Levandowski. They’d begun with a Yamaha dirt bike, made for a child, and stripped it down to its skeleton. They added cameras, gyros, G.P.S. modules, computers, roll bars, and an electric motor to turn the wheel. They wrote tens of thousands of lines of code. The videos of their early test runs, edited together, play like a jittery reel from “The Benny Hill Show”: bike takes off, engineers jump up and down, bike falls over—more than six hundred times in a row. “We built the bike and rebuilt the bike, just sort of groping in the dark,” Smart told me. “It’s like one of my colleagues once said: ‘You don’t understand, Charlie, this is robotics. Nothing actually works.’ ”

Finally, a year into the project, a Russian engineer named Alex Krasnov cracked the code. They’d thought that stability was a complex, nonlinear problem, but it turned out to be fairly simple. When the bike tipped to one side, Krasnov had it steer ever so slightly in the same direction. This created centrifugal acceleration that pulled the bike upright again. By doing this over and over, tracing tiny S-curves as it went, the motorcycle could hold to a straight line. On the video clip from that day, the bike wobbles a little at first, like a baby giraffe finding its legs, then suddenly, confidently circles the field—as if guided by an invisible hand. They called it the Ghost Rider.
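Krasnov’s insight–steer slightly into the fall so the resulting turn pushes the bike back upright–is essentially a proportional controller. A toy sketch (deliberately simplified physics with an invented gain; nothing here is from the actual Ghost Rider code):

```python
def steer_correction(lean_angle, gain=0.5):
    """Steer in the same direction as the lean; the centrifugal effect of
    the resulting turn pulls the bike back toward vertical."""
    return gain * lean_angle

# Toy simulation: each corrective steer reduces the lean, tracing the
# tiny S-curves described above until the bike holds a straight line.
lean = 10.0  # degrees, tipping to one side
for _ in range(20):
    lean -= steer_correction(lean)
print(round(lean, 3))  # → 0.0 (lean has decayed to nearly nothing)
```

In this toy model the lean halves on every step, which is why the "complex, nonlinear problem" turned out to be fairly simple: a small correction in the right direction, applied over and over, is enough.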

The Grand Challenge proved to be one of the more humbling events in automotive history. Its sole consolation lay in shared misery. None of the fifteen finalists made it past the first ten miles; seven broke down within a mile. Ohio State’s six-wheel, thirty-thousand-pound TerraMax was brought up short by some bushes; Caltech’s Chevy Tahoe crashed into a fence. Even the winner, Carnegie Mellon, earned at best a Pyrrhic victory. Its robotic Humvee, Sandstorm, drove just seven and a half miles before careering off course. A helicopter later found it beached on an embankment, wreathed in smoke, its back wheels spinning so furiously that they’d burst into flame.

As for the Ghost Rider, it managed to beat out more than ninety cars in the qualifying round—a mile-and-a-half obstacle course on the California Speedway in Fontana. But that was its high-water mark. On the day of the Grand Challenge, standing at the starting line in Barstow, half delirious with adrenaline and fatigue, Levandowski forgot to turn on the stability program. When the gun went off, the bike sputtered forward, rolled three feet, and fell over.

“That was a dark day,” Levandowski says. It took him a while to get over it—at least by his hyperactive standards. “I think I took, like, four days off,” he told me. “And then I was like, Hey, I’m not done yet! I need to go fix this!” DARPA apparently had the same thought. Three months later, the agency announced a second Grand Challenge for the following October, doubling the prize money to two million dollars. To win, the teams would have to address a daunting list of failures and shortcomings, from fried hard drives to faulty satellite equipment. But the underlying issue was always the same: as Joshua Davis later wrote in Wired, the robots just weren’t smart enough. In the wrong light, they couldn’t tell a bush from a boulder, a shadow from a solid object. They reduced the world to a giant marble maze, then got caught in the thickets between holes. They needed to raise their I.Q.

In the early nineties, Dean Pomerleau, a roboticist at Carnegie Mellon, had hit upon an unusually efficient way to do this: he let his car teach itself. Pomerleau equipped the computer in his minivan with artificial neural networks, modelled on those in the brain. As he drove around Pittsburgh, they kept track of his driving decisions, gathering statistics and formulating their own rules of the road. “When we started, the car was going about two to four miles an hour along a path through a park—you could ride a tricycle faster,” Pomerleau told me. “By the end, it was going fifty-five miles per hour on highways.” In 1996, the car steered itself from Washington, D.C., to San Diego with only minimal intervention—nearly four times as far as Ernst Dickmanns’s cars had gone a year earlier. “No Hands Across America,” Pomerleau called it.

Machine learning is an idea nearly as old as computer science—Alan Turing, one of the fathers of the field, considered it the essence of artificial intelligence. It’s often the fastest way for a computer to learn a complex behavior, but it has its drawbacks. A self-taught car can come to some strange conclusions. It may mistake the shadow of a tree for the edge of the road, or reflected headlights for lane markers. It may decide that a bag floating across a road is a solid object and swerve to avoid it. It’s like a baby in a stroller, deducing the world from the faces and storefronts that flicker by. It’s hard to know what it knows. “Neural networks are like black boxes,” Pomerleau says. “That makes people nervous, particularly when they’re controlling a two-ton vehicle.”

Computers, like children, are more often taught by rote. They’re given thousands of rules and bits of data to memorize—If X happens, do Y; avoid big rocks—then sent out to test them by trial and error. This is slow, painstaking work, but it’s easier to predict and refine than machine learning. The trick, as in any educational system, is to combine the two in proper measure. Too much rote learning can make for a plodding machine. Too much experiential learning can make for blind spots and caprice. The roughest roads in the Grand Challenge were often the easiest to navigate, because they had clear paths and well-defined shoulders. It was on the open, sandy trails that the cars tended to go crazy. “Put too much intelligence into a car and it becomes creative,” Sebastian Thrun told me.

The second Grand Challenge put these two approaches to the test. Nearly two hundred teams signed up for the race, but the top contenders were clear from the start: Carnegie Mellon and Stanford. The C.M.U. team was led by the legendary roboticist William (Red) Whittaker. (Pomerleau had left the university by then to start his own firm.) A burly, mortar-headed ex-marine, Whittaker specialized in machines for remote and dangerous locations. His robots had crawled over Antarctic ice fields and active volcanoes, and inspected the damaged nuclear reactors at Three Mile Island and Chernobyl. Seconded by a brilliant young engineer named Chris Urmson, Whittaker approached the race as a military operation, best won by overwhelming force. His team spent twenty-eight days laser-scanning the Mojave to create a computer model of its topography; then they combined those scans with satellite data to help identify obstacles. “People don’t count those who died trying,” he later told me.

The Stanford team was led by Thrun. He hadn’t taken part in the first race, when he was still just a junior faculty member at C.M.U. But by the following summer he had accepted an endowed professorship in Palo Alto. When DARPA announced the second race, he heard about it from one of his Ph.D. students, Mike Montemerlo. “His assessment of whether we should do it was no, but his body and his eyes and everything about him said yes,” Thrun recalls. “So he dragged me into it.” The contest would be a study in opposites: Thrun the suave cosmopolitan; Whittaker the blustering field marshal. Carnegie Mellon with its two military vehicles, Sandstorm and Highlander; Stanford with its puny Volkswagen Touareg, nicknamed Stanley.

It was an even match. Both teams used similar sensors and software, but Thrun and Montemerlo concentrated more heavily on machine learning. “It was our secret weapon,” Thrun told me. Rather than program the car with models of the rocks and bushes it should avoid, Thrun and Montemerlo simply drove it down the middle of a desert road. The lasers on the roof scanned the area around the car, while the camera looked farther ahead. By analyzing this data, the computer learned to identify the flat parts as road and the bumpy parts as shoulders. It also compared its camera images with its laser scans, so that it could tell what flat terrain looked like from a distance—and therefore drive a lot faster. “Every day it was the same,” Thrun recalls. “We would go out, drive for twenty minutes, realize there was some software bug, then sit there for four hours reprogramming and try again. We did that for four months.” When they started, one out of every eight pixels that the computer labelled as an obstacle was nothing of the sort. By the time they were done, the error rate had dropped to one in fifty thousand.
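Thrun and Montemerlo’s trick—letting the laser label nearby terrain, then using those labels to teach the camera what road looks like at a distance—can be caricatured in a few lines of code. This is purely an illustrative sketch, not Stanley’s actual software; the thresholds, field names, and brightness model are all invented for the example.

```python
# Illustrative sketch (not Stanley's code): laser height-variance labels
# nearby terrain patches, and those labels teach a toy camera model what
# "road" looks like, so flat ground can be recognized farther ahead.
from statistics import pvariance

def label_from_laser(heights, bumpy_threshold=0.01):
    """Label a nearby terrain patch from its laser height samples (metres):
    flat means road, bumpy means shoulder."""
    return "shoulder" if pvariance(heights) > bumpy_threshold else "road"

def train_camera_model(patches):
    """Learn a brightness boundary between laser-labelled road and shoulder.
    Each patch is a dict with laser "heights" and a camera "brightness"."""
    road = [p["brightness"] for p in patches
            if label_from_laser(p["heights"]) == "road"]
    shoulder = [p["brightness"] for p in patches
                if label_from_laser(p["heights"]) == "shoulder"]
    # Decision boundary: midpoint between the two class means.
    return (sum(road) / len(road) + sum(shoulder) / len(shoulder)) / 2

def classify_distant_pixel(brightness, boundary, road_is_darker=True):
    """Classify far-away terrain from camera brightness alone—the step
    that let the car trust flat ground beyond laser range and speed up."""
    return "road" if (brightness < boundary) == road_is_darker else "shoulder"
```

The real system, of course, worked on millions of laser points and full camera images rather than a single brightness number, which is how the error rate could fall from one in eight to one in fifty thousand.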

On the day of the race, two hours before start time, DARPA sent out the G.P.S. coördinates for the course. It was even harder than the first time: more turns, narrower lanes, three tunnels, and a mountain pass. Carnegie Mellon, with two cars to Stanford’s one, decided to play it safe. They had Highlander run at a fast clip—more than twenty miles an hour on average—while Sandstorm hung back a little. The difference was enough to cost them the race. When Highlander began to lose power because of a pinched fuel line, Stanley moved ahead. By the time it crossed the finish line, six hours and fifty-three minutes after it started, it was more than ten minutes ahead of Sandstorm and more than twenty minutes ahead of Highlander.

It was a triumph of the underdog, of brain over brawn. But less for Stanford than for the field as a whole. Five cars finished the hundred-and-thirty-two-mile course; more than twenty cars went farther than the winner had in 2004. In one year, they’d made more progress than DARPA’s contractors had in twenty. “You had these crazy people who didn’t know how hard it was,” Thrun told me. “They said, ‘Look, I have a car, I have a computer, and I need a million bucks.’ So they were doing things in their home shops, putting something together that had never been done in robotics before, and some were insanely impressive.” A team of students from Palos Verdes High School in California, led by a seventeen-year-old named Chris Seide, built a self-driving “Doom Buggy” that, Thrun recalls, could change lanes and stop at stop signs. A Ford S.U.V. programmed by some insurance-company employees from Louisiana finished just thirty-seven minutes behind Stanley. Their lead programmer had lifted his preliminary algorithms from textbooks on video-game design.

“When you look back at that first Grand Challenge, we were in the Stone Age compared to where we are now,” Levandowski told me. His motorcycle embodied that evolution. Although it never made it out of the semifinals of the second race—tripped up by some wooden boards—the Ghost Rider had become, in its way, a marvel of engineering, beating out seventy-eight four-wheeled competitors. Two years later, the Smithsonian added the motorcycle to its collection; a year after that, it added Stanley as well. By then, Thrun and Levandowski were both working for Google.

The driverless-car project occupies a lofty, garagelike space in suburban Mountain View. It’s part of a sprawling campus built by Silicon Graphics in the early nineties and repurposed by Google, the conquering army, a decade later. Like a lot of high-tech offices, it’s a mixture of the whimsical and the workaholic—candy-colored sheet metal over a sprung-steel chassis. There’s a Foosball table in the lobby, exercise balls in the sitting room, and a row of what look like clown bicycles parked out front, free for the taking. When you walk in, the first things you notice are the wacky tchotchkes on the desks: Smurfs, “Star Wars” toys, Rube Goldberg devices. The next things you notice are the desks: row after row after row, each with someone staring hard at a screen.

It had taken me two years to gain access to this place, and then only with a staff member shadowing my every step. Google guards its secrets more jealously than most. At the gourmet cafeterias that dot the campus, signs warn against “tailgaters”—corporate spies who might slink in behind an employee before the door swings shut. Once inside, though, the atmosphere shifts from vigilance to an almost missionary zeal. “We want to fundamentally change the world with this,” Sergey Brin, the co-founder of Google, told me.

Brin was dressed in a charcoal hoodie, baggy pants, and sneakers. His scruffy beard and flat, piercing gaze gave him a Rasputinish quality, dulled somewhat by his Google Glass eyewear. At one point, he asked if I’d like to try the glasses on. When I’d positioned the miniature projector in front of my right eye, a single line of text floated poignantly into view: “3:51 P.M. It’s okay.”

“As you look outside, and walk through parking lots and past multilane roads, the transportation infrastructure dominates,” Brin said. “It’s a huge tax on the land.” Most cars are used only for an hour or two a day, he said. The rest of the time, they’re parked on the street or in driveways and garages. But if cars could drive themselves, there would be no need for most people to own them. A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently, waiting at parking lots between calls. They’d be cheaper and more efficient than taxis—by some calculations, they’d use half the fuel and a fifth the road space of ordinary cars—and far more flexible than buses or subways. Streets would clear, highways shrink, parking lots turn to parkland. “We’re not trying to fit into an existing business model,” Brin said. “We are just on such a different planet.”

When Thrun and Levandowski first came to Google, in 2007, they were given a simpler task: to create a virtual map of the country. The idea came from Larry Page, the company’s other co-founder. Five years earlier, Page had strapped a video camera on his car and taken several hours of footage around the Bay Area. He’d then sent it to Marc Levoy, a computer-graphics expert at Stanford, who created a program that could paste such footage together to show an entire streetscape. Google engineers went on to jury-rig some vans with G.P.S. and rooftop cameras that could shoot in every direction. Eventually, they were able to launch a system that could show three-hundred-and-sixty-degree panoramas for any address. But the equipment was unreliable. When Thrun and Levandowski came on board, they helped the team retool and reprogram. Then they equipped a hundred cars and sent them all over the United States.

Google Street View has since spread to more than a hundred countries. It’s both a practical tool and a kind of magic trick—a spyglass onto distant worlds. To Levandowski, though, it was just a start. The same data, he argued, could be used to make digital maps more accurate than those based on G.P.S. data, which Google had been leasing from companies like NAVTEQ. The street and exit names could be drawn straight from photographs, for instance, rather than faulty government records. This sounded simple enough but proved to be fiendishly complicated. Street View mostly covered urban areas, but Google Maps had to be comprehensive: every logging road logged on a computer, every gravel drive driven down. Over the next two years, Levandowski shuttled back and forth to Hyderabad, India, to train more than two thousand data processors to create new maps and fix old ones. When Apple’s new mapping software failed so spectacularly a year ago, he knew exactly why. By then, his team had spent five years entering several million corrections a day.

Street View and Maps were logical extensions of a Google search. They showed you where to locate the things you’d found. What was missing was a way to get there. Thrun, despite his victory in the second Grand Challenge, didn’t think that driverless cars could work on surface streets—there were just too many variables. “I would have told you then that there is no way on earth we can drive safely,” he says. “All of us were in denial that this could be done.” Then, in February of 2008, Levandowski got a call from a producer of “Prototype This!,” a series on the Discovery Channel. Would he be interested in building a self-driving pizza delivery car? Within five weeks, he and a team of fellow Berkeley graduates and other engineers had retrofitted a Prius for the purpose. They patched together a guidance system and persuaded the California Highway Patrol to let the car cross the Bay Bridge—from San Francisco to Treasure Island. It would be the first time an unmanned car had driven legally on American streets.

On the day of the filming, the city looked as if it were under martial law. The lower level of the bridge was closed to regular traffic, and eight police cruisers and eight motorcycle cops were assigned to accompany the Prius over it. “Obama was there the week before and he had a smaller escort,” Levandowski recalls. The car made its way through downtown and crossed the bridge in fine form, only to wedge itself against a concrete wall on the far side. Still, it gave Google the nudge that it needed. Within a few months, Page and Brin had called Thrun to green-light a driverless-car project. “They didn’t even talk about budget,” Thrun says. “They just asked how many people I needed and how to find them. I said, ‘I know exactly who they are.’ ”

Every Monday at eleven-thirty, the lead engineers for the Google car project meet for a status update. They mostly cleave to a familiar Silicon Valley demographic—white, male, thirty to forty years old—but they come from all over the world. I counted members from Belgium, Holland, Canada, New Zealand, France, Germany, China, and Russia at one sitting. Thrun began by cherry-picking the top talent from the Grand Challenges: Chris Urmson was hired to develop the software, Levandowski the hardware, Mike Montemerlo the digital maps. (Urmson now directs the project, while Thrun has shifted his attention to Udacity, an online education company that he co-founded two years ago.) Then they branched out to prodigies of other sorts: lawyers, laser designers, interface gurus—anyone, at first, except automotive engineers. “We hired a new breed,” Thrun told me. People at Google X had a habit of saying that So-and-So on the team was the smartest person they’d ever met, till the virtuous circle closed and almost everyone had been singled out by someone else. As Levandowski said of Thrun, “He thinks at a hundred miles an hour. I like to think at ninety.”

When I walked in one morning, the team was slouched around a conference table in T-shirts and jeans, discussing the difference between the Gregorian and the Julian calendar. The subtext, as usual, was time. Google’s goal isn’t to create a glorified concept car—a flashy idea that will never make it to the street—but a polished commercial product. That means real deadlines and continual tests and redesigns. The main topic for much of that morning was the user interface. How aggressive should the warning sounds be? How many pedestrians should the screen show? In one version, a jaywalker appeared as a red dot outlined in white. “I really don’t like that,” Urmson said. “It looks like a real-estate sign.” The Dutch designer nodded and promised an alternative for the next round. Every week, several dozen Google volunteers test-drive the cars and fill out user surveys. “In God we trust,” the company faithful like to say. “Everyone else, bring data.”

In the beginning, Brin and Page presented Thrun’s team with a series of DARPA-like challenges. They managed the first in less than a year: to drive a hundred thousand miles on public roads. Then the stakes went up. Like boys plotting a scavenger hunt, Brin and Page pieced together ten itineraries of a hundred miles each. The roads wound through every part of the Bay Area—from the leafy lanes of Menlo Park to the switchbacks of Lombard Street. If the driver took the wheel or tapped the brakes even once, the trip was disqualified. “I remember thinking, How can you possibly do that?” Urmson told me. “It’s hard to game driving through the middle of San Francisco.”

They started the project with Levandowski’s pizza car and Stanford’s open-source software. But they soon found that they had to rebuild from scratch: the car’s sensors were already outdated, the software just glitchy enough to be useless. The DARPA cars hadn’t concerned themselves with passenger comfort. They just went from point A to point B as efficiently as possible. To smooth out the ride, Thrun and Urmson had to make a deep study of the physics of driving. How does the plane of a road change as it goes around a curve? How do tire drag and deformation affect steering? Braking for a light seems simple enough, but good drivers don’t apply steady pressure, as a computer might. They build it gradually, hold it for a moment, then back off again.

For complicated moves like that, Thrun’s team often started with machine learning, then reinforced it with rule-based programming—a superego to control the id. They had the car teach itself to read street signs, for instance, but they underscored that knowledge with specific instructions: “STOP” means stop. If the car still had trouble, they’d download the sensor data, replay it on the computer, and fine-tune the response. Other times, they’d run simulations based on accidents documented by the National Highway Traffic Safety Administration. A mattress falls from the back of a truck. Should the car swerve to avoid it or plow ahead? How much advance warning does it need? What if a cat runs into the road? A deer? A child? These were moral questions as well as mechanical ones, and engineers had never had to answer them before. The DARPA cars didn’t even bother to distinguish between road signs and pedestrians—or “organics,” as engineers sometimes call them. They still thought like machines.
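The “superego over the id” structure—a learned perception layer underscored by hard rules—can be sketched in miniature. This is a hypothetical illustration, not the team’s code; the function names, the confidence threshold, and the stand-in classifier are all assumptions made for the example.

```python
# Illustrative sketch: a learned sign reader wrapped in a rule-based
# layer, so that "STOP" always means stop however the model hedges.
def learned_sign_reader(image):
    """Stand-in for a trained classifier: returns (label, confidence).
    A hypothetical lookup takes the place of a real neural network."""
    return image.get("label", "unknown"), image.get("confidence", 0.0)

def plan_action(image, current_speed):
    label, confidence = learned_sign_reader(image)
    # Rule-based layer: the superego that overrides the learned id.
    if label == "STOP":
        return "stop"        # hard rule, applied at any confidence
    if confidence < 0.6:
        return "slow"        # uncertain reading: be conservative
    if label == "YIELD" and current_speed > 10:
        return "slow"
    return "proceed"
```

The point of the structure is auditability: when the car misbehaves, engineers can replay the sensor data and see whether the learned layer or the rule layer made the call.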

Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the crossing was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”
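The language Thrun describes—nudge to claim your turn, read the other driver’s advance as a “no” and his hesitation as a “yes”—amounts to a small state machine. The sketch below is an invented simplification of that negotiation, not Google’s planner; the state names and actions are assumptions for the example.

```python
# Illustrative sketch: four-way-stop negotiation as a tiny state
# machine. Creeping forward signals intent; another car advancing
# into the intersection is read as a "no", holding back as a "yes".
def four_way_stop_step(state, my_turn, other_car_advancing):
    """Advance the negotiation by one decision tick.
    Returns (new_state, action)."""
    if state == "waiting":
        if my_turn:
            return "nudging", "creep_forward"   # claim the turn
        return "waiting", "hold"
    if state == "nudging":
        if other_car_advancing:
            return "waiting", "hold"            # their advance means "no"
        return "crossing", "accelerate"         # their hesitation means "yes"
    return "crossing", "accelerate"             # already committed
```

A purely law-abiding robot, in these terms, never leaves the “waiting” state until the intersection is empty—which is exactly how the early Google car lost its place in line.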

It took the team a year and a half to master Page and Brin’s ten hundred-mile road trips. The first one ran from Monterey to Cambria, along the cliffs of Highway 1. “I was in the back seat, screaming like a little girl,” Levandowski told me. One of the last started in Mountain View, went east across the Dumbarton Bridge to Union City, back west across the bay to San Mateo, north on 101, east over the Bay Bridge to Oakland, north through Berkeley and Richmond, back west across the bay to San Rafael, south to the mazy streets of the Tiburon Peninsula, so narrow that they had to tuck in the side mirrors, and over the Golden Gate Bridge to downtown San Francisco. When they finally arrived, past midnight, they celebrated with a bottle of champagne. Now they just had to design a system that could do the same thing in any city, in all kinds of weather, with no chance of a do-over. Really, they’d just begun.

These days, Levandowski and the other engineers divide their time between two models: the Prius, which is used to test new sensors and software; and the Lexus, which offers a more refined but limited ride. (The Prius can drive on surface streets; the Lexus only on highways.) As the cars have evolved, they’ve sprouted appendages and lost them again, like vat-grown creatures in a science-fiction movie. The cameras and radar are now tucked behind sheet metal and glass, the laser turret reduced from a highway cone to a sand pail. Everything is smaller, sleeker, and more powerful than before, but there’s still no mistaking the cars. When Levandowski picked me up or dropped me off near the Berkeley campus on his commute, students would look up from their laptops and squeal, then run over to take snapshots of the car with their phones. It was their version of the Oscar Mayer Wienermobile.

Still, my first thought on settling into the Lexus was how ordinary things looked. Google’s experiments had left no scars, no signs of cybernetic alteration. The interior could have passed for that of any luxury car: burl-wood and leather, brushed metal and Bose speakers. There was a screen in the center of the dashboard for digital maps; another above it for messages from the computer. The steering wheel had an On button to the left and an Off button to the right, lit a soft, fibre-optic green and red. But there was nothing to betray their exotic purpose. The only jarring element was the big red knob between the seats. “That’s the master kill switch,” Levandowski said. “We’ve never actually used it.”

Levandowski kept a laptop open beside him as we rode. Its screen showed a graphic view of the data flowing in from the sensors: a Tron-like world of neon objects drifting and darting on a wireframe nightscape. Each sensor offered a different perspective on the world. The laser provided three-dimensional depth: its sixty-four beams spun around ten times per second, scanning 1.3 million points in concentric waves that began eight feet from the car. It could spot a fourteen-inch object a hundred and sixty feet away. The radar had twice that range but nowhere near the precision. The camera was good at identifying road signs, turn signals, colors, and lights. All three views were combined and color-coded by a computer in the trunk, then overlaid by the digital maps and Street Views that Google had already collected. The result was a road atlas like no other: a simulacrum of the world.
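The fusion step—three sensors with different ranges and precisions, merged into one picture—can be caricatured as picking, for each tracked object, the most precise in-range estimate. This is an illustrative sketch only; the range and precision figures for laser and radar echo the article, while the camera numbers and the data layout are invented for the example.

```python
# Illustrative sketch of the fusion step: each sensor reports tracks
# with its own reach and precision, and the fused view keeps the most
# precise in-range estimate per object. Camera figures are invented.
SENSORS = {
    # name: (max range in feet, distance precision in feet)
    "laser": (160, 0.5),
    "radar": (320, 5.0),
    "camera": (250, 15.0),
}

def fuse(tracks):
    """tracks: list of (sensor, object_id, distance). Returns, per
    object, the distance from the most precise sensor that can see it."""
    best = {}
    for sensor, obj, dist in tracks:
        max_range, precision = SENSORS[sensor]
        if dist > max_range:
            continue  # beyond this sensor's reach: discard the reading
        if obj not in best or precision < best[obj][1]:
            best[obj] = (dist, precision, sensor)
    return {obj: (dist, sensor) for obj, (dist, _, sensor) in best.items()}
```

In the real car this fused picture is then overlaid on the prebuilt maps and Street View data, which is what turns raw sensor returns into the “simulacrum of the world” on Levandowski’s laptop.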

I was thinking about all this as the Lexus headed south from Berkeley down Highway 24. What I wasn’t thinking about was my safety. At first, it was a little alarming to see the steering wheel turn by itself, but that soon passed. The car clearly knew what it was doing. When the driver beside us drifted into our lane, the Lexus drifted the other way, keeping its distance. When the driver ahead hit his brakes, the Lexus was already slowing down. Its sensors could see so far in every direction that it saw traffic patterns long before we did. The effect was almost courtly: drawing back to let others pass, gliding into gaps, keeping pace without strain, like a dancer in a quadrille.

The Prius was an even more capable car, but also a rougher ride. When I rode in it with Dmitri Dolgov, the team’s lead programmer, it had an occasional lapse in judgment: tailgating a truck as it came down an exit ramp; rushing late through a yellow light. In those cases, Dolgov made a note on his laptop. By that night, he’d have adjusted the algorithm and run simulations till the computer got it right.

The Google car has now driven more than half a million miles without causing an accident—about twice as far as the average American driver goes before crashing. Of course, the computer has always had a human driver to take over in tight spots. Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. “The risk is too high,” Thrun says. “You would never accept it.” The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces. (The first drops call forth a small icon of a cloud onscreen and a voice warning that auto-drive will soon disengage.) It can’t tell wet concrete from dry or fresh asphalt from firm. It can’t hear a traffic cop’s whistle or follow hand signals.

And yet, for each of its failings, the car has a corresponding strength. It never gets drowsy or distracted, never wonders who has the right-of-way. It knows every turn, tree, and streetlight ahead in precise, three-dimensional detail. Dolgov was riding through a wooded area one night when the car suddenly slowed to a crawl. “I was thinking, What the hell? It must be a bug,” he told me. “Then we noticed the deer walking along the shoulder.” The car, unlike its riders, could see in the dark. Within a year, Thrun added, it should be safe for a hundred thousand miles.

The real question is who will build it. Google is a software firm, not a car company. It would rather sell its programs and sensors to Ford or GM than build its own cars. The companies could then repackage the system as their own, as they do with G.P.S. units from NAVTEQ or TomTom. The difference is that the car companies have never bothered to make their own maps, but they’ve spent decades working on driverless cars. General Motors sponsored Carnegie Mellon’s DARPA races and has a large testing facility for driverless cars outside of Detroit. Toyota opened a nine-acre laboratory and “simulated urban environment” for self-driving cars last November, at the foot of Mt. Fuji. But aside from Nissan, which recently announced that it would sell fully autonomous cars by 2020, the manufacturers are much more pessimistic about the technology. “It’ll happen, but it’s a long way out,” John Capp, General Motors’ director of electrical, controls, and active safety research, told me. “It’s one thing to do a demonstration—‘Look, Ma, no hands!’ But I’m talking about real production variance and systems we’re confident in. Not some circus vehicle.”

When I went to visit the most recent International Auto Show in New York, the exhibits were notably silent about autonomous driving. That’s not to say that it wasn’t on display. Outside the convention center, Jeep had set up an obstacle course for its new Wrangler, including a row of logs to drive over and a miniature hill to climb. When I went down the hill with a Jeep sales rep, he kept telling me to take my foot off the brake. The car was equipped with “descent control,” he explained, but, like the other exhibitors, he avoided terms like “self-driving.” “We don’t even include it in our vocabulary,” Alan Hall, a communications manager at Ford, told me. “Our view of the future is that the driver remains in control of the vehicle. He is the captain of the ship.”

This was a little disingenuous—necessity passing as principle. The car companies can’t do full autonomy yet, so they do it piece by piece. Every decade or so, they introduce another bit of automation, another task gently lifted from the captain’s hands: power steering in the nineteen-fifties, cruise control as a standard feature in the seventies, antilock brakes in the eighties, electronic stability control in the nineties, the first self-parking cars in the two-thousands. The latest models can detect lane lines and steer themselves to stay within them. They can keep a steady distance from the car ahead, braking to a stop if necessary. They have night vision, blind-spot detection, and stereo cameras that can identify pedestrians. Yet the over-all approach hasn’t changed. As Levandowski puts it, “They want to make cars that make drivers better. We want to make cars that are better than drivers.”

Along with Nissan, Toyota and Mercedes are probably closest to developing systems like Google’s. Yet they hesitate to introduce them for different reasons. Toyota’s customers are a conservative bunch, less concerned with style than with comfort. “They tend to have a fairly long adoption curve,” Jim Pisz, the corporate manager of Toyota’s North American business strategy, told me. “It was only five years ago that we eliminated cassette players.” The company has been too far ahead of the curve before. In 2005, when Toyota introduced the world’s first self-parking car, it was finicky and slow to maneuver, as well as expensive. “We need to build incremental levels of trust,” Pisz said.

Mercedes has a knottier problem. It has a reputation for fancy electronics and a long history of innovation. Its newest experimental car can maneuver in traffic, drive on surface streets, and track obstacles with cameras and radar much as Google’s do. But Mercedes builds cars for people who love to drive, and who pay a stiff premium for the privilege. Taking the steering wheel out of their hands would seem to defeat the purpose—as would sticking a laser turret on a sculpted chassis. “Apart from the reliability factor, which can easily become a nightmare, it is not nice to look at,” Ralf Herrtwich, Mercedes’s director of driver assistance and chassis systems, told me. “One of my designers said, ‘Ralf, if you ever suggest building such a thing on top of one of our cars, I’ll throw you out of this company.’ ”

Even if the new components could be made invisible, Herrtwich says, he worries about separating people from the driving process. The Google engineers like to compare driverless cars to airplanes on autopilot, but pilots are trained to stay alert and take over in case the computer fails. Who will do the same for drivers? “This one-shot, winner-take-all approach, it’s perhaps not a wise thing to do,” Herrtwich says. Then again, alert, fully engaged drivers are already becoming a thing of the past. More than half of all eighteen-to-twenty-four-year-olds admit to texting while driving, and more than eighty per cent drive while on the phone. Hands-free driving should seem like second nature to them: they’ve been doing it all along.

One afternoon, not long after the car show, I got an unsettling demonstration of this from engineers at Volvo. I was sitting behind the wheel of one of their S60 sedans in the parking lot of the company’s American headquarters in Rockleigh, New Jersey. About a hundred yards ahead, they’d placed a life-size figure of a boy. He was wearing khaki pants and a white T-shirt and looked to be about six years old. My job was to try to run him over.

Volvo has less faith in drivers than most companies do. Since the seventies, it has kept a full-time forensics team on call at its Swedish headquarters, in Gothenburg. Whenever a Volvo gets into an accident within a sixty-mile radius, the team races to the scene with local police to assess the wreckage and injuries. Four decades of such research have given Volvo engineers a visceral sense of all that can go wrong in a car, and a database of more than forty thousand accidents to draw on for their designs. As a result, the chances of getting hurt in a Volvo have dropped from more than ten per cent to less than three per cent over the life of a car. The company says this is just a start. “Our vision is that no one is killed or injured in a Volvo by 2020,” it declared three years ago. “Ultimately, that means designing cars that do not crash.”

Most accidents are caused by what Volvo calls the four D’s: distraction, drowsiness, drunkenness, and driver error. The company’s newest safety systems try to address each of these. To keep the driver alert, they use cameras, lasers, and radar to monitor the car’s progress. If the car crosses a lane line without a signal from the blinker, a chime sounds. If a pattern emerges, the dashboard flashes a steaming coffee cup and the words “Time for a break.” To instill better habits, the car rates the driver’s attentiveness as it goes, with bars like those on a cell phone. (Mercedes goes a step further: its advanced cruise control won’t work unless at least one of the driver’s hands is on the wheel.) In Europe, some Volvos even come with Breathalyzer systems, to discourage drunken driving. When all else fails, the cars take preëmptive action: tightening the seat belts, charging the brakes for maximum traction, and, at the last moment, stopping the car.

This was the system that I was putting to the test in the parking lot. Adam Kopstein, the manager of Volvo’s automotive safety and compliance office, was a man of crisp statistics and nearly Scandinavian scruples. So it was a little unnerving to hear him urge me to go faster. I’d spent the first fifteen minutes trying to crash into an inflatable car, keeping to a sedate twenty miles an hour. Three-quarters of all accidents occur at this speed, and the Volvo handled it with ease. But Kopstein was looking for a sterner challenge. “Go ahead and hit the gas,” he said. “You’re not going to hurt anyone.”

I did as instructed. The boy was just a mannequin, after all, stuffed with reflective material to simulate the water in a human body. First a camera behind the windshield would identify him as a pedestrian. Then radar from behind the grille would bounce off his reflective innards and deduce the distance to impact. “Some people scream,” Kopstein said. “Others just can’t do it. It’s so unnatural.” As the car sped up—fifteen, twenty, thirty-five miles an hour—the warning chime sounded, but I kept my foot off the brake. Then, suddenly, the car ground to a halt, juddering toward the boy with a final double lurch. It came to a stop with about five inches to spare.

Since 2010, Volvos equipped with a safety system have had twenty-seven per cent fewer property-damage claims than those without it, according to a study by the Insurance Institute for Highway Safety. The system goes out of its way to leave the driver in charge, braking only in extreme circumstances and ceding control at the tap of a pedal or a turn of the wheel. Still, the car sometimes gets confused. Later that afternoon, I took the Volvo out for a test drive on the Palisades Parkway. I contented myself with steering, while the car took care of braking and acceleration. Like Levandowski’s Lexus, it quickly earned my trust: keeping pace with highway traffic, braking smoothly at lights. Then something strange happened. I’d circled back to the Volvo headquarters and was about to turn into the parking lot when the car suddenly surged forward, accelerating into the curve.

The incident lasted only a moment—when I hit the brakes, the system disengaged—but it was a little alarming. Kopstein later guessed that the car thought it was still on the highway, in cruise control. For most of the drive, I’d been following Kopstein’s Volvo, but when that car turned into the parking lot, my car saw a clear road ahead. That’s when it sped up, toward what it thought was the speed limit: fifty miles an hour.

To some drivers, this may sound worse than the four D’s. Distraction and drowsiness we can control, but a peculiar horror attaches to the thought of death by computer. The screen freezes or power fails; the sensors jam or misread a sign; the car squeals to a stop on the highway or plows headlong into oncoming traffic. “We’re all fairly tolerant of cell phones and laptops not working,” GM’s John Capp told me. “But you’re not relying on your cell phone or laptop to keep you alive.”

Toyota got a taste of such calamities in 2009, when some drivers began to complain that their cars would accelerate of their own accord—sometimes up to a hundred miles an hour. The news caused panic among Toyota owners: the cars were accused of causing thirty-nine deaths. But this proved to be largely fictional. A ten-month study by NASA and the National Highway Traffic Safety Administration found that most of the incidents were caused by driver error or roving floor mats, and only a few by sticky gas pedals. By then, Toyota had recalled some ten million cars and paid more than a billion dollars in legal settlements. “Frankly, that was an indicator that we need to go slow,” Jim Pisz told me. “Deliberately slow.”

An automated highway could also be a prime target for cyberterrorism. Last year, DARPA funded a pair of well-known hackers, Charlie Miller and Chris Valasek, to see how vulnerable existing cars might be. In August, Miller presented some of their findings at the annual Defcon hackers conference in Las Vegas. By sending commands from their laptop, they’d been able to make a Toyota Prius blast its horn, jerk the wheel from the driver’s hands, and brake violently at eighty miles an hour. True, Miller and Valasek had to use a cable to patch into the car’s maintenance port. But a team at the University of California, San Diego, led by the computer scientist Stefan Savage, has shown that similar instructions could be sent wirelessly, through systems as innocuous as a Bluetooth receiver. “Existing technology is not as robust as we think it is,” Levandowski told me.

Google claims to have answers to all these threats. Its engineers know that a driverless car will have to be nearly perfect to be allowed on the road. “You have to get to what the industry calls the ‘six sigma’ level—three defects per million,” Ken Goldberg, the industrial engineer at Berkeley, told me. “Ninety-five per cent just isn’t good enough.” Aside from its test drives and simulations, Google has encircled its software with firewalls, backup systems, and redundant power supplies. Its diagnostic programs run thousands of internal checks per second, searching for system errors and anomalies, monitoring its engine and brakes, and continually recalculating its route and lane position. Computers, unlike people, never tire of self-assessment. “We want it to fail gracefully,” Dolgov told me. “When it shuts down, we want it to do something reasonable, like slow down and go on the shoulder and turn on the blinkers.”

Still, sooner or later, a driverless car will kill someone. A circuit will fail, a firewall collapse, and that one defect in three hundred thousand will send a car plunging across a lane or into a tree. “There will be crashes and lawsuits,” Dean Pomerleau said. “And because the car companies have deep pockets they will be targets, regardless of whether they’re at fault or not. It doesn’t take many fifty- or hundred-million-dollar jury decisions to put a big damper on this technology.” Even an invention as benign as the air bag took decades to make it into American cars, Pomerleau points out. “I used to say that autonomous vehicles are fifteen or twenty years out. That was twenty years ago. We still don’t have them, and I still think they’re ten years out.”

If driverless cars were once held back by their technology, then by ideas, the limiting factor now is the law. Strictly speaking, the Google car is already legal: drivers must have licenses; no one said anything about computers. But the company knows that won’t hold up in court. It wants the cars to be regulated just like human drivers. For the past two years, Levandowski has spent a good deal of his time flying around the country lobbying legislatures to support the technology. First Nevada, then Florida, California, and the District of Columbia have legalized driverless cars, provided that they’re safe and fully insured. But other states have approached the issue more skeptically. The bills proposed by Michigan and Wisconsin, for instance, both treat driverless cars as experimental technology, legal only within narrow limits.

Much remains to be defined. How should the cars be tested? What’s their proper speed and spacing? How much warning do drivers need before taking the wheel? Who’s responsible when things go wrong? Google wants to leave the specifics to motor-vehicle departments and insurers. (Since premiums are based on statistical risk, they should go down for driverless cars.) But the car companies argue that this leaves them too vulnerable. “Their original position was ‘We shouldn’t rush this. It’s not ready for prime time. It shouldn’t be legalized,’ ” Alex Padilla, the state senator who sponsored the California bill, told me. But their real goal, he believes, was just to buy time to catch up. “It became clear to me that the interest here was a race to the market. And everybody’s in the race.” The question is how fast should they go.

At the tech meeting I attended, Levandowski showed the team a video of Google’s newest laser, slated to be installed within the year. It had more than twice the range of previous models—eleven hundred feet instead of two hundred and sixty—and thirty times the resolution. At three hundred feet, it could spot a metal plate less than two inches thick. The laser would be about the size of a coffee mug, he told me, and cost around ten thousand dollars—seventy thousand less than the current model.

“Cost is the least of my worries,” Sergey Brin had told me earlier. “Driving the price of technology down is like”—he snapped his fingers. “You just wait a month. It’s not fundamentally expensive.” Brin and his engineers are motivated by more personal concerns: Brin’s parents are in their late seventies and starting to get unsteady behind the wheel. Thrun lost his best friend to a car accident, and Urmson has children just a few years shy of driving age. Like everyone else at Google, they know the statistics: worldwide, car accidents kill 1.24 million people a year, and injure another fifty million.

For Levandowski, the stakes first became clear three years ago. His fiancée, Stefanie Olsen, was nine months pregnant at the time. One afternoon, she had just crossed the Golden Gate Bridge on her way to visit a friend in Marin County when the car ahead of her abruptly stopped. Olsen slammed on her brakes and skidded to a halt, but the driver behind her wasn’t so quick. He slammed into her Prius at more than thirty miles an hour, pile-driving it into the car ahead. “It was like a tin can,” Olsen told me. “The car was totalled and I was accordioned in there.” Thanks to her seat belt, she escaped unharmed, as did her baby. But when Alex was born he had a small patch of white hair on the back of his head.

“That accident never should have happened,” Levandowski told me. If the car behind Olsen had been self-driving, it would have seen the obstruction three cars ahead. It would have calculated the distance to impact, scanned the neighboring lanes, realized it was boxed in, and hit the brakes, all within a tenth of a second. The Google car drives more defensively than people do: it tailgates five times less, rarely coming within two seconds of the car ahead. Under the circumstances, Levandowski says, our fear of driverless cars is increasingly irrational. “Once you make the car better than the driver, it’s almost irresponsible to have him there,” he says. “Every year that we delay this, more people die.”

After a long day in Mountain View, the drive home to Berkeley can be a challenge. Levandowski’s mind, accustomed to pinwheeling in half a dozen directions, can have trouble focussing on the two-ton hunks of metal hurtling around him. “People should be happy when I’m on automatic mode,” he told me, as we headed home one night. He leaned back in his seat and put his hands behind his head, as if taking in the seaside sun. He looked like the vintage illustrations of driverless cars on his laptop: “Highways made safe by electricity!”

The reality was so close that he could envision each step: The first cars coming to market in five to ten years. Their numbers few at first—strange beasts on a new continent—relying on sensors to get the lay of the land, mapping territory street by street. Then spreading, multiplying, sharing maps and road conditions, accident alerts and traffic updates; moving in packs, drafting off one another to save fuel, dropping off passengers and picking them up, just as Brin had imagined. For once it didn’t seem like a fantasy. “If you look at my track record, I usually do something for two years and then I want to leave,” Levandowski said. “I’m a first-mile kind of guy—the guy who rushes the beach at Normandy, then lets other people fortify it. But I want to see this through. What we’ve done so far is cool; it’s scientifically interesting; but it hasn’t changed people’s lives.”

When we arrived at his house, his family was waiting. “I’m a bull!” his three-year-old, Alex, roared as he ran up to greet us. We acted suitably impressed, then wondered why a bull would have long whiskers and a red nose. “He was a kitten a little while ago,” his mother whispered. A former freelance reporter for the Times and CNET, Olsen was writing a techno-thriller set in Silicon Valley. She worked from home now, and had been cautious about driving since the accident. Still, two weeks earlier, Levandowski had taken her and Alex on their first ride in the Google car. She was a little nervous at first, she admitted, but Alex had wondered what all the fuss was about. “He thinks everything’s a robot,” Levandowski said.

While Olsen set the table, Levandowski gave me a brief tour of their place: an Arts and Crafts house from 1909, once home to a hippie commune led by Tom Hayden. “You can still see the burn marks on the living-room floor,” he said. For a registered Republican and a millionaire many times over, it was a quirky, modest choice. Levandowski probably could have afforded that stateroom in a 747 by now, and made good use of it. Last year alone, he flew more than a hundred thousand miles in his lobbying efforts. There was just one problem, he said. It was irrational, he knew. It went against all good sense and a raft of statistics, but he couldn’t help it. He was afraid of flying.

Source: http://www.newyorker.com/reporting/2013/11/25/131125fa_fact_bilger?currentPage=all

All the Apps and Services Apple Just Tried to Make Obsolete

Apple senior vice president of Software Engineering Craig Federighi speaks in front of a screen for the Yosemite operating system at the Apple Worldwide Developers Conference in San Francisco, Monday, June 2, 2014.
Image: Jeff Chiu/Associated Press
Apple gave developers a much-anticipated first look at iOS 8 and OS X 10.10 during Monday’s keynote presentation at the company’s annual Worldwide Developers Conference.

The company unveiled an array of new and revamped features that will be coming to OS X and iOS — many of which may have looked a little familiar to those already used to popular third-party apps and services.

From messaging to fitness tracking to Google searches, here’s a look at some of the apps and services Apple seems to be taking head-on with iOS 8 and OS X Yosemite.

Cloud storage platforms

Apple announced iCloud Drive, a cloud-based file management system that will be available on iOS 8 and OS X Yosemite, will work with third-party apps, and will run on Windows 8. Files stored in iCloud Drive can be viewed across devices, and edits are automatically synced to the cloud so you can easily pick up where you left off.


Image: Apple

Sound familiar? That’s because the system is very similar to Dropbox, Google Drive, Box, Microsoft’s OneDrive and pretty much every other cloud storage platform, though Box’s CEO, Aaron Levie, seems to be excited about the move. Apple also announced new pricing more in line with its cloud storage competitors: 20 GB will cost $0.99 a month, while 200 GB will be $3.99.

Messaging apps

Messages, the most-used iOS app, is getting a huge overhaul in iOS 8 that borrows many features from some of the most popular third-party messaging apps. As if Snapchat didn’t have enough to worry about lately, Apple seems to be taking some cues from the disappearing messages app.

Photos and videos will now automatically disappear from message threads, unless actively saved. Additionally, the gesture-based controls for recording audio messages are not unlike those used in the video chatting features that debuted in Snapchat’s most recent update.


Messages is also getting voice messaging and the ability to mute or leave group threads, all of which are features already available in other popular messaging apps, including WhatsApp.

WhatsApp CEO Jan Koum has already expressed his displeasure with Apple over some of its new messaging features, though he did not elaborate on which ones.

Apple is also taking on apps like Google Voice and Skype with the ability to make phone calls from the desktop in Yosemite.

Photo editors

Apple’s new Photos app includes an array of new smart editing tools. While you can still control things like levels, brightness and contrast independently, the new features allow you to adjust all of these at once to the optimum level with just one swipe.


Image: Apple

This is similar to many image-editing apps, such as Camera+ and Afterlight, that offer features to automatically enhance and correct photos. Apple’s version will have an additional advantage, though: all changes made within Photos will be synced with iCloud, so the edited images will be available in real time across all of your devices.

Fitness trackers

Though we didn’t see the long-rumored iWatch during the keynote, Apple’s Health app is still taking on the fitness-tracking space.

Apple Senior Vice President of Software Engineering Craig Federighi speaks during the Apple Worldwide Developers Conference at the Moscone West center on June 2, 2014 in San Francisco, California.

The app will monitor all of your health-related information in one place, including stats from fitness tracking apps, and work with third-party apps like Nike+.

Google Now, Google searches

Apple’s Spotlight Search is being revamped for both iOS 8 and Yosemite. Not only will Spotlight search for content stored locally on your device, it will search Wikipedia, maps, news, movie showtimes and iTunes and App Store content.


Image: Apple

The new Spotlight also opts for Bing over Google for web searches in Yosemite. (It appears Spotlight searches in iOS 8 will continue to use Google, for now.) This change has already caused some to speculate whether Apple may be working on its own Siri-powered search engine.

Of course, whether or not Apple’s versions of these apps will eventually overshadow their third-party counterparts depends on how well they actually work. Apple has been known to epically miss the mark when it comes to launching new apps — Apple Maps is only just now enjoying a comeback, after a catastrophic rollout in 2012. But if Maps has taught us anything it’s that Apple tends to get it right eventually, and when it does, it sticks.

Source: http://mashable.com/2014/06/02/apps-apple-made-obsolete-wwdc

Google Gets Into Carmaking

Google is getting into carmaking: the Internet company has unveiled the first prototype of its own self-driving vehicle.

The vision is a small electric two-seater that does away with the steering wheel and pedals entirely. Around 100 test vehicles will be built initially, the company announced overnight in a blog post.

At first, these will still have the familiar controls. Developing a market-ready version together with partners will take several more years, project lead Chris Urmson wrote.

Tests with the Prius
Google has been testing vehicles with autopilot since 2009, fitting existing models such as Toyota’s Prius with laser sensors and radar units. So far, however, the design has assumed that the driver can retake control of the vehicle in certain situations. The first rumors that the Internet company was also developing cars entirely of its own surfaced last year.

The auto industry sees autonomous driving as a promising trend for the future. All the major manufacturers and suppliers, as well as several companies from outside the industry, are now working on driverless cars.

Sources:
http://googleblog.blogspot.co.at/2014/05/just-press-go-designing-self-driving.html
http://orf.at/#/stories/2231789/
http://derstandard.at/2000001614203/Ohne-Lenkrad-und-Bremspedal-Google-stellt-selbstfahrendes-Auto-vor

Google’s latest chapter for the self-driving car: mastering city street driving

Jaywalking pedestrians. Cars lurching out of hidden driveways. Double-parked delivery trucks blocking your lane and your view. At a busy time of day, a typical city street can leave even experienced drivers sweaty-palmed and irritable. We all dream of a world in which city centers are freed of congestion from cars circling for parking (PDF) and have fewer intersections made dangerous by distracted drivers. That’s why over the last year we’ve shifted the focus of the Google self-driving car project onto mastering city street driving.

Since our last update, we’ve logged thousands of miles on the streets of our hometown of Mountain View, Calif. A mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area. We’ve improved our software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted.

Here’s a video showing how our vehicle navigates some common scenarios near the Googleplex:


As it turns out, what looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer. As we’ve encountered thousands of different situations, we’ve built software models of what to expect, from the likely (a car stopping at a red light) to the unlikely (blowing through it). We still have lots of problems to solve, including teaching the car to drive more streets in Mountain View before we tackle another town, but thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously.

Our vehicles have now logged nearly 700,000 autonomous miles, and with every passing mile we’re growing more optimistic that we’re heading toward an achievable goal—a vehicle that operates fully without human intervention.

BitCoin’s 3 Fatal Design Flaws

For those who don’t know, BitCoin is a digital currency (known as a cryptocurrency) that is not issued by governments or banks. Instead, the currency uses some complicated programming to limit the amount of money that can be created. Only 21 million BitCoins will ever be created, and there is no human decision-maker who can influence that. For advocates of the currency, this is a major advantage, as it prevents the abuse of the power to create money. It is easy to see why this would be so appealing – after all, we have recently seen the damage that can happen when commercial banks have the power to create hundreds of billions of pounds in just a few years.

But there are serious problems with BitCoin. This was highlighted most recently when one of the largest exchanges, MtGox, revealed that it had lost around $350 million of customers’ money after a hacking incident. “Lost” in this sense doesn’t mean the exchange made investments that went bad; the BitCoins were literally stolen, now exist on somebody else’s computer, and the exchange has no idea where they are.

I want to look at BitCoin’s design flaws here, so if you want to know more about the details of the currency itself, read How to Explain BitCoin to your Grandmother by Brett Scott or this Chicago Federal Reserve paper for a central bank perspective.

BitCoin is a prototype

The key point to note is that BitCoin is a prototype for what is now known as cryptocurrency. It was the first of its kind, an experiment designed by someone (or some group) going by the name Satoshi Nakamoto. The original paper that outlines the proposal for the currency is well written but has the tone of a working paper – an initial proposal, not fully thought out, rather than a fully worked-out master plan.

What usually happens with a new idea or product is that you try it out, find that it’s inherently flawed, and then alter the design to make it work better. Orville and Wilbur Wright’s original plane flew only a few dozen metres. The first bicycle, designed in 1817, involved sitting on a saddle whilst pushing the bike along by running with your feet on the ground:

First Bicycle

The fanaticism of some BitCoin enthusiasts, along with the claims that BitCoin – specifically – will become the currency of the future, is a bit like someone in 1902 insisting that in the future we’ll all be flying across the Atlantic in individual gliders that look like this:

Wright Brothers Initial Plane

Of course we won’t. The first prototype of something should be a test case, which reveals the design flaws then gets discarded in favour of something better.

I believe there are three design flaws that are fatal for BitCoin.

Design Flaw 1. The rate of money creation

BitCoin is designed so that new BitCoins are created (‘mined’) at a predetermined and gradually decelerating rate. Around half of all the BitCoins that will ever exist have already been created. The money supply will increase by another 66% between now and 2025, but by then the rate of creation of new BitCoins will have slowed to a negligible amount, essentially making it a fixed money supply by 2025.
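
This schedule follows from two public protocol parameters: the block reward starts at 50 BitCoins and halves every 210,000 blocks (roughly every four years), which is what caps the eventual supply just under 21 million. A minimal sketch of that geometric issuance rule:

```python
# Sketch of BitCoin's issuance schedule: the block subsidy starts at 50 BTC
# and halves every 210,000 blocks, so the total supply converges on ~21 million.
HALVING_INTERVAL = 210_000           # blocks per halving era
SATOSHI = 100_000_000                # smallest units per BTC

def cumulative_supply(eras: int) -> float:
    """Total BTC issued after the given number of halving eras."""
    total, subsidy = 0, 50 * SATOSHI
    for _ in range(eras):
        total += HALVING_INTERVAL * subsidy
        subsidy //= 2                # integer halving, as in the protocol
    return total / SATOSHI

print(cumulative_supply(1))          # 10,500,000 BTC after the first era
print(cumulative_supply(64))         # just under 21,000,000 BTC: the hard cap
```

Because the subsidy eventually rounds down to zero, the curve in the chart below flattens out no matter how many more blocks are mined.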

Image: BitCoin monetary base over time

This limited supply was supposed to be a clever design feature, but actually it’s turned BitCoin into a speculative asset. The problem with this is that the amount of the currency doesn’t increase in line with the number of people using it. Economists from the Austrian school would argue that this is fine: just allow prices to fall relative to the currency. Indeed, that’s what has happened with BitCoin – each BitCoin now buys you more real “stuff” in the economy than it did in the past.

The problem comes when the limited supply affects the way people use the currency. BitCoin users who have seen the currency go from 1 BitCoin = $5 (in 2011) to 1 BitCoin = $445 (as it currently is) don’t think “Great, the price of a Coke is falling in terms of BitCoin”. Instead they think, “If I sit on the BitCoins that I own, in 1 year they might be worth 10 times more. So I won’t spend them.”

This means that BitCoin users don’t want to pay using BitCoin. In other words, they want to use BitCoin as a speculative investment, rather than as a means of payment. 

The only way to avoid this is to ensure that the supply of the currency increases in line with how much it is being used, so that the exchange of BitCoin to other currencies or of BitCoin to real goods and services is broadly stable. Without this design feature, a currency that consistently and rapidly appreciates relative to other currencies will be held as an asset rather than being used to make payments.

This is a design flaw specific to BitCoin. Other cryptocurrencies have different ways of regulating the creation of the coins.

A Note on Volatility

BitCoin is also highly volatile, having jumped from $13.36 at the beginning of 2013 to $1,124.76 in November 2013 – an increase of more than 8,300% – and then back down to $445 today. I don’t list this as one of the currency’s design flaws, because it is largely down to BitCoin being new and uncertain, and to the authorities not yet knowing how to deal with it. The volatility is driven more by speculation about whether BitCoin will be banned or accepted than by the fundamental issue of the rate of money creation.
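
Those percentage figures are ordinary price-change arithmetic; a quick check using the dollar prices quoted above:

```python
def pct_change(start: float, end: float) -> float:
    """Percentage change from a starting price to an ending price."""
    return (end / start - 1) * 100

# The 2013 run-up: $13.36 -> $1,124.76, roughly a 8,300% rise
print(round(pct_change(13.36, 1124.76)))
# The fall back to $445: roughly a 60% drop from the peak
print(round(pct_change(1124.76, 445.0)))
```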

Design Flaw 2: BitCoin rewards the adopters and speculators

As with the current monetary system, BitCoin rewards the creators of the currency (the ‘miners’ who use their computers to do complex calculations to create the currency). The early adopters have become very wealthy, along with speculators who sit on their coins rather than spending them. Again, this means that those who benefit from the currency are not those who use it to trade in the real economy, i.e. the people who actually produce real value and make BitCoin a viable and usable currency. Instead, the benefit goes to those who sit on the currency, which prevents it from functioning as a currency and turns it into a speculative asset.

I would prefer to see a cryptocurrency that rewards those who use the currency as a means of payment, rather than as a speculative asset. So the more you use the currency to buy goods and services from the real economy, the more you would get rewarded with a portion of any newly created currency, whereas those who sit on their coins and use them as a speculative asset would get no share of the newly created money.

My knowledge of computer science and maths isn’t sufficient to say how this could be programmed, but it doesn’t appear to be too complicated. (There would need to be some kind of check to ensure that people don’t end up gaming the system, for example two users trading the currency between themselves at high speed in order to ‘earn’ more of the newly created coins.)
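
The core of the proposal is simple to sketch. The function and names below are purely illustrative – this is a toy model of the idea described above, not any existing cryptocurrency’s mechanism: each period, newly created coins are shared among users in proportion to what they spent, so hoarders receive nothing.

```python
# Toy sketch of the "reward spenders, not hoarders" idea: split each
# period's newly minted coins pro-rata by how much each user spent.
def distribute_new_coins(spending_by_user: dict, new_coins: float) -> dict:
    """Return each user's share of new_coins, proportional to their spending."""
    total_spent = sum(spending_by_user.values())
    if total_spent == 0:
        return {user: 0.0 for user in spending_by_user}
    return {user: new_coins * spent / total_spent
            for user, spent in spending_by_user.items()}

rewards = distribute_new_coins(
    {"alice": 300.0, "bob": 100.0, "carol": 0.0},  # carol hoards her coins
    40.0,                                          # coins minted this period
)
# alice did 75% of the spending, so she gets 30.0 coins; carol gets 0.0
```

The gaming problem flagged in the parenthetical is visible even in this toy version: two users could pay each other back and forth at high speed to inflate their measured “spending”, which is why any real design would need some anti-wash-trading check.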

Design Flaw 3: BitCoin is LESS secure than national currencies

Because of the design of BitCoin, each coin should be seen as a physical unit that exists on a specific computer hard drive. In the same way that a house burglar could steal gold coins (which I’m sure you have lying around the house), a computer hacker can steal your BitCoins.

One user left his coins on a hard drive that went to landfill, and then saw the value of the coins appreciate to hundreds of thousands of pounds. The MtGox exchange allegedly ‘lost’ 650,000 BitCoins as a result of hacking.

Anyone holding a significant amount of BitCoins is advised to transfer them to “cold storage” – a hard drive or USB stick that is kept disconnected from any internet-connected computer and hidden somewhere secure (e.g. a physical safe).

For all the arguments that BitCoin is ‘safer’ because it has no central authority, it certainly isn’t safer in practical terms.

The Way Forward

Cryptocurrencies are fascinating. We’ve made a very clear argument that the current monetary system, in which most money is created by banks when they make loans, has been a disaster. But at the same time, when states have used their power to create money, such as through QE, they’ve used it to inflate financial markets (enriching the already wealthy), rather than benefitting the real economy and ordinary people.

We’re obviously campaigning for national currencies to be created and used in the public interest, but it’s still possible that national currencies might be bypassed completely if a currency comes along that is stable, works in the interest of ordinary people, and prevents abuse of the power to create money.

Since BitCoin was established, literally hundreds of other cryptocurrencies have been designed and released. One of them already out there might have the right design features to make a stable currency that can be a real benefit to society and the economy. Cryptocurrencies have only been around for half a decade; there will be a lot of innovation over the next 5 years and it’s possible that we might see something genuinely socially useful come out of it.

But with regards to BitCoin, it’s time to let it die to make way for something better.

 

PS. This reddit thread by people who lost money when the MtGox exchange shut down shows how BitCoin has become a speculative asset bubble, similar to the dot-com bubble or any stock market bubble. There are stories of people taking their kids’ education fund or their partner’s life savings and investing it entirely in BitCoin. One guy even claims his friend committed suicide after investing – and losing – over $900,000 in BitCoin.

But this is not the fault of BitCoin, or a disadvantage of BitCoin. It’s more a fault of a lack of general financial literacy, in particular an ignorance of the basic point that you should never invest all of your wealth in one single asset, whether it’s BitCoin, or RBS shares (or property for that matter). Many of these people had no concept of risk management. I’m not sure we can blame them – an understanding of money and financial literacy is not something that most people acquired at school.

There’s also the desire to “get rich quick” or even just boost your income beyond what you can earn from working. Again, I’m not sure how much we can blame people for that. When the current monetary system is making it harder and harder for people to save anything after paying the mortgage and the costs of living, it’s natural to look for other ways of making money. If the guy mentioned above genuinely believed that investing in BitCoin would mean that his kids could go to university whilst avoiding being saddled with the debt, then it’s natural for him to take that option. It was the lack of understanding of money, finance or risk management that led to him making such a bad decision.

Source: http://www.positivemoney.org/2014/04/bitcoins-fatal-design-flaws/

Job Search Tactics With the Highest Probability of Success

You’ve been on the job hunt for weeks.

You're applying immediately to every job you come across that's remotely related to your field. You're getting your resume in the hands of anyone you meet. You're following up with hiring managers like your life depends on it.

And still? Nothing. Nada. Zilch.

Well, I’m going to tell you a little secret.

It might be you that’s the problem.

I know—before you get all ready to tussle with me, let me assure you that I realize that most people are smart and motivated and have all the best intentions when it comes to landing that next big thing. The problem is that most of us don’t have much training on how to not suck at the job search. Which means—we’re bound to make some gaffes along the way.

So let’s change that. ASAP.

Rule #1: If you're using any of these (very common) job search tactics, you must change course immediately.

 

1. Spending 100% of Your Search Time Submitting Online Applications

If trolling the job boards is your primary search tactic, you’re looking at a long road ahead. Realize that, for every job you pursue, at least one or two people are going to find an “in” at that company. And they’re going to use that “in” to get a direct introduction. Would you rather be the one with the “in,” or one of the other 20, 80, or 400 contenders coming in via the automated “clump” of applicants?

Instead: Even if you apply for the job online, the moment you hit “send,” head over to LinkedIn and see if you have a first- or second-degree connection at that company. Reach out, stat. Your goal is to be the one who gets the direct introduction.

 

2. Applying for Jobs (Blindly) When You’re Not an Obvious On-Paper Match

Nobody’s sitting around deducing what you might be good at or why you might make sense for any particular job. Read: When you apply online, if your resume and cover letter don’t speak to the specific needs and deliverables of the job—and spell out exactly how you are going to meet them—no applicant tracking system is going to even find it.

Instead: If you're not an obvious match (on paper) for a job, you either need to figure out a way to make yourself one (e.g., gaining new skills, taking on volunteer opportunities or freelance work to boost your resume), or find an opportunity to explain your rationale for applying directly to a hiring manager (e.g., show how your previous work experience in your current field would translate seamlessly to this new job).

 

3. Expecting “I’m a Fast Learner” Will Clinch Anything for You

Unless you’re applying for a job that is, by nature, entry level, you should pretty much assume that the decision makers are on the lookout for someone who can hit the ground running. Does this mean you’ll never land a job in a new industry? Not at all. But if you’re pressed in an interview on why they should take a chance on you, don’t think for a moment the hiring manager is looking for “Because I’m a fast learner.”

Instead: Think about how the aggregate of your skills and experiences (no matter how unrelated) may actually make you a great candidate for that role. If you’re clear on why you’d be perfect for the job, it’ll be a heck of a lot easier for the decision makers to feel confident about hiring you, even if you’re a bit green.

 

4. Foisting Your Resume on Strangers Before You’ve Spent 10 Seconds Building Some Rapport

Would you ever walk up to a stranger and propose marriage? Of course you wouldn’t. So why do you think it’s remotely OK to find someone who works at your dream company and—before you’ve even gotten to the “How about that crazy weather?” stage of small talk—shove your resume at him, with a plea to take it on over to the manager? That’s not networking, that’s ambushing.

Instead: If you meet a contact or find a great connection on LinkedIn, look for ways to build a relationship before you ask for a job. Think: "Hi Jill, You and I are both members of the Dallas Market Researchers group here on LinkedIn. I notice that you're an analyst with Fort Knox Inc. I'm a research analyst, too, and I've heard great things about your firm. May I ask you just two quick questions about your role?"

 

5. Calling the HR Person, Recruiter, or Hiring Manager with Ridiculous Frequency

Yes, I know. The squeaky wheel gets the oil. Fortune favors the bold. Ask and ye shall receive. All sound mantras. But there is a very fine line between “confident, proactive professional” and “desperate dude who will not stop calling us.”

Instead: If you haven’t heard back about a position, follow up nicely by email after your original thank-you note: “Hi Mary, Just a quick note–you mentioned that you’d be firming up hiring plans this week. I’m very excited to help you bring the Canyon Product Line to market in 2015. No response needed, but please let me know if I can provide any additional information to aid you in your final decision.”

Job searching isn’t easy, nor can it be boiled down to a single, perfect formula. But if you eliminate the tactics that don’t work (or make you look flat-out foolish), and start replacing them with more effective alternatives?

You’ll probably start seeing progress. And progress gives you momentum. And momentum?

That’s what allows you to steamroll your way to greatness.

Source: http://www.themuse.com/advice/5-job-search-tactics-you-must-stop-immediately

Apple's Strategy in Mobile Payment

No other company is scrutinized and watched as closely as Apple. Hardly any other sparks as much speculation and rumor, generates as much news, and fuels the imagination about new products, offerings, and technologies as this company from Cupertino. And with good reason: Apple has proven in the past, with its new offerings and products, that it is able to turn the business of entire industries upside down and change markets radically.

But does that still hold true today? In recent years, with the transition from Steve Jobs to Tim Cook, the doomsayers have grown ever louder. Are Apple's best days already behind it? Is the company entering mobile payment? With which offerings, and in which role? Will it perhaps strike a sweeping blow? Or has the train already left the station because Apple has so far not integrated NFC (Near Field Communication)? Or does the "Android strategy" of an open ecosystem even toll the death knell for the company?

Everything seems open. As always, Apple does not show its cards. Why would it? By now, however, there are more than clear signals as to which direction the company is moving in. If we factor in the company's past behavior, Apple's moves in mobile payment should be observed and tracked carefully. The window for designing one's own mobile payment strategies and measures is slowly beginning to close.

No competitor in this segment should assume that in one or two years it will still have time to calmly set up its pieces on the "next generation payment chessboard." It is to be expected that Apple's first move will not be "white, e2 to e4" but "checkmate."

Scaremongering or trend? What speaks for it?

This article lays out facts, analyses, and assessments in seven points, followed by an evaluation. The first four of the seven follow here in Part 1:

1. The facts

With 600 million iTunes users, the customer base is 4.37 times as large as PayPal's and 3.4 times as large as Amazon's, and 500,000 customers are added every day. In the United States, 66% of m-commerce spending is generated via iOS applications. In Germany, 60% of the usage of the 30 leading mobile banking apps (of German banks), judging by published download figures, runs on this operating system. Developers and the new mobile POS startups such as Square, iZettle, and Payleven love Apple devices and iOS.
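As a rough sanity check on the multiples quoted above, the stated figures imply absolute customer bases for the competitors. This is an illustrative back-of-the-envelope calculation only; the implied numbers are not figures given in the article:

```python
# Back-of-the-envelope check of the customer-base multiples quoted in the article.
itunes_users = 600_000_000  # stated iTunes user count

# "4.37 times as large as PayPal" and "3.4 times as large as Amazon"
implied_paypal = itunes_users / 4.37
implied_amazon = itunes_users / 3.4

print(f"Implied PayPal base: ~{implied_paypal / 1e6:.0f} million")  # ~137 million
print(f"Implied Amazon base: ~{implied_amazon / 1e6:.0f} million")  # ~176 million
```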

Last month, Tim Cook reported the highest revenue in the company's history and announced that 51 million iPhone 5S units had been sold, all with a margin of 37.9 percent across all devices. 41% of smartphone customers in the US are now said to use the iPhone, and 57% of mobile internet access in China happens via the iOS operating system. And iPhones achieve customer satisfaction and loyalty ratings of over 90%.

2. Apple "Passbook"

In the fall of 2012, Apple introduced Passbook, a "barcode payment container" in which coupons, tickets, and boarding passes can be stored. Through a technical interface, developers and brands can easily integrate their services into Passbook. More than 100 brands, including Lufthansa, Air Berlin, and British Airways, let tickets be stored in Passbook; more than 150 brands, such as Starbucks, integrate vouchers into it. The first voucher campaigns, such as Harvester's in England, are turning heads.

3. Fingerprint sensor: Touch ID

Security is the biggest hurdle for customers and merchants on the way to successful mobile banking and payment, and thus has a direct influence on how often the services are used and accepted. Beyond faster access to the device, the fingerprint is to be used for authentication in the future, an essential prerequisite for a fast, attractive, and secure payment process. In America and Asia, biometric methods are already more common in everyday life than in Europe.

The following questions will have to be answered, and not only here: Will customers and merchants accept biometric technology? What must be taken into account with this technology? What requirements do banks and merchants have? Is the technology suitable for online banking or commerce? What do the supervisory authorities say? All adjustments notwithstanding, according to Tim Cook: "Mobile Payment via Touch ID on the way."

4. Bluetooth Low Energy (BLE) and iBeacons

The BLE technology standard has long been integrated into the relevant smartphones and tablets of the leading manufacturers, and so far it is more widespread than NFC technology. With iBeacons, introduced in September 2013, in-store marketing and indoor positioning solutions become possible, up to and including mobile payment in combination with Touch ID, among other things. Beyond that, BLE and iBeacons are Apple's answer to the "Internet of Things," another major trend that will, in the medium to long term, embed the mobile use of smartphones and tablets in economic life.

5. Influencing factor: China

The Chinese market, Apple's most important target market, has become the largest mobile payment market in the world practically from a standing start. What is surprising is less the fact itself than the speed with which it happened. That speed shows one thing above all: there is enormous demand for mobile payment. Incidentally, this also brings the Gartner forecasts back in line. The Chinese have jumped into the breach left by Google. The Chinese central bank recently reported USD 1.6 trillion in mobile payments, with the number of transactions up 213% and their value up 317% year over year. 0.8 percent of mobile payment transactions are based on NFC. China currently has 500 million mobile internet users.
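The two growth rates quoted for China also let us infer how the average transaction value moved. A small illustrative calculation, using only the percentages in the text (the ~33% result is derived, not a figure from the article):

```python
# Year-over-year growth in China's mobile payment market, per the article:
# transactions up 213% (factor 3.13), transaction value up 317% (factor 4.17).
txn_factor = 1 + 2.13
value_factor = 1 + 3.17

# Value grew faster than volume, so the average ticket size rose as well.
avg_ticket_growth = value_factor / txn_factor - 1
print(f"Implied growth in average transaction value: ~{avg_ticket_growth:.0%}")  # ~33%
```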

Apple's Moves in Mobile Payment: An Analysis (Part 2)

But what does this have to do with Apple? Last year, after long negotiations, Apple signed a contract with China Mobile, the world's largest mobile carrier with 750 million customers, and thus finally got a foothold in the important Chinese market. In mobile payment, China Mobile and its partner NTT Docomo rely on NFC technology and have gained 3 million users within a very short time. In addition, Chinese banks are being offered cooperation on a mobile payment platform (a collaborative strategy model) in conjunction with China Mobile. The billion-dollar question is therefore: which way will Apple now decide?

6. Influencing factor: Strategies

Thanks to its successful device strategies, Apple already holds key cornerstones of the mobile payment value chain: the customers and the merchants. With the decision for BLE and iBeacons, mobile payment services that can be deployed quickly, with less ecosystem complexity, become possible in the short term. Fatal for today's NFC payment providers is Apple's current lack of NFC technology, since across countries, including in Germany, the most active users (around 60%) come from the iOS camp. Sticker solutions have to be regarded as second-best. The market is unlikely to really take off until integrated offerings exist. Last but not least, Apple tests its mobile payment solution extensively in its own stores before bringing it to market.

7. Innovations and patents

Apple has recently filed numerous patents for mobile payment services. These include "payments for goods initiated via a signal from a smartphone to a wireless receiver," "a sensor that combines biometric and NFC technology," and "a method and system for managing credits."

Assessment:

Of all the players, Apple is in the most comfortable position: it has by far the most options and can almost pick and choose which role it wants to play and which revenues along the value chain it wants to capture. A few keywords may suffice for the role options: hardware supplier (smartphones, readers), trusted service manager, white-label platform, or "world bank." The company has the greatest "game changer" potential thanks to its financial strength, its outstanding customer base (consumers and merchants), its experience with complex innovation problems, its history, and the moves outlined above. That said, the company will not dictate the market unilaterally in the future; it will have to react to developments in its most important target markets, such as China. If the largest mobile payment industries settle on NFC technology, the company will have to respond. The patents provide helpful pointers here.

Expectations

  • Apple will integrate NFC into its devices (smartphone/tablet). (Arguments: the patents, China, and the growing penetration of NFC-capable devices and readers through 2016.)
  • With BLE/iBeacons, Apple has created additional options in marketing, mobile payment, and the Internet of Things.
  • Apple will not develop into a "world bank." This would neither be successful nor, given the heavy regulatory requirements imposed on banks worldwide, make business sense.
  • Apple will try to translate its current position into central roles in the coming mobile payment ecosystem, ideally as a white-label platform provider.

Recommendations for banks

Short term:

  • Design, shaping, and implementation of a digital strategy
  • Introduction and expansion of mobile banking services
  • Design of an independent mobile payment strategy
  • Corporate strategy
  • Driving the "mobile readiness" of one's own company
  • A consortium within the industry or across industries
  • Close cooperation with national and European regulators

Medium term:

  • Introduction of a mobile payment ecosystem
  • Introduction of a mobile payment solution

This article is based on a talk given on March 18, 2014 at the "Next Generation Payments" conference of the Bankingclub in Cologne.


About the author: Thomas Lerner is a management consultant, author, trainer, and speaker on mobile marketing, banking, payment, and banking services. His book "Mobile Payment" was published in English in December 2013. His new book, "Mobile Marketing / Mobile Banking," will appear in the summer of 2014.

Source: http://www.mobile-zeitgeist.com/2014/03/27/apples-schachzuege-im-mobile-payment-eine-analyse-teil-1/ and http://www.mobile-zeitgeist.com/2014/03/28/apples-schachzuege-im-mobile-payment-eine-analyse-teil-2/

Performance Reviews: The Best Tips for Managers and Employees

At least once a year it is on the agenda: the performance review. And very often, neither boss nor employee enjoys it. No wonder: it requires good preparation, is exhausting, and at times unpleasant, especially when you have to tell a colleague where he stands professionally versus where he could stand, and that he needs to try harder. In this article you will learn what to keep in mind so that such a conversation becomes a real success, from two perspectives: we first slip into the role of the boss, then into that of the employee, and give you valuable tips for better performance reviews (PDF). Incidentally, it always pays to know both perspectives, so keep reading even if you are merely a long-suffering employee...

 

Whether annual performance reviews are perceived as helpful by everyone involved naturally depends on several factors. The two most important are the preparation and the objectives of the manager and the employee, respectively. Please never walk into such a conversation trusting to luck; that can only end in a debacle! Beyond that, your inner attitude and the content and course of the conversation are also decisive: if you go in with the attitude "That guy has no idea about my work anyway!" or "Let's just get this over with," then certainly nothing useful will come out of it. So be open, no matter which side of the desk you sit on, and see the occasion as an opportunity, even if only to talk to each other again for once...

But of course it would be better like this:

How bosses conduct better performance reviews

Poor preparation, no structure, unprofessional remarks: the list of typical mistakes a manager can make here is long and hair-raising. The result: frustration on both sides. Yet employee motivation is the most important thing in such a conversation. Afterwards, an employee should know where he stands and be able to leave the room with his head held high. Structured preparation helps you achieve this goal, regardless of whether you are conducting a recognition, admonition, or conflict conversation.

Before the conversation with the employee

The main component of this phase is preparing the conversation. Plan the organizational parameters (PDF) of the meeting: its time, duration, and place. But also narrow the conversation down in terms of content; that helps you stay on topic and not digress. Ask yourself what kind of conversation you want to conduct, for example a feedback or a salary conversation. What do you want to address? Is your focus on task-related points or on the employee's soft skills and behavior? Do documents, such as target agreements, need to be prepared? Prepare all of this carefully and write down a few notes you can use as a guide during the conversation.

During the conversation with the employee

During the meeting, the conversation can take unexpected turns. You should be mentally prepared for that too, and it is important to keep a cool head. If you stick to the most important rules and behaviors, you can head off difficult situations in advance and resolve points of conflict well. The following list shows what you can do:

Create the right atmosphere. With a good, relaxed opening you create a positive mood that opens up your counterpart and awakens his receptiveness to criticism (and, for that matter, your own).
Narrow down the topics. Focus on the essential content and do not digress. On this basis you can hold a fact-based discussion and agree on concrete measures. When employees see that a conversation leads to something tangible, they will most likely leave it more motivated.
Share your perceptions. As in any interpersonal relationship, it always pays to tell the employee your personal perceptions. With sentences like "This is how I perceive it" or "This is the impression it makes on me," people often do not feel directly attacked. This device helps you clear up problems without your counterpart immediately blocking or shutting down.
Address problems. If you have more than just praise to hand out, there is always room to raise problems. Eliminating mistakes can lead, in the long run, to a better working atmosphere between you and your employees and to higher productivity. You simply need the courage to bring the problems up. Just do not overdo it: a golden mean and a sure touch have the better long-term effect.
Be fair. You will not motivate anyone by denouncing only his weaknesses and mistakes. As with any good feedback, you should first address the positives and the strengths and only then move on to the employee's mistakes. If you strike a balance between praise and criticism, your counterpart will feel treated fairly.
Offer suggestions. Criticism can be more or less extensive. What matters is that, alongside the points worthy of criticism, you also offer suggestions for improvement. Such pointers can accompany praise as well; after all, it is not impossible that good things can be made even better.
Create transparency. Show your employees how you arrived at your opinion or judgment. This is particularly important when evaluating staff. If your counterpart can follow how you formed your view, he will also understand and accept your judgment better.
Address social skills. Such conversations are often about numbers: What was delivered by when, and what was not? Which target was met, and to what degree? Mostly these are task-related goals. But do not forget the "soft factors"! Are you satisfied with his communication skills? By addressing his social competence you show him that you see him as a whole person and do not measure his performance by numbers alone.
Define goals. Whatever kind of conversation you have with your employee, setting goals belongs in every one. Targets give your people an anchor point to keep in mind during their work. The SMART rule (PDF) can guide you in setting goals properly.
Record the results. Talking a lot helps a lot? Not quite! Important outcomes of the review, such as the goals defined and other agreements, should definitely be put in writing. On the task side these may be the new targets; after a conflict conversation, the minutes can record the agreed solutions. The purpose of such a document is to remind both the boss and the employee of the conversation's key points after some time has passed. There is a reason for the saying: "Paper is patient."

After the conversation with the employee

In the weeks after the review, check whether the agreed goals are being pursued and the agreements kept. Make sure your employees do not let new projects, technologies, or customers distract them from their goals. Otherwise, bring up the agreements once more; if need be, you will have to motivate your employees anew.

How employees can get more out of the conversations

Before the conversation with the manager

The employee, too, needs to prepare for a conversation with the boss. Likewise, clarify your goals for the conversation beforehand: Is it about agreeing on new targets, or about planning your professional development? What do you want to achieve? Where do you want to be in the short term and the long term? Such goals help you classify the conversation afterwards as a success or a failure.

Beyond that, you should brace yourself for criticism from the boss. Before the conversation, look inward and consider what about your recent performance deserved criticism. Be honest with yourself, not too optimistic. But also think about how you can do better in the future; that is what the boss wants to hear, not justifications or excuses. This mental preparation will also help you process the criticism with more composure.

During the conversation with the manager

In the conversation with your boss, you can score points with various behaviors and steer the discussion in your favor. The following recommendations can help:

Show interest. Tell your boss what you enjoy, and make clear how your interests can be used for the company's future development. This can prove important with regard to training opportunities.
Prepare suggestions for improvement. Besides your performance over the past months, you can also convince your boss of your quality with new ideas, concepts, and suggestions for improvement. If your boss asks for them, you should be prepared in any case.
Demonstrate loyalty. Make your manager aware that you are committed to the company and fully behind its goals. Loyalty is expected of employees, but by no means all of them show it, so above-average loyalty can make you stand out positively.
Insist on an equal footing. Make your successes for the company clear. You deliver for the firm day after day and therefore have a right to be evaluated. In a performance review you meet your boss at eye level, not as a supplicant or an apprentice (unless you really are still in training).
Stay objective. A review can turn turbulent if you use it to unload anger, frustration, and fear on your manager. No employee has ever changed a boss's mind by yelling at him. Leave your emotions outside the conference-room door and express your feelings in a matter-of-fact way instead.
Prepare your own criticism. Some bosses also want feedback from their employees. So be prepared to briefly outline your positive and negative perceptions of your manager. With the negative ones, though, please be more restrained and diplomatic.
Be specific. In the conversation with your boss you can highlight your strengths and show that you are an asset to the company. Go easy on generic sales patter and use concrete examples!
Address the working relationship. How much freedom do you have for independent work? How often does your boss check your work? Do you need less, or perhaps even more, feedback from your boss? The performance review is a good moment to redefine the long-term working relationship.

After the conversation with the manager

After the conversation, you should by all means pursue the agreed goals and try to fulfill them as well as you can. If you lead with good performance, you will be in a comfortable position to demand the boss's side of the agreements as well (for example, training measures). If necessary, remind your manager of his part of the bargain.

Should your working conditions later change in terms of personnel (a colleague quits) or subject matter (a new project), you should also raise the question of adjusting the existing agreements. What good are goals and agreements if the conditions or the focus of the work have changed?

Beyond that, you should of course gradually start preparing for the next conversation with the boss. After the review is, after all, before the review, and it comes around every year.

Source: http://karrierebibel.de/mitarbeitergesprache-tipps-fur-vorgesetzte-und-mitarbeiter/

 

This Woman Invented a Way to Run 30 Lab Tests on Only One Drop of Blood


Phlebotomy. Even the word sounds archaic—and that’s nothing compared to the slow, expensive, and inefficient reality of drawing blood and having it tested. As a college sophomore, Elizabeth Holmes envisioned a way to reinvent old-fashioned phlebotomy and, in the process, usher in an era of comprehensive superfast diagnosis and preventive medicine.

That was a decade ago. Holmes, now 30, dropped out of Stanford and founded a company called Theranos with her tuition money. Last fall it finally introduced its radical blood-testing service in a Walgreens pharmacy near company headquarters in Palo Alto, California. (The plan is to roll out testing centers nationwide.) Instead of vials of blood—one for every test needed—Theranos requires only a pinprick and a drop of blood. With that they can perform hundreds of tests, from standard cholesterol checks to sophisticated genetic analyses. The results are faster, more accurate, and far cheaper than conventional methods.

The implications are mind-blowing. With inexpensive and easy access to the information running through their veins, people will have an unprecedented window on their own health. And a new generation of diagnostic tests could allow them to head off serious afflictions from cancer to diabetes to heart disease.

None of this would work if Theranos hadn't figured out how to make testing transparent and inexpensive. The company plans to charge less than 50 percent of the standard Medicare and Medicaid reimbursement rates. And unlike the rest of the testing industry, Theranos lists its prices on its website: blood typing, $2.05; cholesterol, $2.99; iron, $4.45. If all tests in the US were performed at those kinds of prices, the company says, it could save Medicare $98 billion and Medicaid $104 billion over the next decade.

What was your goal in starting a lab-testing company?

We wanted to make actionable health information accessible to people everywhere at the time it matters most. That means two things: being able to detect conditions in time to do something about them and providing access to information that can empower people to improve their lives.

There are a billion tests done every year in the United States, but too many of them are done in the emergency room. If you were able to do some of those tests before a person gets checked into the ER, you’d start to see problems earlier; you’d have time to intervene before a patient needed to go to the hospital. If you remove the biggest barriers to these tests, you’ll see them used in smarter ways.

What was your motivation to launch Theranos at the age of 19? What set you on this road?

I definitely am afraid of needles. It’s the only thing that actually scares me. But I started this company because I wanted to spend my life changing our health care system. When someone you love gets really sick, most of the time when you find out, it’s too late to be able to do something about it. It’s heartbreaking.

You’re not alone in your fear of needles.

Phlebotomy is such a huge inhibitor to people getting tested. Some studies say that a substantive percentage of patients who get a lab requisition don’t follow through, because they’re scared of needles or they’re afraid of worrying, waiting to hear that something is wrong. We wanted to make this service convenient, to bring it to places close to people’s homes, and to offer rapid results.

Why the focus on rapid results?

We can get results, on average, in less than four hours. And this can be very helpful for doctors and patients, because it means that someone could, for example, go to a Walgreens in the morning to get a routine test for something their doctor is tracking, and the physician can have the results that afternoon when they see the patient. And we’re able to do all the testing using just a single microsample, rather than having to draw a dedicated tube for each type of test.

So if I got a blood test and my doctor saw the results and wanted other tests done, I wouldn’t have to have more blood drawn?

Exactly. And on their lab form, the physician can write, “If a given result is out of range, run this follow-up test.” And it can all be done immediately, using that same sample.
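The reflex-testing rule described here can be sketched in a few lines of code. This is purely an illustrative sketch, not Theranos software; the test names, reference ranges, and follow-up mappings are hypothetical examples.

```python
# Hypothetical sketch of the "reflex testing" rule described above:
# if a result falls outside its reference range, automatically queue
# a follow-up test on the same sample. All names and ranges here are
# illustrative, not clinical guidance.

REFERENCE_RANGES = {
    "TSH": (0.4, 4.0),      # thyroid-stimulating hormone, mIU/L
    "glucose": (70, 99),    # fasting glucose, mg/dL
}

FOLLOW_UPS = {
    "TSH": ["free T4"],     # abnormal TSH -> check free T4
    "glucose": ["HbA1c"],   # abnormal glucose -> check HbA1c
}

def reflex_tests(results: dict) -> list:
    """Return follow-up tests triggered by out-of-range results."""
    queued = []
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if not (low <= value <= high):
            queued.extend(FOLLOW_UPS.get(test, []))
    return queued

print(reflex_tests({"TSH": 6.2, "glucose": 85}))  # ['free T4']
```

Because the rule runs against results from the same microsample, the follow-up can be performed without calling the patient back for another draw, which is the workflow the interview describes.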

Some conventional tests, like pH assays, can be done quickly. Others, like those that require culturing bacteria or viruses, can take days or even weeks. Are there some tests that take Theranos longer? Can everything really be turned around in four hours?

Yes. We had to develop assays or test methodologies that would make it possible to accelerate results. So we do not do things like cultures. In the case of a virus or bacteria, traditionally tested using a culture, we measure the DNA of the pathogen instead so we can report results much faster.

Where do you see this making a big difference?

Fertility testing is a good example. Most people pay for it out of pocket, and it can cost as much as $2,000. These tests provide the data you need to figure out someone’s fertility, and some women can’t afford them. Our new fertility panel is going to cost $35. That means women will be able to afford the tests. They’ll be able to better manage the process and take some of the stress out of trying to conceive.

What are you doing to ensure the accuracy of your testing?

The key is minimizing the variability that traditionally contributes to error in the lab process. Ninety-three percent of error is associated with what’s called pre-analytic processing — generally the part of the process where humans do things.

Such as?

Manually centrifuging a sample, or letting too much time elapse before testing it, which brings the sample’s decay rate into play.

So how do you avoid these potential errors?

There’s no manual handling of the sample, no one is trying to pipette into a Nanotainer, no one is manually processing it. The blood is collected and put into a box that keeps it cold. The very next thing that happens is lab processing, and that’s done with automated devices at our centralized facility with no manual intervention or operation.

How can improved processes actually save lives?

We’ve created a tool for physicians to look at lab-test data over time and see trends. We don’t usually think about lab data this way today. It’s “Are you in range, or are you out of range?” Instead, we like to think, “Where are you going?” If you showed me a single frame from a movie and asked me to tell you the story, I wouldn’t be able to do it. But with many frames, you can start to see the movie unfold.
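The "frames of a movie" idea can be made concrete with a small sketch: a value can sit inside the reference range at every single draw and still be drifting steadily toward its limit, and a simple trend estimate over successive visits catches what any one-off in-range check misses. The numbers and visit spacing below are illustrative assumptions, not real patient data.

```python
# Illustrative sketch (not Theranos code): a least-squares slope over
# successive draws flags a drift that a single in-range value hides.

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Fasting glucose (mg/dL) over five visits: every value is "in range"
# (70-99), but the trend is climbing toward the diabetic threshold.
visits = [0, 1, 2, 3, 4]          # visit index (e.g., months apart)
glucose = [82, 86, 89, 93, 97]

in_range = all(70 <= g <= 99 for g in glucose)
trend = slope(visits, glucose)    # mg/dL per visit

print(in_range)  # True: each single "frame" looks normal
print(trend)     # 3.7: the "movie" shows a steady climb
```

A range check on the last value alone reports nothing unusual; the trend makes the direction of travel visible, which is the early-warning use case the interview describes for conditions like type 2 diabetes.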

How else can you use this technology?

Many, many years of work went into making this possible. We started our business working with pharmaceutical companies. Because we made it possible to get data much faster, they could use our infrastructure to run clinical trials. They were also able to run what’s called an adaptive clinical trial, where based on the data, they could change the dosing for a patient in real time or in a predetermined way, as opposed to waiting a long period and then deciding to change a dose.

In the long run, what impact will your technology have?

The dream is to be able to help contribute to the research that’s going on to identify cancer signatures as they change over time, to help intervene early enough to do something about an illness.

Will people become more used to gathering and examining their own health data?

No one thinks of the lab-testing experience as positive. It should be! One way to create that is to help people engage with the data once their physicians release it. You can’t do that if you don’t understand why you’re getting certain tests done, or what the results mean when you get them back.

It drives me crazy when people talk about the scale as an indicator of health, because your weight doesn’t tell you what’s going on at a biochemical level. What’s really exciting is when you can begin to see changes in your lifestyle appear in your blood data. With some diseases, like type 2 diabetes, if people get alerted early they can take steps to avert getting sick. By testing, you can start to understand your body, understand yourself, change your diet, change your lifestyle, and begin to change your life.

Source: http://www.wired.com/wiredscience/2014/02/elizabeth-holmes-theranos/