Ethical question leaves potential buyers torn over self-driving cars, study says
Faced with two deadly options the public want driverless vehicles to crash rather than hurt pedestrians – unless the vehicle in question is theirs
In catch-22 traffic emergencies where there are only two deadly options, people generally want a self-driving vehicle to, for example, avoid a group of pedestrians and instead slam itself and its passengers into a wall, a new study says. But they would rather not be travelling in a car designed to do that.
The findings of the study, released on Thursday in the journal Science, highlight just how difficult it may be for auto companies to market those cars to a public that tends to contradict itself.
“People want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs,” Iyad Rahwan, a co-author of the study and a professor at MIT, said. “And car makers who offer such cars will sell more cars, but if everybody thinks this way then we end up in a world in which every car will look after its own passenger’s safety … and society as a whole is worse off.”
Through a series of online surveys, the authors found that people generally approve of cars that sacrifice their passengers for the greater good, such as sparing a group of pedestrians, and would like others to buy those cars, but they themselves would prefer to ride in a car that protects its passengers at all cost.
Several people working on bringing self-driving cars to market said that while the philosophical and ethical question over the two programming options is important to consider, real-life situations would be far more complex.
Brian Lathrop, a cognitive scientist who works on Volkswagen’s self-driving cars project, stressed that in real life there are likelihoods and contingencies that the academic example leaves out.
“You have to make a decision that the occupant in the vehicle is always going to be safer than the pedestrians, because they’re in a 3,000lb steel cage with all the other safety features,” said Lathrop, who was not involved in the new study.
So in a situation in which a car needs to, say, slam into a tree to avoid hitting a group of pedestrians, “obviously, you would choose to program it to go into the tree,” he said.
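The tradeoff Lathrop describes can be sketched as a toy expected-harm calculation. This is purely illustrative (the probabilities and the simple linear model are invented for the example; no real vehicle decides on numbers this crude), but it shows why "go into the tree" falls out of a harm-minimizing rule when the occupant is far better protected than the pedestrians:

```python
# Toy expected-harm model for the two-option emergency described above.
# All numbers are invented for illustration; real systems weigh far more factors.

def expected_harm(p_injury, people):
    """Expected number of injuries for one crash option."""
    return p_injury * people

def choose_option(options):
    """Pick the option with the lowest expected harm.
    options maps a name to (injury probability, people affected)."""
    return min(options, key=lambda name: expected_harm(*options[name]))

# The occupant in a "3,000lb steel cage" has a lower injury probability
# hitting a tree than unprotected pedestrians have of being struck.
options = {
    "hit_tree": (0.2, 1),         # 20% chance of injuring the 1 occupant
    "hit_pedestrians": (0.9, 3),  # 90% chance of injuring each of 3 pedestrians
}
print(choose_option(options))  # -> hit_tree
```

The study's point is that buyers approve of this rule in the abstract but resist it when the single occupant is themselves.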
A spokesman for Google, whose self-driving car technology is generally seen as being the furthest along, suggested that asking about hypothetical scenarios might ignore the more important question of how to avoid deadly situations in the first place.
The problem seems to be how to get people to trust cars to consistently do the right thing if we’re not even sure we want them to do what we think is the right thing.
The study’s authors argue that since self-driving cars are expected to drastically reduce traffic fatalities, a delay in adopting the new technology could itself be deadly. Regulations requiring self-driving cars to sacrifice their passengers could move things forward, they write. But, in another catch-22, forcing the self-sacrificing programming could actually delay widespread adoption by consumers.
Susan Anderson, an ethicist at the University of Connecticut, and her husband and research partner, Michael Anderson, a computer science professor at the University of Hartford, believe the cars will be able to make the right call.
“We do believe that properly programmed machines are likely to make decisions that are more ethically justifiable than humans,” they said in an email. “Also, properly programmed self-driving cars should have information that humans may not readily have,” including precise stopping distance, whether it is better to swerve or brake, and the likelihood or degree of harm.
How to get those cars “properly programmed”? The Andersons, who were not involved in the study, suggest having the cars learn from or be given “general ethical principles from applied ethicists”.
Despite what science-fiction wisdom says, talking to your computer is not normal. Sitting in the middle of a modern, open floor-plan office and saying “Hello, Computer,” will garner some head-turns and a few scowls.
No matter. Companies like Microsoft, Amazon and Apple are convinced we want to talk to everything, including our desktop and laptop computers. Side-eye looks be damned.
Which brings us to today. Almost a year since Microsoft brought Cortana to Windows 10, Apple is following suit with Siri for the newly rechristened macOS.
Windows 10 with Cortana is, obviously, a shipping product, while macOS with Siri integration is in early beta. Even so, I can’t look at Siri’s first desktop jaunt in a vacuum, so when Apple supplied me with a MacBook running the beta of macOS Sierra (due to come to consumers in the fall), I compared the two desktop-based voice assistants. As you might surmise, they’re quite similar, but they have significant and strategic differences.
Where did they come from?
Siri arrives on the desktop as the oldest of the growing class of digital assistants, appearing first on the iPhone 4S in 2011. It’s long been rumored that it would eventually come to the Mac, so no one was surprised when Apple announced exactly that earlier this month at its Worldwide Developers Conference.
Cortana (which was named for the synthetic intelligence in Microsoft’s popular Halo game series) arrived with Windows 10 in 2015, a year after the digital assistant’s formal introduction on Windows Phone at the 2014 Microsoft Build conference.
Like Cortana, Siri has a permanent place on the macOS desktop. Actually, it has two: a tiny icon in the upper right corner and another in the macOS dock. Both launch the familiar Siri “waiting to help you” wave.
On Windows, Cortana sits next to the Start button. It has a circular halo icon and, next to that, the ever-present “Ask me anything” box.
It’s at this point that the two assistants diverge. Cortana is a voice assistant, but, by default, it’s a text-driven one. Most people who use it will type something into the Cortana box. If you want to speak to Cortana — as I did many times for this article — you have to click the little microphone icon on the right side of the Cortana box.
Importantly, you can put Cortana in an always-listening mode, so it (she?) will wake when you say “Hey Cortana.” Even though you can also wake the mobile Siri with “Hey Siri,” macOS offers no such always-listening feature. For the purposes of this comparison, I left “Hey Cortana” off.
Siri is a voice assistant. It has no text box. A click on either Siri icon opens the same black box in the upper right-hand side of the macOS desktop (it actually slides in from offscreen — a nice touch). As soon as you hit that button, Siri is listening, waiting for you to ask a question.
Sitting right next to Siri is Spotlight, which last year got a significant update. It’s a universal search that can pore over your Mac, the Web, iTunes, the App Store and Maps.
So while Microsoft’s Cortana combines universal search with the digital assistant, Apple’s drawn a line between the two — sort of. Spotlight can perform many of the same searches as Siri. However, if you type a question into Spotlight, it may launch Siri. A trigger word appears to be “What’s.”
I really don’t know why Apple chose to keep Spotlight and Siri separate, but they may reconsider in future versions of macOS.
Battle of the assistants
It’s early days for Siri on the desktop, but I’m already impressed with its performance and intelligence — especially as it compares to Microsoft’s Cortana.
To test the two voice assistants, I first closed my office door. I wanted to speak in a normal voice and didn’t want to attract any annoyed stares.
Both Siri on macOS and Cortana start by asking you to open up your privacy settings a bit. They simply do their jobs better if they know where you are. So I followed Siri’s instructions and turned on location services on the macOS.
Here’s something else Siri on macOS and Cortana have in common: Both can tap into your system to, for example, find files and make system-level adjustments, but they’re both pretty inconsistent. Siri on macOS, obviously, is still a work in progress, so take these criticisms with a grain of salt. Even so, I suspect that there will, at least for some time, be limits to what Siri can do even after the formal macOS launch, especially as long as Spotlight survives.
When I asked Siri to “increase my screen brightness,” it opened a System Preferences: Brightness slider box within Siri and told me “I made the screen a little brighter.”
When I asked Cortana the same question, it opened a Bing search result inside the Cortana box, which told me how to adjust screen brightness, but didn’t do it for me.
On the other hand, when I told Cortana to turn off my Wi-Fi, it did: it returned a message of “Wi-Fi is now off” and showed the setting to confirm.
Siri can turn off Wi-Fi, too, but doing so also renders Siri for macOS useless. Unlike Cortana, it needs an Internet connection to work, which means once Siri on macOS has turned it off, you can’t use it to turn Wi-Fi back on. Even if you turn off network connectivity, Cortana will still be able to search your system.
Siri and Cortana excel at natural-language queries (asking questions in sentences), but Siri comes across as the smarter system.
It’s easy to check your schedule through both systems — you just need to ask one of them about your next appointment. However, Siri goes a big step further.
When I asked it about my next appointment, it showed me one for Thursday at 11:00 a.m. I then clicked the microphone icon below the calendar result and asked Siri, “Can you move that to 11:10?” Siri responded, “Okay, I’ll make that change to your event. Shall I reschedule it?” It then offered the option of confirming the change or cancelling it with my voice. Siri on macOS actually maintains the context between queries — that feels more like the future.
When I asked Cortana to make a similar change, it sent me to a Bing search result. (By the way, both voice assistants use Bing and neither will let you change it to Google.)
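The context carryover that makes the Siri exchange work — "move that to 11:10" resolving against the appointment it just showed — can be sketched as a tiny bit of dialog state. This is an illustrative toy (neither Siri's nor Cortana's internals are public; the class and its methods are invented for the example): the assistant simply remembers the last entity it displayed and resolves "that" against it.

```python
# Minimal sketch of cross-query context: "that" resolves to the last result.
# Purely illustrative; not how either assistant is actually implemented.

class Assistant:
    def __init__(self):
        self.last_entity = None  # most recent result shown to the user

    def ask_next_appointment(self, calendar):
        self.last_entity = calendar[0]  # remember what we showed
        return self.last_entity

    def handle(self, utterance):
        if "that" in utterance and self.last_entity is not None:
            # Resolve the pronoun against the remembered entity;
            # naively treat the last word as the new time.
            new_time = utterance.rsplit(" ", 1)[-1]
            self.last_entity["time"] = new_time
            return f"Okay, moved '{self.last_entity['title']}' to {new_time}"
        return "Sorry, I don't know what 'that' refers to."

bot = Assistant()
bot.ask_next_appointment([{"title": "Standup", "time": "11:00"}])
print(bot.handle("Can you move that to 11:10"))
# -> Okay, moved 'Standup' to 11:10
```

A Bing search result, by contrast, is the failure mode where that remembered state is simply absent.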
The level of conversational prowess in Siri could be a real game-changer and certainly puts Microsoft on notice.
Cortana and Siri on macOS both boast system access, but Siri does a better job of keeping track of system specs. I can ask about the speed of my system and how much iCloud storage I have left in Siri. Cortana, unfortunately, has no clue about my OneDrive storage and when I asked “How fast is my PC?” I only got a Bing search result.
Where’s my stuff and who are you?
Siri and Cortana each do a good job of finding system files that contain a keyword. For both, I asked, “Find me files with [keyword],” and they both quickly showed me local, relevant results. Siri, however, excels at making results persistent. You can pin whatever you find to the notification center.
Similarly, both voice assistants do a good job of finding images, but only Siri on macOS lets me drag and drop one of the image results into a document or email. When I tried to do the same thing with a Cortana result, it only dragged and dropped the HTML for the original query.
Siri did struggle with contacts. I tried initiating a text and got stuck in a sort of infinite loop — it just kept going back to asking me which of my duplicate contacts I wanted to text. This felt like a pre-release bug.
No winners yet
Since Apple is still working on Siri for macOS, it’s way too soon to crown a voice-assistant champion. Even so, Siri on macOS is already faster (Cortana’s voice recognition seems plodding by comparison) and it’s already outstripping Cortana on the intelligence front. On the other hand, Cortana truly shines when you can type into it, a feat impossible in Siri for macOS, unless you start in Spotlight and use one of the magic words to auto-launch Siri.
Microsoft, of course, has its own big Cortana update in the wings as part of the Windows 10 Anniversary Update due later this summer. It will increase Cortana’s intelligence and utility (order plane tickets, shop), but based on what I’ve seen in Siri for macOS, it may only help Cortana achieve parity on some features, while still leaving it trailing in others.
When Apple released a preview version of iOS 10 at its annual developers conference last week, the company slipped in a surprise for security researchers — it left the core of its operating system, the kernel, unencrypted.
“The kernel cache doesn’t contain any user info, and by unencrypting it we’re able to optimize the operating system’s performance without compromising security,” an Apple spokesperson told TechCrunch.
Apple has kept the inner workings of the kernel obfuscated by encryption in previous versions of iOS, leaving developers and researchers in the dark. The kernel manages security and limits the ways applications on an iPhone or iPad can access the hardware of the device, making it a crucial part of the operating system.
Although encryption is often thought to be synonymous with security, the lack of encryption in this case doesn’t mean that devices running iOS 10 are less secure. It just means that researchers and developers can poke around in the kernel’s code for the first time, and any security flaws will come to light more quickly. If flaws are revealed, they can be quickly patched.
Leaving the kernel unencrypted is a rare move of transparency for Apple. The company is so notoriously secretive about its products that some security experts speculated in the MIT Technology Review that the lack of encryption in the kernel was accidental. But such a mistake would be so shocking as to be practically unbelievable, researchers said. “This would have been an incredibly glaring oversight, like forgetting to put doors on an elevator,” iOS security expert Jonathan Zdziarski told the MIT Technology Review.
Apple has begun to shift towards greater transparency, particularly on security issues, in the wake of its battle with the FBI over unlocking an iPhone used by the San Bernardino shooter. When the FBI attempted to compel Apple to unlock the phone, CEO Tim Cook penned a rare open letter to Apple’s customers, explaining his decision to resist. “We feel we must speak up in the face of what we see as an overreach by the U.S. government,” Cook wrote. (The FBI eventually dropped its request after paying a third party to break into the device.)
Opening up the kernel’s code for inspection could weaken the market for security flaws like the one the FBI is presumed to have used to get into the San Bernardino iPhone. If flaws are revealed quickly and widely, it will reduce the prices law enforcement and black markets will pay for them — and it could mean quicker fixes for Apple’s customers.
Automakers have spent the majority of 2016 announcing their plans for self-driving and the future of automation, but while some are just beginning to prototype systems, others are soaring ahead of the pack.
Research and advisory firm Lux Research has charted the 12 major automakers on business execution and technical value, and noted if the company has a positive or negative view on the advent of self-driving.
Toyota, Honda, and Mercedes-Benz are ahead right now, as you can see in the graph above. Tesla and BMW aren’t far behind, but the report claims that the two companies have a “wait and see” attitude to self-driving, rather than actively pushing for its arrival. The attitude is based on investments, partnerships, and demonstrated capability.
Daimler Trucks and Hyundai are the other two automakers in the top right on technical value and business execution. German automaker Audi has a decent technical value rating, but lacks the investment or business execution its German rivals BMW and Mercedes-Benz have achieved.
The two major automakers in the U.S.—General Motors and Ford—have similarly poor outlooks. The two companies are lower than all European rivals on technical value and business execution, apart from Renault-Nissan, which is far behind the group.
Self-driving car R&D market is white hot
General Motors has started spending heavily in the self-driving market, investing $500 million in a partnership with ridesharing app Lyft and purchasing Cruise Automation for $1 billion in March. Ford, on the other hand, may be looking to partner with Google to fix some of its self-driving shortcomings.
While it is worrying to see companies like Renault-Nissan and Audi not invest in self-driving as much as rivals, we are still three years away from any concrete legislation that allows driverless cars on the road. That is enough time for any automaker to change their attitude towards self-driving.
It’s not your imagination: Millennials really are glued to their smartphones.
Nearly four in 10 millennials (39%) say they interact more with their smartphones than they do with their significant others, parents, friends, children or co-workers, according to a survey of more than 1,000 people released Wednesday by Bank of America. That’s compared with fewer than one in three people of all ages who say they engage with their smartphones more.
This means that, on an average day, millennials — defined here as being ages 18 to 34 — “interact with their smartphone more than anything or anyone else,” the survey concluded.
This may not surprise anyone who has looked at millennial smartphone usage. More millennials (77%) own smartphones — and spend more time on them (over two hours a day) — than any other age group, according to a 2014 report that examined the behavior of more than 23,000 adults, and was released by Experian. “In fact, millennials spend so much time on their smartphones that they account for 41% of the total time that Americans spend using smartphones, despite making up just 29% of the population,” the report concluded.
Furthermore, nearly half of millennials — significantly more than any older age group — say they “couldn’t live without” their smartphone, according to data released in 2015 by the Pew Research Center, a nonprofit and nonpartisan think tank in Washington, D.C.
Millennials are also far more likely to use their smartphone as a social escape: More than seven in 10 millennials say they have used their smartphone to avoid a social interaction, compared with fewer than half (44%) of others, according to the Bank of America data.
To be fair, millennials have many compelling reasons for using their smartphones: Experian data show that roughly one in five millennials (again, more than other age groups) use their phones to read the news during a typical week, and millennials are more likely than any other cohort to use their phones to stay in touch with friends. What’s more, Pew data shows that millennials are more likely than other groups to use their phones to look at educational content, find and apply for jobs and learn more about a health condition.
Look, I know you’re going to tell me that the traditional TRS headphone jack is a billion years old and prone to failure and that life is about progress and whatever else you need to repeat deliriously into your bed of old HTC extUSB dongles and insane magnetic Palm adapters to sleep at night. But just face facts: ditching the headphone jack on phones makes them worse, in extremely obvious ways. Let’s count them!
1. Digital audio means DRM audio

Oh look, I won this argument in one shot. For years the entertainment industry has decried what they call the “analog loophole” of headphone jacks, and now we’re making their dreams come true by closing it.
Restricting audio output to a purely digital connection means that music publishers and streaming companies can start to insist on digital copyright enforcement mechanisms. We moved our video systems to HDMI and got HDCP, remember? Copyright enforcement technology never stops piracy and always hurts the people who most rely on legal fair use, but you can bet the music industry is going to start cracking down on “unauthorized” playback and recording devices anyway. We deal with DRM when it comes to video because we generally don’t rewatch and take TV shows and movies with us, but you will rue the day Apple decided to make the iPhone another 1mm thinner the instant you get a “playback device not supported” message. Winter is coming.
2. Wireless headphones and speakers are fine, not great
I am surrounded by wireless speaker systems. (I work at The Verge, after all.) And while they mostly work fine, sometimes they crackle out and fail. It sucks to share a wireless speaker among multiple devices. Bluetooth headphones require me to charge yet another battery. You haven’t known pain until you’ve chosen to use Bluetooth audio in a car instead of an aux jack.
3. Dongles are stupid, especially when they require other dongles
Shut up, you say. All of your complaints will be handled by this charming $29 dongle that converts digital audio to a standard headphone jack!
To which I will respond: here is a photo of Dieter Bohn and his beloved single-port MacBook, living his fullest #donglelife during our WWDC liveblog:
Everything is going to be great when you want to use your expensive headphones and charge your phone at the same time. You are going to love everything about that situation. You are going to hold your 1mm thinner phone and sincerely believe that the small reduction in thickness is definitely worth carrying multiple additional dongles.
Also, they’re called fucking dongles. Let’s not do this to ourselves. Have some dignity.
4. Ditching a deeply established standard will disproportionately impact accessibility
The traditional headphone jack is a standard for a reason — it works. It works so well that an entire ecosystem of other kinds of devices has built up around it, and millions of people have access to compatible devices at every conceivable price point. The headphone jack might be less good on some metrics than Lightning or USB-C audio, but it is spectacularly better than anything else in the world at being accessible, enabling, open, and democratizing. A change that will cost every iPhone user at least $29 extra for a dongle (or more for new headphones) is not a change designed to benefit everyone. And you don’t need to get rid of the headphone jack to make a phone waterproof; plenty of waterproof phones have shipped with headphone jacks already.
5. Making Android and iPhone headphones incompatible is so incredibly arrogant and stupid there’s not even explanatory text under this one
6. No one is asking for this
Raise your hand if the thing you wanted most from your next phone was either fewer ports or more dongles.
I didn’t think so. You wanted better battery life, didn’t you? Everyone just wants better battery life.
BMW turns 100 this year, and although the company has a rich history, it is looking forward with a barrage of extreme concepts to prove it will remain relevant for the next 100 years. The company owns Rolls-Royce, which recently offered a glimpse of luxury in the 22nd century, and Mini, which got a futuristic makeover. And now BMW peers into tomorrow and sees a sleek four-door with flexible skin, scissor doors, and an interior that flutters in three dimensions to communicate with you.
01 The exterior will look suitably futuristic
The silhouette of the Vision Next 100 Concept doesn’t look too out-there, and the lines bring to mind the sedan-coupe mashups you see today. Open the doors, which move upward instead of outward, and the conceptual craziness steps up a notch. Of course they open on their own, as soon as you approach. The car, which presumably runs on electricity, or perhaps hydrogen, or maybe something not yet discovered, starts with a press of the ginormous BMW logo on the dashboard.
02 The car will be capable of driving itself
Today, BMW drivers can choose Eco, Comfort, and Sport mode. In the future, they’ll choose Ease or Boost. Feeling lazy? Select Ease mode and let the computer do everything. Want to see if BMW is once again the Ultimate Driving Machine? Boost mode lets you take the wheel. Whatever the mode, nearly 800 tiny triangles throughout the cabin flutter like birds, communicating with the driver. It sounds nuts, but the video makes it look like an effective way of conveying information. Besides—it’s a concept. It’s not like it actually has to work.
03 But the human can still take over
Swiping and scrolling have all but killed buttons and knobs, so it’s no surprise you won’t find any in BMW’s concept. Instead, you’ll handle everything through “The Companion,” which looks like a big, glowing gemstone on the dash. It uses colored light and voice commands to … do something. Choose Boost mode and The Companion retracts and a steering, er, bar pops out of the dash. And this being the future, the windscreen is a giant heads-up display.
04 It’s super slippery
The wheels sit at the far corners of the car because some things won’t change in 100 years. They’re covered with a flexible skin that BMW calls Alive Geometry, a fancy way of saying the bodywork moves as the wheels turn. (BMW’s explored this before with a shape-shifting car made of fabric.) Keeping the wheels covered improves aerodynamics, and BMW says its pretend car would have a drag coefficient of 0.18, making it much slicker than the sleek BMW i8.
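To put that 0.18 figure in context: aerodynamic drag force scales linearly with the drag coefficient (F = ½ρv²C<sub>d</sub>A), so a back-of-the-envelope comparison shows what the number buys. The i8's coefficient of roughly 0.26 and the frontal area below are assumed placeholder values for illustration, not BMW specs:

```python
# Back-of-the-envelope drag comparison: F_drag = 0.5 * rho * v^2 * Cd * A
# Frontal area and the i8's Cd (~0.26) are assumed, illustrative values.

RHO = 1.225      # air density at sea level, kg/m^3
V = 130 / 3.6    # 130 km/h expressed in m/s
AREA = 2.2       # assumed frontal area in m^2

def drag_force(cd):
    """Aerodynamic drag force in newtons at speed V."""
    return 0.5 * RHO * V**2 * cd * AREA

concept = drag_force(0.18)
i8 = drag_force(0.26)
print(f"Concept: {concept:.0f} N, i8: {i8:.0f} N, "
      f"reduction: {1 - concept / i8:.0%}")
```

Because drag is linear in C<sub>d</sub>, the reduction from 0.26 to 0.18 cuts drag force by about 31% at any given speed, regardless of the assumed frontal area.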
05 And super green
To make the body pliable enough to change shape, BMW uses more of the tiny triangles that came alive when you got in the car. BMW believes the days of punching panels out of steel will end, and automakers will use recyclable materials and composites made with, say, random stuff that might otherwise go in the trash.
06 Copper is the new Rose Gold
BMW chose that shiny copper hue to underscore the idea that vehicles of the future “should appear technical yet still have a warmth about them.” Whatever. The color may be the coolest thing about this car.
07 The car is a sketchpad for the direction of the company
BMW built its reputation on quick cars that are a hoot to drive. (See also: 507, 2002, various M3s, etc.) But as the industry moves inexorably toward its autonomous future, that will become less important to most consumers—and far more important to those who see driving as more than a means of getting from Point A to Point B. This concept, as crazy as it is, shows BMW, like other automakers, hopes to serve both audiences.