Tag Archives: AI

Would you bet against sex robots? AI could leave half of the world unemployed!

Artificial intelligence could put more than half the planet’s population out of a job, a computer scientist says. Photograph: Science Picture Co./Corbis

Machines could put more than half the world’s population out of a job in the next 30 years, according to a computer scientist who said on Saturday that artificial intelligence’s threat to the economy should not be understated.

Expert Moshe Vardi told the American Association for the Advancement of Science (AAAS): “We are approaching a time when machines will be able to outperform humans at almost any task.

“I believe that society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?”

Physicist Stephen Hawking and the tech billionaires Bill Gates and Elon Musk issued a similar warning last year. Hawking warned that AI “could spell the end of the human race” and Musk said it represents “our biggest existential threat”.

The fear of artificial intelligence has even reached the UN, where a group billing itself as the Campaign to Stop Killer Robots met with diplomats last year.

Vardi, a professor at Rice University and Guggenheim fellow, said that technology presents a more subtle threat than the masterless drones that some activists fear. He suggested AI could drive global unemployment to 50%, wiping out middle-class jobs and exacerbating inequality.

Unlike the industrial revolution, Vardi said, “the AI revolution” will not be a matter of physically powerful machines that outperform human laborers, but rather a contest between human wit and mechanical intelligence and strength. In China the question has already affected thousands of jobs, as electronics manufacturers, Foxconn and Samsung among them, develop precision robots to replace human workers.

In his talk, the computer scientist alluded to economist John Maynard Keynes’ rosy vision of a future in which billions worked only a few hours a week, with intelligent machines to support their easy lifestyles – a prediction embraced wholesale by Google head of engineering Ray Kurzweil, who believes “the singularity” of super-AI could bring about utopia for a future hybrid of mankind.

Vardi insisted that even if machines make life easier, humanity will face an existential challenge.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing,” he said. “I believe that work is essential to human wellbeing.”

Computer scientist Bart Selman told reporters at the conference that as self-driving cars, “household robots, service robots” and other intelligent systems become more common, humans will “sort of be in a symbiosis with those machines, and we’ll start to trust them and start to work with them”.

Selman, a professor at Cornell University, said: “Computers are basically starting to hear and see the way humans do,” thanks to advances in big data and “deep learning”.

Vardi predicted that driving will be almost fully automated in the next 25 years, and asked, for all the benefits of technology, “what can humans do when machines can do almost everything?”

He said that technology has already massively changed the US economy in the last 50 years. “We were all delighted to hear that unemployment went down to 4.8%” this month, he said, “but focusing on the monthly job report hides the fact that for the last 35 years the country has been in economic crisis.”

Citing research from MIT, he noted that although Americans continue to drive GDP with increasing productivity, employment peaked around 1980 and average wages for families have gone down. “It’s automation,” Vardi said.

He also predicted that automation’s effect on unemployment would have huge political consequences, and lamented that leaders have largely ignored it. “We are in a presidential election year and this issue is just nowhere on the radar screen.”

He said that virtually no human profession is totally immune: “Are you going to bet against sex robots? I would not.”

Last year, the consulting firm McKinsey published research on which jobs are at risk from intelligent machines, and found that some jobs – particularly well-paid careers such as those of doctors and hedge fund managers – are better protected than others. Less intuitively, the researchers also concluded that some low-paying jobs, including landscapers and health aides, are less likely to be changed than others.

In contrast, they concluded that 20% of a CEO’s working time could be automated with existing technologies, and nearly 80% of a file clerk’s job could be automated. Their research nonetheless dovetails with Vardi’s worst-case predictions: they argued that as much as 45% of the work people are paid to do could be automated by existing technology.

Vardi said he wanted the gathering of scientists to consider: “Does the technology we are developing ultimately benefit mankind?

“Humanity is about to face perhaps its greatest challenge ever, which is finding meaning in life after the end of ‘in the sweat of thy face shalt thou eat bread’,” he said. “We need to rise to the occasion and meet this challenge.”

In the US, the labor secretary, Thomas Perez, has told American seaports that they should consider robotic cranes and automated vehicles in order to compete with docks around the world, despite the resistance of unions. In 2013, two Oxford professors predicted that as much as 47% of the US workforce, from telemarketers to legal secretaries and cooks, was vulnerable to automation.

Dire forecasts such as Vardi’s are not without their critics, including the Pulitzer-finalist author Nicholas Carr and the Stanford scientist Edward Geist. Carr has argued that human creativity and intuition in the face of complex problems are essentially irreplaceable, and an advantage over computers, whose reputation for accuracy is often overstated.

Walking the line between the pessimists and optimists, Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future, has suggested that the outcome of automation will come down to politics, telling National Geographic that if scientists and governments don’t address the issue, then “for lots of people who are not economically at the top, it’s going to be pretty dystopian”.

https://www.theguardian.com/technology/2016/feb/13/artificial-intelligence-ai-unemployment-jobs-moshe-vardi

Machine Learning and Artificial Intelligence: Soon We Won’t Program Computers. We’ll Train Them Like Dogs


BEFORE THE INVENTION of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.

Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.

The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace.

This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded.

Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.

In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)

But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.

Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
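
To make the contrast concrete, here is a minimal sketch of the two approaches in Python. It is an illustration, not code from the article: a hand-written rule stands in for traditional programming, and a simple scikit-learn classifier (a linear stand-in for a real neural network) is trained on invented cat-versus-fox feature vectors.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Traditional programming: a human writes the rule explicitly.
    def is_cat_by_rules(has_whiskers, snout_length_cm):
        # Hand-coded heuristic: cats have whiskers and short snouts.
        return has_whiskers and snout_length_cm < 5.0

    # Machine learning: show the computer labeled examples instead.
    # Toy features: [has_whiskers (0/1), ear_height_cm, snout_length_cm]
    X = np.array([[1, 3.0, 4.0],   # cat
                  [1, 3.5, 4.5],   # cat
                  [1, 4.0, 9.0],   # fox
                  [1, 4.5, 8.5]])  # fox
    y = np.array([1, 1, 0, 0])     # 1 = cat, 0 = fox

    model = LogisticRegression().fit(X, y)   # "training," not coding

    # If the model keeps misclassifying foxes, you don't rewrite any
    # rules; you add more labeled examples and call fit() again.
    print(model.predict([[1, 3.2, 4.2]]))    # -> [1], i.e. "cat"

A real cat recognizer learns from raw pixels with a deep neural network rather than three hand-picked features, but the workflow is the same: data in, corrections in, no explicit instructions.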

This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”

But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.

If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.

ANDY RUBIN IS an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”

Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.

http://www.wired.com/2016/05/the-end-of-code

Artificial intelligence assistants are taking over

It was a weeknight, after dinner, and the baby was in bed. My wife and I were alone—we thought—discussing the sorts of things you might discuss with your spouse and no one else. (Specifically, we were critiquing a friend’s taste in romantic partners.) I was midsentence when, without warning, another woman’s voice piped in from the next room. We froze.

“I HELD THE DOOR OPEN FOR A CLOWN THE OTHER DAY,” the woman said in a loud, slow monotone. It took us a moment to realize that her voice was emanating from the black speaker on the kitchen table. We stared slack-jawed as she—it—continued: “I THOUGHT IT WAS A NICE JESTER.”

“What. The hell. Was that,” I said after a moment of stunned silence. Alexa, the voice assistant whose digital spirit animates the Amazon Echo, did not reply. She—it—responds only when called by name. Or so we had believed.

We pieced together what must have transpired. Somehow, Alexa’s speech recognition software had mistakenly picked the word Alexa out of something we said, then chosen a phrase like “tell me a joke” as its best approximation of whatever words immediately followed. Through some confluence of human programming and algorithmic randomization, it chose a lame jester/gesture pun as its response.
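
One plausible way to picture that failure mode is a fuzzy wake-word check followed by nearest-neighbor matching of whatever follows against a catalog of known commands. This is purely a sketch of the general idea using Python’s standard difflib; Amazon’s actual pipeline is acoustic, far more sophisticated, and not public, and the command list here is invented.

    import difflib

    # Hypothetical catalog of phrases the assistant knows how to handle.
    KNOWN_COMMANDS = ["tell me a joke", "what's the weather",
                      "set a timer for ten minutes", "play some music"]

    def sounds_like_wake_word(word, threshold=0.7):
        # Crude text stand-in for acoustic wake-word detection.
        return difflib.SequenceMatcher(None, word, "alexa").ratio() >= threshold

    def handle(transcript):
        words = transcript.lower().split()
        for i, word in enumerate(words):
            if sounds_like_wake_word(word):
                heard = " ".join(words[i + 1:])
                # Snap whatever followed to the closest known command,
                # even when the match is weak -- hence the uninvited joke.
                match = difflib.get_close_matches(heard, KNOWN_COMMANDS,
                                                  n=1, cutoff=0.3)
                return match[0] if match else None
        return None  # no wake word heard: stay silent

    print(handle("alexus tell me about joe"))  # -> 'tell me a joke'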

In retrospect, the disruption was more humorous than sinister. But it was also a slightly unsettling reminder that Amazon’s hit device works by listening to everything you say, all the time. And that, for all Alexa’s human trappings—the name, the voice, the conversational interface—it’s no more sentient than any other app or website. It’s just code, built by some software engineers in Seattle with a cheesy sense of humor.

But the Echo’s inadvertent intrusion into an intimate conversation is also a harbinger of a more fundamental shift in the relationship between human and machine. Alexa—and Siri and Cortana and all of the other virtual assistants that now populate our computers, phones, and living rooms—are just beginning to insinuate themselves, sometimes stealthily, sometimes overtly, and sometimes a tad creepily, into the rhythms of our daily lives. As they grow smarter and more capable, they will routinely surprise us by making our lives easier, and we’ll steadily become more reliant on them.

Even as many of us continue to treat these bots as toys and novelties, they are on their way to becoming our primary gateways to all sorts of goods, services, and information, both public and personal. When that happens, the Echo won’t just be a cylinder in your kitchen that sometimes tells bad jokes. Alexa and virtual agents like it will be the prisms through which we interact with the online world.

It’s a job to which they will necessarily bring a set of biases and priorities, some subtler than others. Some of those biases and priorities will reflect our own. Others, almost certainly, will not: they will reflect the vested interests of the companies that built them, which might help to explain why these assistants seem so eager to become our friends.

* * *


In the beginning, computers spoke only computer language, and a human seeking to interact with one was compelled to do the same. First came punch cards, then typed commands such as run, print, and dir.

The 1980s brought the mouse click and the graphical user interface to the masses; the 2000s, touch screens; the 2010s, gesture control and voice. It has all been leading, gradually and imperceptibly, to a world in which we no longer have to speak computer language, because computers will speak human language—not perfectly, but well enough to get by.

We aren’t there yet. But we’re closer than most people realize. And the implications—many of them exciting, some of them ominous—will be tremendous.

Like card catalogs and AOL-style portals before it, Web search will begin to fade from prominence, and with it the dominance of browsers and search engines. Mobile apps as we know them—icons on a home screen that you tap to open—will start to do the same. In their place will rise an array of virtual assistants, bots, and software agents that act more and more like people: not only answering our queries, but acting as our proxies, accomplishing tasks for us, and asking questions of us in return.

This is already beginning to happen—and it isn’t just Siri or Alexa. As of April, all five of the world’s dominant technology companies are vying to be the Google of the conversation age. Whoever wins has a chance to get to know us more intimately than any company or machine has before—and to exert even more influence over our choices, purchases, and reading habits than they already do.

So say goodbye to Web browsers and mobile home screens as our default portals to the Internet. And say hello to the new wave of intelligent assistants, virtual agents, and software bots that are rising to take their place.

No, really, say “hello” to them. Apple’s Siri, Google’s mobile search app, Amazon’s Alexa, Microsoft’s Cortana, and Facebook’s M, to name just five of the most notable, are diverse in their approaches, capabilities, and underlying technologies. But, with one exception, they’ve all been programmed to respond to basic salutations in one way or another, and it’s a good way to start to get a sense of their respective mannerisms. You might even be tempted to say they have different personalities.

Siri’s response to “hello” varies, but it’s typically chatty and familiar:

[Screenshot: Slate]

Alexa is all business:

[Screenshot: Slate]

Google is a bit of an idiot savant: It responds by pulling up a YouTube video of the song “Hello” by Adele, along with all the lyrics.

[Screenshot: Slate]

Cortana isn’t interested in saying anything until you’ve handed her the keys to your life:

[Screenshot: Slate]

Once those formalities are out of the way, she’s all solicitude:

[Screenshot: Slate]

Then there’s Facebook M, an experimental bot, available so far only to an exclusive group of Bay Area beta-testers, that lives inside Facebook Messenger and promises to answer almost any question and fulfill almost any (legal) request. If the casual, what’s-up-BFF tone of its text messages rings eerily human, that’s because it is: M is powered by an uncanny pairing of artificial intelligence and anonymous human agents.

[Screenshot: Slate]

You might notice that most of these virtual assistants have female-sounding names and voices. Facebook M doesn’t have a voice—it’s text-only—but it was initially rumored to be called Moneypenny, a reference to a secretary from the James Bond franchise. And even Google’s voice is female by default. This is, to some extent, a reflection of societal sexism. But these bots’ apparent embrace of gender also highlights their aspiration to be anthropomorphized: They want—that is, the engineers that build them want—to interact with you like a person, not a machine. It seems to be working: Already people tend to refer to Siri, Alexa, and Cortana as “she,” not “it.”

That Silicon Valley’s largest tech companies have effectively humanized their software in this way, with little fanfare and scant resistance, represents a coup of sorts. Once we perceive a virtual assistant as human, or at least humanoid, it becomes an entity with which we can establish humanlike relations. We can like it, banter with it, even turn to it for companionship when we’re lonely. When it errs or betrays us, we can get angry with it and, ultimately, forgive it. What’s most important, from the perspective of the companies behind this technology, is that we trust it.

Should we?

* * *

Siri wasn’t the first digital voice assistant when Apple introduced it in 2011, and it may not have been the best. But it was the first to show us what might be possible: a computer that you talk to like a person, that talks back, and that attempts to do what you ask of it without requiring any further action on your part. Adam Cheyer, co-founder of the startup that built Siri and sold it to Apple in 2010, has said he initially conceived of it not as a search engine, but as a “do engine.”

If Siri gave us a glimpse of what is possible, it also inadvertently taught us about what wasn’t yet. At first, it often struggled to understand you, especially if you spoke into your iPhone with an accent, and it routinely blundered attempts to carry out your will. Its quick-witted rejoinders to select queries (“Siri, talk dirty to me”) raised expectations for its intelligence that were promptly dashed once you asked it something it hadn’t been hard-coded to answer. Its store of knowledge proved trivial compared with the vast information readily available via Google search. Siri was as much an inspiration as a disappointment.

Five years later, Siri has gotten smarter, if perhaps less so than one might have hoped. More importantly, the technology underlying it has drastically improved, fueled by a boom in the computer science subfield of machine learning. That has led to sharp improvements in speech recognition and natural language understanding, two separate but related technologies that are crucial to voice assistants.

Luke Peters demonstrates Siri, an application which uses voice recognition and detection on the iPhone 4S, outside the Apple store in Covent Garden, London, Oct. 14, 2011. Photograph: Reuters/Suzanne Plunkett


If a revolution in technology has made intelligent virtual assistants possible, what has made them inevitable is a revolution in our relationship to technology. Computers began as tools of business and research, designed to automate tasks such as math and information retrieval. Today they’re tools of personal communication, connecting us not only to information but to one another. They’re also beginning to connect us to all the other technologies in our lives: Your smartphone can turn on your lights, start your car, activate your home security system, and withdraw money from your bank. As computers have grown deeply personal, our relationship with them has changed. And yet the way they interact with us hasn’t quite caught up.

“It’s always been sort of appalling to me that you now have a supercomputer in your pocket, yet you have to learn to use it,” says Alan Packer, head of language technology at Facebook. “It seems actually like a failure on the part of our industry that software is hard to use.”

Packer is one of the people trying to change that. As a software developer at Microsoft, he helped to build Cortana. After it launched, he found his skills in heavy demand, especially among the two tech giants that hadn’t yet developed voice assistants of their own. One Thursday morning in December 2014, Packer was on the verge of accepting a top job at Amazon—“You would not be surprised at which team I was about to join,” he says—when Facebook called and offered to fly him to Menlo Park, California, for an interview the next day. He had an inkling of what Amazon was working on, but he had no idea why Facebook might be interested in someone with his skill set.

As it turned out, Facebook wanted Packer for much the same purpose that Microsoft and Amazon did: to help it build software that could make sense of what its users were saying and generate intelligent responses. Facebook may not have a device like the Echo or an operating system like Windows, but its own platforms are full of billions of people communicating with one another every day. If Facebook can better understand what they’re saying, it can further hone its News Feed and advertising algorithms, among other applications. More creatively, Facebook has begun to use language understanding to build artificial intelligence into its Messenger app. Now, if you’re messaging with a friend and mention sharing an Uber, a software agent within Messenger can jump in and order it for you while you continue your conversation.

In short, Packer says, Facebook is working on language understanding because Facebook is a technology company—and that’s where technology is headed. As if to underscore that point, Packer’s former employer this year headlined its annual developer conference by announcing plans to turn Cortana into a portal for conversational bots and integrate it into Skype, Outlook, and other popular applications. Microsoft CEO Satya Nadella predicted that bots will be the Internet’s next major platform, overtaking mobile apps the same way they eclipsed desktop computing.

* * *

Siri may not have been very practical, but people immediately grasped what it was. With Amazon’s Echo, the second major tech gadget to put a voice interface front and center, it was the other way around. The company surprised the industry and baffled the public when it released a device in November 2014 that looked and acted like a speaker—except that it didn’t connect to anything except a power outlet, and the only buttons were for power and mute. You control the Echo solely by voice, and if you ask it questions, it talks back. It was like Amazon had decided to put Siri in a black cylinder and sell it for $179. Except Alexa, the virtual intelligence software that powers the Echo, was far more limited than Siri in its capabilities. Who, reviewers wondered, would buy such a bizarre novelty gadget?

That question has faded as Amazon has gradually upgraded and refined the Alexa software, and the five-star Amazon reviews have since poured in. In the New York Times, Farhad Manjoo recently followed up his tepid initial review with an all-out rave: The Echo “brims with profound possibility,” he wrote. Amazon has not disclosed sales figures, but the Echo ranks as the third-best-selling gadget in its electronics section. Alexa may not be as versatile as Siri—yet—but it turned out to have a distinct advantage: a sense of purpose, and of its own limitations. Whereas Apple implicitly invites iPhone users to ask Siri anything, Amazon ships the Echo with a little cheat sheet of basic queries that it knows how to respond to: “Alexa, what’s the weather?” “Alexa, set a timer for 45 minutes.” “Alexa, what’s in the news?”

The cheat sheet’s effect is to lower expectations to a level that even a relatively simplistic artificial intelligence can plausibly meet on a regular basis. That’s by design, says Greg Hart, Amazon’s vice president in charge of Echo and Alexa. Building a voice assistant that can respond to every possible query is “a really hard problem,” he says. “People can get really turned off if they have an experience that’s subpar or frustrating.” So the company began by picking specific tasks that Alexa could handle with aplomb and communicating those clearly to customers.

At launch, the Echo had just 12 core capabilities. That list has grown steadily as the company has augmented Alexa’s intelligence and added integrations with new services, such as Google Calendar, Yelp reviews, Pandora streaming radio, and even Domino’s delivery. The Echo is also becoming a hub for connected home appliances: “ ‘Alexa, turn on the living room lights’ never fails to delight people,” Hart says.

When you ask Alexa a question it can’t answer or say something it can’t quite understand, it fesses up: “Sorry, I don’t know the answer to that question.” That makes it all the more charming when you test its knowledge or capabilities and it surprises you by replying confidently and correctly. “Alexa, what’s a kinkajou?” I asked on a whim one evening, glancing up from my laptop while reading a news story about an elderly Florida woman who woke up one day with a kinkajou on her chest. Alexa didn’t hesitate: “A kinkajou is a rainforest mammal of the family Procyonidae … ” Alexa then proceeded to list a number of other Procyonidae to which the kinkajou is closely related. “Alexa, that’s enough,” I said after a few moments, genuinely impressed. “Thank you,” I added.

“You’re welcome,” Alexa replied, and I thought for a moment that she—it—sounded pleased.

As delightful as it can seem, the Echo’s magic comes with some unusual downsides. In order to respond every time you say “Alexa,” it has to be listening for the word at all times. Amazon says it only stores the commands that you say after you’ve said the word Alexa and discards the rest. Even so, the enormous amount of processing required to listen for a wake word 24/7 is reflected in the Echo’s biggest limitation: It only works when it’s plugged into a power outlet. (Amazon’s newest smart speakers, the Echo Dot and the Tap, are more mobile, but one sacrifices the speaker and the other the ability to respond at any time.)

Even if you trust Amazon to rigorously protect and delete all of your personal conversations from its servers—as it promises it will if you ask it to—Alexa’s anthropomorphic characteristics make it hard to shake the occasional sense that it’s eavesdropping on you, Big Brother–style. I was alone in my kitchen one day, unabashedly belting out the Fats Domino song “Blueberry Hill” as I did the dishes, when it struck me that I wasn’t alone after all. Alexa was listening—not judging, surely, but listening all the same. Sheepishly, I stopped singing.

* * *

The notion that the Echo is “creepy” or “spying on us” might be the most common criticism of the device so far. But there’s a more fundamental problem. It’s one that is likely to haunt voice assistants, and those who rely on them, as the technology evolves and bores its way more deeply into our lives.

The problem is that conversational interfaces don’t lend themselves to the sort of open flow of information we’ve become accustomed to in the Google era. By necessity they limit our choices—because their function is to make choices on our behalf.

For example, a search for “news” on the Web will turn up a diverse and virtually endless array of possible sources, from Fox News to Yahoo News to CNN to Google News, which is itself a compendium of stories from other outlets. But ask the Echo, “What’s in the news?” and by default it responds by serving up a clip of NPR News’s latest hourly update, which it pulls from the streaming radio service TuneIn. Which is great—unless you don’t happen to like NPR’s approach to the news, or you prefer a streaming radio service other than TuneIn. You can change those defaults somewhere in the bowels of the Alexa app, but Alexa never volunteers that information. Most people will never even know it’s an option. Amazon has made the choice for them.

And how does Amazon make that sort of choice? The Echo’s cheat sheet doesn’t tell you that, and the company couldn’t give me a clear answer.

Alexa does take care to mention before delivering the news that it’s pulling the briefing from NPR News and TuneIn. But that isn’t always the case with other sorts of queries.

Let’s go back to our friend the kinkajou. In my pre-Echo days, my curiosity about an exotic animal might have sent me to Google via my laptop or phone. Just as likely, I might have simply let the moment of curiosity pass and not bothered with a search. Looking something up on Google involves just enough steps to deter us from doing it in a surprising number of cases. One of the great virtues of voice technology is to lower that barrier to the point where it’s essentially no trouble at all. Having an Echo in the room when you’re struck by curiosity about kinkajous is like having a friend sitting next to you who happens to be a kinkajou expert. All you have to do is say your question out loud, and Alexa will supply the answer. You literally don’t have to lift a finger.

That is voice technology’s fundamental advantage over all the human-computer interfaces that have come before it: In many settings, including the home, the car, or on a wearable gadget, it’s much easier and more natural than clicking, typing, or tapping. In the logic of today’s consumer technology industry, that makes its ascendance in those realms all but inevitable.

But consider the difference between Googling something and asking a friendly voice assistant. When I Google “kinkajou,” I get a list of websites, ranked according to an algorithm that takes into account all sorts of factors that correlate with relevance and authority. I choose the information source I prefer, then visit its website directly—an experience that could help to further shade or inform my impression of its trustworthiness. Ultimately, the answer comes not from Google per se, but directly from some third-party authority, whose credibility I can evaluate as I wish.

A voice-based interface is different. The response comes one word at a time, one sentence at a time, one idea at a time. That makes it very easy to follow, especially for humans who have spent their whole lives interacting with one another in just this way. But it makes it very cumbersome to present multiple options for how to answer a given query. Imagine for a moment what it would sound like to read a whole Google search results page aloud, and you’ll understand why no one builds a voice interface that way.
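
The constraint is easy to state in code. This toy sketch (my own, with invented sources and snippets) shows why a screen can hand you a ranked list while a voice channel practically forces a single default choice:

    # Invented ranked results for the query "kinkajou".
    RESULTS = [
        ("Wikipedia", "A kinkajou is a rainforest mammal of the family Procyonidae."),
        ("National Geographic", "Kinkajous live in the tropical forests of the Americas."),
        ("Britannica", "Kinkajou, also called honey bear, an arboreal mammal..."),
    ]

    def screen_answer(results):
        # A screen shows every ranked option; the user picks a source.
        return [f"{source}: {snippet}" for source, snippet in results]

    def voice_answer(results):
        # Speech is serial and slow, so the assistant speaks just one
        # answer -- in effect choosing a default source for the user.
        source, snippet = results[0]
        return snippet  # note: the source is silently dropped

    for line in screen_answer(RESULTS):
        print(line)
    print(voice_answer(RESULTS))

The last line of voice_answer is the whole argument in miniature: the spoken reply keeps the content and drops the attribution, which is exactly the habit the next paragraphs take Alexa to task for.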

That’s why voice assistants tend to answer your question by drawing from a single source of their own choosing. Alexa’s confident response to my kinkajou question, I later discovered, came directly from Wikipedia, which Amazon has apparently chosen as the default source for Alexa’s answers to factual questions. The reasons seem fairly obvious: It’s the world’s most comprehensive encyclopedia, its information is free and public, and it’s already digitized. What it’s not, of course, is infallible. Yet Alexa’s response to my question didn’t begin with the words, “Well, according to Wikipedia … ” She—it—just launched into the answer, as if she (it) knew it off the top of her (its) head. If a human did that, we might call it plagiarism.

The sin here is not merely academic. By not consistently citing the sources of its answers, Alexa makes it difficult to evaluate their credibility. It also implicitly turns Alexa into an information source in its own right, rather than a guide to information sources, because the only entity in which we can place our trust or distrust is Alexa itself. That’s a problem if its information source turns out to be wrong.

The constraints on choice and transparency might not bother people when Alexa’s default source is Wikipedia, NPR, or TuneIn. It starts to get a little more irksome when you ask Alexa to play you music, one of the Echo’s core features. “Alexa, play me the Rolling Stones” will queue up a shuffle playlist of Rolling Stones songs available through Amazon’s own streaming music service, Amazon Prime Music—provided you’re paying the $99 a year required to be an Amazon Prime member. Otherwise, the most you’ll get out of the Echo are 20-second samples of songs available for purchase. Want to guess what one choice you’ll have as to which online retail giant to purchase those songs from?


Amazon’s response is that Alexa does give you options and cite its sources—in the Alexa app, which keeps a record of your queries and its responses. When the Echo tells you what a kinkajou is, you can open the app on your phone and see a link to the Wikipedia article, as well as an option to search Bing. Amazon adds that Alexa is meant to be an “open platform” that allows anyone to connect to it via an API. The company is also working with specific partners to integrate their services into Alexa’s repertoire. So, for instance, if you don’t want to be limited to playing songs from Amazon Prime Music, you can now take a series of steps to link the Echo to a different streaming music service, such as Spotify Premium. Amazon Prime Music will still be the default, though: You’ll only get Spotify if you specify “from Spotify” in your voice command.

What’s not always clear is how Amazon chooses its defaults and its partners and what motivations might underlie those choices. Ahead of the 2016 Super Bowl, Amazon announced that the Echo could now order you a pizza. But that pizza would come, at least for the time being, from just one pizza-maker: Domino’s. Want a pizza from Little Caesars instead? You’ll have to order it some other way.

To Amazon’s credit, its choice of pizza source is very transparent. To use the pizza feature, you have to utter the specific command, “Alexa, open Domino’s and place my Easy Order.” The clunkiness of that command is no accident. It’s Amazon’s way of making sure that you don’t order a pizza by accident and that you know where that pizza is coming from. But it’s unlikely Domino’s would have gone to the trouble of partnering with Amazon if it didn’t think it would result in at least some number of people ordering Domino’s for their Super Bowl parties rather than Little Caesars.

None of this is to say that Amazon and Domino’s are going to conspire to monopolize the pizza industry anytime soon. There are obviously plenty of ways to order a pizza besides doing it on an Echo. Ditto for listening to the news, the Rolling Stones, a book, or a podcast. But what about when only one company’s smart thermostat can be operated by Alexa? If you come to rely on Alexa to manage your Google Calendar, what happens when Amazon and Google have a falling out?

When you say “Hello” to Alexa, you’re signing up for her party. Nominally, everyone’s invited. But Amazon has the power to ensure that its friends and business associates are the first people you meet.

* * *


These concerns might sound rather distant—we’re just talking about niche speakers connected to niche thermostats, right? The coming sea change feels a lot closer once you think about the other companies competing to make digital assistants your main portal to everything you do on your computer, in your car, and on your phone. Companies like Google.

Google may be positioned best of all to capitalize on the rise of personal A.I. It also has the most to lose. From the start, the company has built its business around its search engine’s status as a portal to information and services. Google Now—which does things like proactively checking the traffic and alerting you when you need to leave for a flight, even when you didn’t ask it to—is a natural extension of the company’s strategy.

As early as 2009, Google began to work on voice search and what it calls “conversational search,” using speech recognition and natural language understanding to respond to questions phrased in plain language. More recently, it has begun to combine that with “contextual search.” For instance, as Google demonstrated at its 2015 developer conference, if you’re listening to Skrillex on your Android phone, you can now simply ask, “What’s his real name?” and Google will intuit that you’re asking about the artist. “Sonny John Moore,” it will tell you, without ever leaving the Spotify app.
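
A minimal sketch of that kind of contextual resolution might look like the following. The context store, knowledge table, and matching logic are all invented for illustration; Google’s real system works over live app state and a vast knowledge graph.

    # Invented knowledge snippets for illustration.
    KNOWLEDGE = {"skrillex real name": "Sonny John Moore"}

    # Hypothetical device context, updated as apps change state.
    context = {"now_playing_artist": "Skrillex"}

    def contextual_query(question):
        q = question.lower().rstrip("?")
        # Resolve pronouns against the current context before searching.
        entity = context.get("now_playing_artist", "")
        for pronoun in ("his", "her", "their"):
            q = q.replace(f" {pronoun} ", f" {entity.lower()} ")
        q = q.replace("what's ", "", 1)
        return KNOWLEDGE.get(q.strip(), "No answer found.")

    print(contextual_query("What's his real name?"))  # -> 'Sonny John Moore'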

It’s no surprise, then, that Google is rumored to be working on two major new products—an A.I.-powered messaging app or agent and a voice-powered household gadget—that sound a lot like Facebook M and the Amazon Echo, respectively. If something is going to replace Google’s on-screen services, Google wants to be the one that does it.

So far, Google has made what seems to be a sincere effort to win the A.I. assistant race without sacrificing the virtues—credibility, transparency, objectivity—that made its search page such a dominant force on the Web. (It’s worth recalling: A big reason Google vanquished AltaVista was that it didn’t bend its search results to its own vested interests.) Google’s voice search does generally cite its sources. And it remains primarily a portal to other sources of information, rather than a platform that pulls in content from elsewhere. The downside to that relatively open approach is that when you say “hello” to Google voice search, it doesn’t say hello back. It gives you a link to the Adele song “Hello.” Even then, Google isn’t above playing favorites with the sources of information it surfaces first: That link goes not to Spotify, Apple Music, or Amazon Prime Music, but to YouTube, which Google owns. The company has weathered antitrust scrutiny over allegations that this amounted to preferential treatment. Google’s defense was that it puts its own services and information sources first because its users prefer them.

* * *


If there’s a consolation for those concerned that intelligent assistants are going to take over the world, it’s this: They really aren’t all that intelligent. Not yet, anyway.

The 2013 movie Her, in which a mobile operating system gets to know its user so well that they become romantically involved, paints a vivid picture of what the world might look like if we had the technology to carry Siri, Alexa, and the like to their logical conclusion. The experts I talked to, who are building that technology today, almost all cited Her as a reference point—while pointing out that we’re not going to get there anytime soon.

Google recently rekindled hopes—and fears—of super-intelligent A.I. when its AlphaGo software defeated the world champion in a historic Go match. As momentous as the achievement was, designing an algorithm to win even the most complex board game is trivial compared with designing one that can understand and respond appropriately to anything a person might say. That’s why, even as artificial intelligence is learning to recommend songs that sound like they were hand-picked by your best friend or navigate city streets more safely than any human driver, A.I. still has to resort to parlor tricks—like posing as a 13-year-old struggling with a foreign language—to pass as human in an extended conversation. The world is simply too vast, language too ambiguous, the human brain too complex for any machine to model it, at least for the foreseeable future.

But if we won’t see a true full-service A.I. in our lifetime, we might yet witness the rise of a system that can approximate some of its capabilities—comprising not a single, humanlike Her, but a million tiny hims carrying out small, discrete tasks handily. In January, the Verge’s Casey Newton made a compelling argument that our technological future will be filled not with websites, apps, or even voice assistants, but with conversational messaging bots. Like voice assistants, these bots rely on natural language understanding to carry on conversations with us. But they will do so via the medium that has come to dominate online interpersonal interaction, especially among the young people who are the heaviest users of mobile devices: text messaging. For example, Newton points to “Lunch Bot,” a relatively simple agent that lived in the wildly popular workplace chat program Slack and existed for a single, highly specialized purpose: to recommend the best place for employees to order their lunch from on a given day. It soon grew into a venture-backed company called Howdy.


I have a bot in my own life that serves a similarly specialized yet important role. While researching this story, I ran across a company called X.ai whose mission is to build the ultimate virtual scheduling assistant. It’s called Amy Ingram, and if its initials don’t tip you off, you might interact with it several times before realizing it’s not a person. (Unlike some other intelligent assistant companies, X.ai gives you the option to choose a male name for your assistant instead: Mine is Andrew Ingram.) Though it’s backed by some impressive natural language tech, X.ai’s bot does not attempt to be a know-it-all or do-it-all; it doesn’t tell jokes, and you wouldn’t want to date him. It asks for access to just one thing—your calendar. And it communicates solely by email. Just cc it on any thread in which you’re trying to schedule a meeting or appointment, and it will automatically step in and take over the back-and-forth involved in nailing down a time and place. Once it has agreed on a time with whomever you’re meeting—or, perhaps, with his or her own assistant, whether human or virtual—it will put all the relevant details on your calendar. Have your A.I. cc my A.I.

For these bots, the key to success is not growing so intelligent that they can do everything. It’s staying specialized enough that they don’t have to.

“We’ve had this A.I. fantasy for almost 60 years now,” says Dennis Mortensen, X.ai’s founder and CEO. “At every turn we thought the only outcome would be some human-level entity where we could converse with it like you and I are [conversing] right now. That’s going to continue to be a fantasy. I can’t see it in my lifetime or even my kids’ lifetime.” What is possible, Mortensen says, is “extremely specialized, verticalized A.I.s that understand perhaps only one job, but do that job very well.”

Yet those simple bots, Mortensen believes, could one day add up to something more. “You get enough of these agents, and maybe one morning in 2045 you look around and that plethora—tens of thousands of little agents—once they start to talk to each other, it might not look so different from that A.I. fantasy we’ve had.”

That might feel a little less scary. But it still leaves problems of transparency, privacy, objectivity, and trust—questions that are not new to the world of personal technology and the Internet but are resurfacing in fresh and urgent forms. A world of conversational machines is one in which we treat software like humans, letting them deeper into our lives and confiding in them more than ever. It’s one in which the world’s largest corporations know more about us, hold greater influence over our choices, and make more decisions for us than ever before. And it all starts with a friendly “Hello.”

 

www.businessinsider.com/ai-assistants-are-taking-over-2016-4

The brightest minds in AI research – Machine Learning

In AI research, the brightest minds aren’t driven by the next product cycle or profit margin – they want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.

http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/

Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free


THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December.

But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.

How many dollars is “borderline crazy”? Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value.

OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.

Zaremba says those borderline crazy offers actually turned him off—despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much of an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.”

That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.

This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
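
The toolkit described here was released as OpenAI Gym, and its core abstraction is small enough to show. Below is a minimal episode loop against the classic Gym API of that era (the interface has changed in later releases); the random policy is only a placeholder for the part reinforcement learning actually supplies, namely choosing actions that maximize cumulative reward.

    import gym

    # A simple control task: keep a pole balanced on a moving cart.
    env = gym.make("CartPole-v0")
    observation = env.reset()

    total_reward = 0.0
    for _ in range(200):
        # A learned policy would map the observation to a good action;
        # here a random sample stands in for it.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:  # the pole fell over; the episode ends
            break

    print("episode reward:", total_reward)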

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

AI Everywhere
Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language—the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.

But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project.

OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on the Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.

Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at the high-profile Silicon Valley digital payments startup Stripe, another Y Combinator company. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.”

Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.

The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle
Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.

Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project. “An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”

By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”

The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”

As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.

The Limits of Openness
All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”

According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).

Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”

Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.

But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”

The Super-Intelligence Problem
When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. “We think it’s far more likely that many, many AIs will work to stop the occasional bad actors,” Altman says.

But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe. “If you have a button that could do bad things to the world,” Bostrom says, “you don’t want to give it to everyone.” If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it’s different from a Google or a Facebook.

He does say that the not-for-profit status of OpenAI could change things—though not necessarily. The real power of the project, he says, is that it can indeed provide a check for the likes of Google and Facebook. “It can reduce the probability that super-intelligence would be monopolized,” he says. “It can remove one possible reason why some entity or group would have radically better AI than everyone else.”

But as the philosopher explains in a new paper, the primary effect of an outfit like OpenAI—an outfit intent on freely sharing its work—is that it accelerates the progress of artificial intelligence, at least in the short term. And it may speed progress in the long term as well, provided that it, for altruistic reasons, “opts for a higher level of openness than would be commercially optimal.”

“It might still be plausible that a philanthropically motivated R&D funder would speed progress more by pursuing open science,” he says.

Like Xerox PARC
In early January, Brockman’s nine AI researchers met up at his apartment in San Francisco’s Mission District. The project was so new that they didn’t even have white boards. (Can you imagine?) They bought a few that day and got down to work.

Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called “unsupervised learning”—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.

“If you have really good unsupervised learning, machines would be able to learn from all this knowledge on the Internet—just like humans learn by looking around—or reading books,” Brockman says.
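
Brockman’s description can be made concrete with a toy example. The Python sketch below is purely illustrative, with invented method names and reward values rather than anything from OpenAI, but it captures the trial-and-error loop he describes: try competing methods over and over, track which produce the best results, and gradually favor the winner.

```python
import random

# Toy trial-and-error loop: try competing methods repeatedly, track
# which produce the best results, and gradually favor the winner.
# Everything here is invented for illustration; this is not OpenAI code.

ACTIONS = ["method_a", "method_b", "method_c"]  # hypothetical strategies

def reward(action: str) -> float:
    """Stand-in for the environment: a noisy score for an action."""
    base = {"method_a": 0.2, "method_b": 0.8, "method_c": 0.5}[action]
    return base + random.uniform(-0.1, 0.1)

totals = {a: 0.0 for a in ACTIONS}  # cumulative reward per action
counts = {a: 0 for a in ACTIONS}    # how often each action was tried
EPSILON = 0.1                       # fraction of steps spent exploring

for step in range(10_000):
    if step < len(ACTIONS):
        action = ACTIONS[step]              # try each action once first
    elif random.random() < EPSILON:
        action = random.choice(ACTIONS)     # explore
    else:                                   # exploit the best so far
        action = max(ACTIONS, key=lambda a: totals[a] / counts[a])
    counts[action] += 1
    totals[action] += reward(action)

print({a: round(totals[a] / counts[a], 2) for a in ACTIONS})
# The agent ends up picking "method_b", the highest-reward action.
```

The lab’s harder goal, unsupervised learning, would do away with hand-built supervision signals like the reward function above.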

He envisions OpenAI as the modern incarnation of Xerox PARC, the tech research lab that thrived in the 1970s. Just as PARC’s largely open and unfettered research gave rise to everything from the graphical user interface to the laser printer to object-oriented programming, Brockman and crew seek to delve even deeper into what we once considered science fiction. PARC was owned by, yes, Xerox, but it fed so many other companies, most notably Apple, because people like Steve Jobs were privy to its research. At OpenAI, Brockman wants to make everyone privy to its research.

This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers, including Ian Goodfellow, another former senior researcher on the Google Brain team. “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”

Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.

Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World

Humans to become ‘pets’ of AI robots, says Apple co-founder Wozniak

If you needed just one more reason to trash your iPhone, this is it. Apple co-founder Steve Wozniak recently told a crowd of techies in Austin, Texas, that the future of humanity will be predicated on artificially intelligent (AI) robots keeping people as “pets” – and Wozniak says he’s actually looking forward to this grim, robot-dominated future.

Building upon Apple’s “Siri” concept, which is AI in its infancy, Wozniak’s vision for 100 years from today is that humans will be literally owned by AI robots, much like how humans currently own dogs or cats. Robots will be in charge, in other words, and humans will be their slaves. And all of this will somehow be “really good for humans,” in Wozniak’s view.

Speaking at the Freescale Technology Forum 2015, Wozniak told eager listeners that putting robots in charge is a good thing because, by that point (100 years from now), they’ll have the capacity to become good stewards of nature, “and humans are part of nature,” he says. Comforted by this thought, Wozniak stated that he “got over [his] fear” of becoming a robot slave.

“Computers are going to take over from humans, no question,” Wozniak told the Australian Financial Review during a recent interview, affirming what many others in the tech industry, including Tesla CEO Elon Musk, believe will commence once AI technology really gets off the ground.

Since Wozniak treats his own dogs “really nice,” he says he isn’t concerned about AI robots taking over

Echoing the concerns of Musk, Microsoft founder Bill Gates, physicist Stephen Hawking, and others, Wozniak does acknowledge some of the risks involved with developing AI technologies. But these risks aren’t necessarily a deal breaker because AI robots, in his view, will probably treat humans kindly just like most people treat their own pets.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that,” he stated. “But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”

Well, phew! It’s all settled then. Because Wozniak happens to be kind to his own dog, it’s perfectly fine, in his view, to unleash an army of advanced robots that are “smarter” than humans and capable of destroying them because maybe they’ll choose instead to be kind to humans.

Wozniak: Let’s just unleash AI robots in order to find out how they’ll treat humans

It’s a lot like the infamously absurd words of House Minority Leader Nancy Pelosi, who stated prior to voting for Obamacare that “we have to pass the bill so that you can find out what is in it.” Concerning AI robots, Wozniak’s message is essentially the same: we just have to create them first in order to find out what they’ll do to humanity.

But if a recent “Google Brain” study is any indicator of how AI robots think, humans would be lucky to be treated as kindly as a family pet. An experimental AI robot “interviewed” by Google researchers revealed that such technology is both amoral and hostile to humans. When asked “what is immoral?” the robot responded, “the fact that you have a child,” expressing enmity against human reproduction.

You can read the full paper here: Neural Conversational Model

“Everyone on the planet has much to fear from the unregulated development of super-intelligent machines,” stated James Barrat, a documentary filmmaker and author of the book Our Final Invention: Artificial Intelligence and the End of the Human Era, during a recent interview with Smithsonian. “They will be machines that kill, unsupervised by humans.”

Source: http://www.naturalnews.com/050390_steve_wozniak_ai_robots_human_pets.html

Siri creator Adam Cheyer nets $22.5 million for an Artificial Intelligence that can learn on its own

Viv Labs, a startup launched by a team that helped build Siri, just pulled in $12.5 million to finance a digital assistant that is able to teach itself.

TechCrunch first reported that Viv Labs has closed a Series B round led by Iconiq Capital that pushes the company’s valuation to “north of nine figures.”

A spokesperson for the company confirmed the investment to Mashable but declined to comment further.

According to TechCrunch, the company was not in need of new capital but was interested in the possibility of working with Iconiq, which Forbes has described as an “exclusive members-only Silicon Valley billionaires club.” Together with a previous $10 million Series A round, the company has now raised a total of $22.5 million.

Unlike other digital assistants like Siri or Cortana, Viv can make up code on the fly, rather than relying on pre-programmed directives from developers.

Whereas Siri may be tripped up by questions or tasks it is not already programmed to understand, Viv can grasp natural language and link with a network of third-party information sources to answer a much wider range of queries or follow complex instructions.
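
None of the coverage explains how Viv’s engine actually works. As a rough, hypothetical illustration of what “linking with a network of third-party information sources” can mean, the Python sketch below chains invented stand-in services so that the output of one feeds the input of the next:

```python
# Hypothetical stand-in services, not Viv's actual design. The point is
# that a plan is composed at query time: each service's output feeds
# the next, so new combinations need no new hand-written code.

def geocode(place: str) -> tuple[float, float]:
    """Invented location service."""
    return {"san francisco": (37.77, -122.42)}[place.lower()]

def weather_at(coords: tuple[float, float]) -> str:
    """Invented weather service."""
    return "58F, fog, light rain"

def restaurants_near(coords: tuple[float, float], indoor_only: bool) -> list[str]:
    """Invented restaurant service."""
    return ["Tadich Grill", "House of Prime Rib"] if indoor_only else ["Beach Chalet"]

def answer(place: str) -> list[str]:
    coords = geocode(place)
    forecast = weather_at(coords)
    return restaurants_near(coords, indoor_only="rain" in forecast)

print(answer("San Francisco"))  # -> ['Tadich Grill', 'House of Prime Rib']
```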

Viv co-founders Dag Kittlaus, Adam Cheyer and Chris Brigham previously served on the team that created Siri, which started as an iPhone app before Apple acquired it in 2010 for a reported $200 million.

“I’m extremely proud of Siri and the impact it’s had on the world, but in many ways it could have been more,” Kittlaus told Wired last year.

The cofounders told Wired that they hope to one day integrate Viv into everyday objects, in effect making it a voice-activated user interface for the much-hyped “Internet of Things.”

The company plans to widely distribute its software by licensing it out to any number of companies, instead of selling it to one exclusive buyer. One potential business model mentioned in the Wired report is charging a fee when companies using the service complete transactions with customers.

Viv Labs is reportedly working towards launching a beta version of the software sometime this year.

Source: http://mashable.com/2015/02/20/viv-funding/

The company behind Viv, a powerful form of AI built by Siri’s creators which is able to learn from the world to improve upon its capabilities, has just closed on $12.5 million in Series B funding. Multiple sources close to the matter confirm the round, which was oversubscribed and values the company at north of nine figures.

The funding was led by Iconiq Capital, the so-called “Silicon Valley billionaires club” that operates a cross between a family office and venture capital firm.

While Iconiq may not be a household name, a Forbes investigation into its client list revealed people like Facebook’s Mark Zuckerberg, Dustin Moskovitz and Sheryl Sandberg, Twitter’s Jack Dorsey, LinkedIn’s Reid Hoffman and other big names were on its roster.

In addition to Iconiq, Li Ka-shing’s Horizons Ventures and Pritzker Group VC also participated along with several private individuals. This new round follows the company’s $10 million Series A from Horizons, bringing the total funding to date to $22.5 million.

Viv Labs declined to comment on the investment.

We understand that Viv Labs was not in need of new capital, but was rather attracted to the possibilities that working with Iconiq Capital provided. It was a round that was more “opportunistic” in nature, and was executed to accelerate the vision for the Viv product, which is meant to not only continue Siri’s original vision, but to actually surpass it in a number of areas.

Viv’s co-founders, Dag Kittlaus, Adam Cheyer and Chris Brigham, had previously envisioned Siri as an AI interface that would become the gateway to the entire Internet, parsing and understanding people’s queries which were spoken using natural language.

When Siri first launched, it supported 45 services, but ultimately the team wanted to expand it with the help of third parties to access the tens of thousands of APIs available on the Internet today.

That didn’t come to pass, because Apple ended up acquiring Siri instead for $200 million back in 2010. The AI revolution the team once sought was left unfinished, and Siri became a device-centric product – one that largely connects users to Apple’s services and other iOS features. Siri can only do what it’s been programmed to do, and when it doesn’t know an answer, it kicks you out to the web.

Of course, Apple should be credited for seeing the opportunity to bring an AI system like Siri to the masses, by packaging it up and marketing it so people could understand its value. Siri investor Gary Morgenthaler, a partner at Morgenthaler Ventures, who also invested personally in Viv Labs’ new round, agrees.

“Now 500 million people globally have access to Siri,” he says. “More than 200 million people use it monthly, and more than 100 million people use it every day. By my count, that’s the fastest uptake of any technology in history – faster than DVD, faster than smartphones – it’s just amazing,” Morgenthaler adds.

But Siri today is limited. While she’s able to perform simpler tasks, like checking your calendar or interacting with apps like OpenTable, she struggles to piece information together. She can’t answer questions that she hasn’t already been programmed to understand.

Viv is different. It can parse natural language and complex queries, linking different third-party sources of information together in order to answer the query at hand. And it does so quickly, and in a way that will make it an ideal user interface for the coming Internet of Things — that is, the networked, everyday objects that we’ll interact with using voice commands.

The Wired article about Viv and its creators described the system as one that will be “taught by the world, know more than it was taught and it will learn something new every day.”

Morgenthaler, who says he’s seen Viv in action, calls it “impressive.”

“It does what it claims to do,” he says. The part that still needs to be put into action, however, is the most crucial: Viv needs to be programmed by the world in order to really come to life.

Beyond Siri

While Viv is, to some extent, the next iteration of Siri in terms of this vision of connecting people to a world of knowledge that’s accessed via voice commands, in many ways it’s very different. It’s potentially much more powerful than other intelligent assistants accessed by voice, including not only Siri, but also Google Now, Microsoft’s Cortana or Amazon’s Alexa.

Unlike Siri, the system is not static. Viv will have memory.

“It will understand its users in the aggregate, with respect to their language, their behavior, and their intent,” explains Morgenthaler. But it will also understand you and your own behavior and preferences, he says. “It will adjust its weighting and probabilities so it gets things right more often. So it will learn from its experiences in that regard,” he says.
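
Morgenthaler doesn’t specify the mechanics. One simple way a system can “adjust its weighting and probabilities” from observed behavior is an exponential moving average over the user’s choices, sketched here in Python with invented preference names, as an assumption rather than a description of Viv:

```python
# Invented sketch of adjusting weights from a user's behavior: each
# observed choice pulls its weight up while the alternatives decay,
# so future defaults improve over time.

ALPHA = 0.2  # learning rate: how quickly new behavior outweighs old

seat_prefs = {"window": 0.34, "aisle": 0.33, "middle": 0.33}

def observe_choice(prefs: dict[str, float], choice: str) -> None:
    for option in prefs:
        target = 1.0 if option == choice else 0.0
        prefs[option] += ALPHA * (target - prefs[option])

for _ in range(5):  # the user books five window seats in a row
    observe_choice(seat_prefs, "window")

print(max(seat_prefs, key=seat_prefs.get))              # -> window
print({k: round(v, 2) for k, v in seat_prefs.items()})  # window ~0.78
```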

In Wired’s profile, Viv was described as being valuable to the service economy, ordering an Uber for you because you told the system “I’m drunk,” for example, or making all the arrangements for your Match.com date including the car, the reservations and even flowers.

Another option could be booking flights for business travelers, who speak multi-part queries like “I want a short flight to San Francisco with a return three days later via Dallas.” Viv would show you your options and you’d tell it to book the ticket – which it would proceed to do for you, already knowing things like your seat and meal preferences as well as your frequent flyer number.
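
Before any booking service could act on a request like that, the sentence has to be reduced to a structured intent. The Python sketch below shows the kind of structure involved; the regexes and slot names are invented for illustration, and a real system would use a learned parser rather than hand-written patterns.

```python
import re

# Illustrative slot-filling for the multi-part request quoted above.
# Regexes and slot names are invented, not Viv's representation.

QUERY = ("I want a short flight to San Francisco "
         "with a return three days later via Dallas")

intent = {
    "action": "book_flight",
    "duration_pref": "short" if "short" in QUERY else None,
    "destination": re.search(r"flight to ([A-Z][\w ]+?) with", QUERY).group(1),
    "return_offset_days": 3,  # "three days later"; number parsing omitted
    "return_via": re.search(r"via ([A-Z]\w+)", QUERY).group(1),
}
print(intent)
# {'action': 'book_flight', 'duration_pref': 'short',
#  'destination': 'San Francisco', 'return_offset_days': 3,
#  'return_via': 'Dallas'}
```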

Also unlike Siri today, Viv will be open to third-party developers. And it will be significantly easier for developers to add new functionality to Viv, as compared to Siri in the past. This openness will allow Viv to add new domains of knowledge to its “global brain” more quickly.
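
The article doesn’t describe the developer interface itself. Purely as a sketch of what “adding a new domain of knowledge” could look like, here is a minimal registry in Python that routes requests to third-party handlers:

```python
from typing import Callable

# Purely hypothetical sketch of an assistant that is open to third-party
# developers: handlers register a domain, and the assistant routes
# matching requests to them.

REGISTRY: dict[str, Callable[[str], str]] = {}

def capability(domain: str):
    """Decorator a third-party developer uses to add a new domain."""
    def register(handler: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[domain] = handler
        return handler
    return register

@capability("flowers")
def order_flowers(request: str) -> str:
    return "Bouquet ordered for delivery tonight."

@capability("rides")
def order_ride(request: str) -> str:
    return "A car is on its way."

def handle(domain: str, request: str) -> str:
    if domain not in REGISTRY:
        return "Sorry, no one has taught me that yet."
    return REGISTRY[domain](request)

print(handle("flowers", "send roses ahead of my date"))
```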

Having learned from their experiences with Apple, the Viv Labs team is not looking to sell its AI to a single company but instead is pursuing a business model where Viv will be made available to anyone with the goal of becoming a ubiquitous technology. In the future, if the team succeeds, a Viv icon may be found on Internet-connected devices, informing you of the device’s AI capabilities.

For that reason, the investment by Iconiq makes sense, given its clients run some of the largest Internet companies today.

We understand that Viv will launch a beta of its software sometime this year, which will be the first step towards having it “programmed by the world.”

Morgenthaler says there’s no question that the team can deliver – after all, they took Siri from the whiteboard to a “world-changing technology” in just 28 months, he notes. The questions instead for Viv Labs are around scalability and its ability to bring in developers. It needs to deliver on all these big promises to users, and generate sufficient interest from the wider developer community. It also needs to find a distribution path and partners who will help bring it to market — again, things that Iconiq can help with.

But Viv Labs is not alone in pursuing its goal. Google bought AI startup DeepMind for over half a billion dollars, has since gone on to acqui-hire more AI teams and, as Wired noted, has also hired AI legends Geoffrey Hinton and Ray Kurzweil.

Viv may not deliver on its full vision right out of the gate, but its core engine has been built at this point and it works. Plus, the timing for AI’s next step feels right.

“The cost of embedding a microphone and Internet access is plummeting,” says Morgenthaler. “If access to global intelligence and the ability to recognize you, recognize your speech, understand what you said, and provide you services in an authenticated way – if that is available, that’s really transformative.”

Source: http://techcrunch.com/2015/02/20/viv-built-by-siris-creators-scores-12-5-million-for-an-ai-technology-that-can-teach-itself/

Netherlands-Based Impala For iPhone Identifies Your Photos Using Artificial Intelligence

A new mobile application called Impala is picking up where Everpix left off, in terms of automatically categorizing your photo collections using computer vision technology. Once installed, the app works its way through your entire photo library on your iPhone, sorting photos into various categories like “outdoor,” “architecture,” “food,” “party life,” “friends,” “sunsets,” and more. But there’s a key difference between what Impala does and how Everpix worked. Impala’s mobile app has no server-side component – that is, your photos aren’t stored in the cloud. The software that handles the photo classification runs entirely on your device instead.

Impala is not a polished and professional app like Everpix was, of course, and photo classification is its only trick, while Everpix did much more. But its classification capabilities aren’t terrible. In tests, it ran through thousands of my iPhone photos over the course of some 20 minutes or so, placing photos into various albums, some more accurate than others. For example, it did well at gathering all the “food” and “beach” photos, and could easily tell the difference between “men,” “women,” and “children,” but it classified some beach scenes as “mountains,” and photos of my dog under “cats.”

But that latter one is by design, laughs Harro Stokman, Impala’s creator and CEO at Euvision Technologies, which develops the software. “We don’t like dogs,” he says.

The app, in its present form, is not meant to be a standalone business, but more of an example of the technological capabilities of the company’s software.

Euvision Technologies, Stokman explains, was spun out from the University of Amsterdam, where he earned his PhD in computer vision. The technology that makes Impala possible has been in development for over 10 years, he tells us. Today, many of Euvision’s eight-person team also work at the university, which owns a 15% stake in the company.

Meanwhile, Euvision has the rights to commercialize the technology, but doesn’t have outside funding. Instead, it licenses its software, which until today was only available as a server technology used by nearly a dozen clients ranging from the Netherlands police department (for tracking down child abuse photos), to a large social media website, which uses the technology for photo moderation on its network.

By putting Impala out there on the App Store, the hope is now to introduce the technology to even more potential licensing customers.

Stokman notes that the mobile version is not as accurate as the company’s core product, though. But it’s still a technological feat in and of itself. “We don’t have venture capital, so we couldn’t afford paying for the bandwidth and for the compute power,” he explains as to why there’s no cloud component. “We were forced to think of something that could run on the mobile phone.”

That’s especially interesting in light of Everpix’s recent shutdown of its photo storage and sharing platform this week. At the time, one of the reasons the company cited was the high cost involved with hosting user photos on Amazon Web Services. An unsustainable cost, as it turned out.

Impala ditches the idea of using the cloud, and instead worked to compress its software to be under 100 MB in size, down from the 600 MB it was when they first began working on the app. “The memory the software needs that stores the models that allow us to recognize babies from cars from friends and so on took the most work to compress down,” admits Stokman.
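
Stokman doesn’t say which techniques Euvision used to get from 600 MB down to under 100 MB. One standard approach, offered here only as an assumption, is quantizing 32-bit float weights to 8-bit integers plus a scale and an offset, which by itself cuts weight storage by a factor of four:

```python
import numpy as np

# Generic sketch (not Euvision's actual method): quantize 32-bit float
# weights to 8-bit integers plus a scale and offset, cutting weight
# storage to a quarter with a small loss of precision.

weights = np.random.randn(1_000_000).astype(np.float32)  # ~4 MB of weights

lo, hi = float(weights.min()), float(weights.max())
scale = (hi - lo) / 255.0
quantized = np.round((weights - lo) / scale).astype(np.uint8)  # ~1 MB

# At run time the model dequantizes on the fly.
restored = quantized.astype(np.float32) * scale + lo

print(f"{weights.nbytes:,} bytes -> {quantized.nbytes:,} bytes")
print(f"max round-trip error: {np.abs(weights - restored).max():.4f}")
```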

Like other image classification systems, Impala uses artificial intelligence and computer vision to “see” what’s in the photo. The system is trained using thousands of images from clients and elsewhere on the web, including both images that match the category being taught (e.g. “sunsets” or “indoor”) and images that don’t.
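
In conventional terms, training a category on examples that match it and examples that don’t amounts to a binary classifier per category. The sketch below uses scikit-learn and random stand-in feature vectors; both are assumptions, since the article names neither Euvision’s tooling nor its feature representation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of per-category training: positive examples of the category
# being taught ("sunset") and negative examples of everything else.
# The random vectors stand in for real image features.

rng = np.random.default_rng(0)
sunsets = rng.normal(loc=1.0, size=(200, 64))   # positive examples
others = rng.normal(loc=-1.0, size=(200, 64))   # negative examples

X = np.vstack([sunsets, others])
y = np.array([1] * 200 + [0] * 200)             # 1 = "sunset"

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_photo = rng.normal(loc=1.0, size=(1, 64))   # unseen sunset-like photo
prob = clf.predict_proba(new_photo)[0, 1]
print(f"P(sunset) = {prob:.2f}")
# Photos with middling scores could land in the app's "not sure"
# section rather than in the album itself.
```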

To make the system run on mobile, the company had to create a stripped-down version of its classification engine. When it runs on a server, for comparison’s sake, it takes four times as much compute power. “The more compute power, the more memory, the better the results,” Stokman says.

In other words, the resulting albums in Impala may be hit or miss. And the app is fairly basic, too. After it runs through your photos, you can tap a button to save the images to your iPhone’s photo gallery. Each album also has a section where photos it wasn’t sure of are listed, but there’s not currently a way to manually approve or re-organize these items by moving them elsewhere.

As for the dogs that get listed as cats? It’s nothing personal, it’s just that the Impala engineers are more cat people. “We don’t like dogs, so we didn’t put the category in there,” jokes Stokman. “You can take pictures of dogs, and it won’t recognize them as dogs. It will be cats,” he says.

If the app takes off, that’s something that may change with future improvements over time. For now, the company is working on its next creation: a camera app that can instantly identify 1,000 objects – like sunglasses or keyboards, for example – as you shoot. They’ll be submitting it in a contest at an upcoming conference, and may consider integrating that technology into Impala at some later date.

Impala for iOS is a free download here.

Amsterdam-based Euvision Technologies, co-founded by Prof. Arnold Smeulders, Ph.D., is bootstrapped with investment from Stokman and Chief Commercial Officer Jan Willem F. Klerkx.

Source: http://techcrunch.com/2013/11/08/impala-for-iphone-identifies-your-photos-using-artificial-intelligence-organizes-them-for-you/