Category Archive: Innovation

Machine Learning and Artificial Intelligence: Soon We Won’t Program Computers. We’ll Train Them Like Dogs


BEFORE THE INVENTION of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.

Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.

The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace.

This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded.

Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.

In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)

But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.

Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
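The contrast is easier to see in miniature. Below is a deliberately tiny sketch, in plain Python, of the two styles: a hand-written rule versus a perceptron that learns its own decision boundary from labeled examples. The "cat features" and the data are invented for illustration; real systems learn from millions of images, not two numbers.

```python
import random

# Hand-written rule ("traditional programming"): the engineer encodes the
# decision logic explicitly, step by step.
def rule_says_cat(whisker_score, ear_score):
    return whisker_score > 0.5 and ear_score > 0.5

# Trained model ("machine learning"): a toy perceptron that is never told a
# rule -- it nudges its weights whenever it gets a labeled example wrong.
def train_perceptron(examples, epochs=50, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w0 * x1 + w1 * x2 + b > 0 else 0
            err = label - pred  # the "coaching" signal: -1, 0, or +1
            w0 += lr * err * x1
            w1 += lr * err * x2
            b += lr * err
    return w0, w1, b

# Synthetic "photos": two made-up feature scores per example, labeled 1 (cat)
# when the scores are jointly high. A margin is left around the boundary so
# this toy model converges quickly.
random.seed(0)
examples = []
while len(examples) < 200:
    x1, x2 = random.random(), random.random()
    if abs(x1 + x2 - 1.0) < 0.1:
        continue
    examples.append(((x1, x2), 1 if x1 + x2 > 1.0 else 0))

w0, w1, b = train_perceptron(examples)

def model_says_cat(x1, x2):
    return w0 * x1 + w1 * x2 + b > 0
```

Note that nobody ever tells the perceptron what the boundary is; the weights `w0`, `w1`, and `b` end up wherever the examples push them, which is exactly why "just keep coaching it" replaces "rewrite the code."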

This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”
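Those "multilayered connections" can themselves be sketched in a few lines. The fragment below is a minimal forward pass through a two-layer network in plain Python. The weights here are arbitrary placeholder numbers; in a real deep network there are millions of them, all discovered by training rather than written by hand, which is a large part of why the resulting behavior is hard to inspect.

```python
import math

def sigmoid(z):
    # Squashes any number into (0, 1), loosely mimicking a neuron's firing rate.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron takes a weighted sum of every input from the previous
    # layer, adds a bias, and applies the nonlinearity.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A fixed toy network: 3 inputs -> 4 hidden neurons -> 1 output neuron.
# These weights are invented placeholders, not learned values.
HIDDEN_W = [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2], [-0.4, 0.9, 0.5], [0.1, 0.1, 0.8]]
HIDDEN_B = [0.0, 0.1, -0.1, 0.2]
OUT_W = [[0.6, -0.3, 0.8, 0.2]]
OUT_B = [-0.1]

def forward(x):
    hidden = layer(x, HIDDEN_W, HIDDEN_B)
    return layer(hidden, OUT_W, OUT_B)[0]

score = forward([0.5, 0.2, 0.9])  # a confidence-like value in (0, 1)
```

Every number flows through every layer, so no single weight "means" anything on its own; the answer lives in the whole tangle at once.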

But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.

If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.

ANDY RUBIN IS an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”

Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.

http://www.wired.com/2016/05/the-end-of-code

Daydream: Google’s Ambitious New Bid To Bring VR To The Masses

Google is releasing a new ecosystem to put virtual reality into the hands of everyone.

Today at Google I/O, the company revealed a new virtual reality standard that may finally bring VR to the masses. It’s called Daydream, and it’s an open-source headset and motion controller that’s compatible with souped-up Android phones. Daydream will arrive this September for an unknown, but likely not that expensive, price.

If Google pulls it off, Daydream will be both cheaper and easier to use than its fancy VR counterparts, because Google has reimagined Android software to work in VR—and every Android phone of the future could be built VR-ready at its core.

Google Finds A Better Metaphor For VR

When you think about VR, what’s the metaphor you use? The Matrix? The Lawnmower Man? It’s all dark imagery of headsets, body suits, black metal and plastic, like someone designed a commando knife for your face, then tethered it to a 1980s PC with more wires than your surround-sound home theater system. That imagery became the HTC Vive and the Oculus Rift.

That’s one end of the spectrum. On the other, you have Google Cardboard. It’s literally the prize from a cereal box, stuck to your face. It’s cheap, fun, and inherently kind of crap. Nobody wants to unwind from a stressful day by climbing into a cardboard cube.

With Daydream, Google has landed on a happy medium. It’s a headset that’s built with soft materials like fabric. It takes only a moment to pop your phone inside, then it clasps shut like the lever on a self-corking bottle, and you’re in VR. By swinging what looks like a TV remote, you can do things like grab objects, flip pancakes, or go fishing in the virtual world.

In Daydream, you can walk the sidewalks of Paris with Streetview or watch YouTube clips on an IMAX screen, and it probably won’t be that expensive (we’d ballpark $100 or less, given the comparative price of the Samsung Gear VR).

How? Because Google created the Daydream headset and controller as a reference spec that’s open for any manufacturer to make—and potentially even compete with each other to drive down the price. And Google being as influential as it is, the company convinced Android phone manufacturers to build their phones differently. So your next Android phone may double as one of the best VR headsets in the world—one that you may actually want to use.

Google Is Upgrading Android, And Convincing Phone Manufacturers To Standardize For VR

But why will Daydream be any better than the mobile VR offerings of Cardboard, or Samsung’s (admittedly superb) Gear VR? In short, it’s the software, and it’s the hardware.

On the software side, Google has built Android N (that’s the next version of Android coming out this September) to accommodate VR. Developers can code their software to take full advantage of the phone’s processing cores to push the sorts of specs you need for VR, like high frame rates. For users, Android will be designed to feel welcoming in VR. Upon putting on the headset, Android users will enter a new app called Daydream Home that looks like a virtual 3-D environment crossed with an app manager. Android N will also support several VR-enabled core apps on the phone. That means you can buy items inside a VR version of Google Play, or use Streetview and YouTube in VR. Right now, if you use a VR headset on an Android phone, you’re confined to a select few apps. Android N seems to invite Daydream VR as an alternate way of using the phone itself.

On the hardware side, Google has convinced phone manufacturers to build what are being called “Daydream Ready” phones. The list of partners is long and impressive, including Samsung, HTC, Huawei, Xiaomi, and LG. Google doesn’t go into a lot of detail on what constitutes a Daydream Ready phone, but they appear to be certified to share basic performance specs (processors, GPUs, and RAM), a few extra sensors, and similarly designed screens that allow the phone to slip into a special set of lenses to transport you into high-performance VR. (Samsung’s Gear VR works so well because it has extra sensors inside. Daydream moves all of that technology into the phone itself, so many VR experiences could perform just as well with a super simple, lens-only, Cardboard-style headset.)

How good will the Daydream experience be? It’s hard to know without actually trying it, and Google isn’t offering demos at I/O. Daydream still won’t have the positional tracking that the highest-end headsets, the Oculus Rift and HTC Vive, do. That means in Daydream you can still look up, down, and around in a full 360-degree circle, but when you poke your head forward or lean back, your perspective doesn’t change. It’s a compromise of immersion, but from my experience, VR can still leave you in awe without positional tracking involved.
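The difference between orientation-only tracking and full positional tracking comes down to what the system does with the headset’s position. A rough sketch, simplified to a single yaw axis (real headsets track full 3-D orientation, and these function names are mine, not any Google API):

```python
import math

def yaw_rotate(point, yaw):
    # Rotate a 3-D point around the vertical (y) axis.
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z, y, -s * x + c * z)

def view_3dof(world_point, head_yaw, head_position):
    # Orientation-only tracking (Daydream-style): the head's rotation
    # steers the view, but head_position is deliberately ignored --
    # leaning forward or back does not change what you see.
    return yaw_rotate(world_point, -head_yaw)

def view_6dof(world_point, head_yaw, head_position):
    # Positional tracking (Rift/Vive-style): subtract the head's position
    # first, so physically moving through space moves the viewpoint too.
    px, py, pz = head_position
    x, y, z = world_point
    return yaw_rotate((x - px, y - py, z - pz), -head_yaw)
```

In the first function the head’s position is a dead parameter, which is exactly the compromise described above: turn your head and the world turns with you, lean forward and nothing happens.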

But The Bigger Deal May Be Google Standardizing The Control Of VR

Alongside their Daydream headset, Google also introduced what looks like a littler version of Nintendo’s Wiimote controller. They’re not sharing much in terms of technical specs, but it appears to be a motion-sensitive remote that enables all those gestures you might remember from the Nintendo Wii. (Tennis, anyone?) It also features a touchpad on top, so you can flick or swipe.

The importance of this little remote to the future of VR can’t be overstated. It’s Google’s attempt to bring control parity to the mobile VR industry, all via an approachable bit of industrial design that shouldn’t freak people out the way bulkier VR rigs can. And Google seems to want their headset and remote to feel comfortable and intuitive above all else.

Thus far, mobile VR has relied entirely on an aim-your-head, tap-a-button-on-your-temple control scheme. That’s a literal pain in the neck after about 20 minutes. There has been limited support for gamepads and other controllers, but the problem is that, with countless hardware manufacturers developing their own weird configurations, there’s no baseline for all of the app developers to design to. So even an app that technically supports a gamepad might not play very well, because the tiniest bits of finesse with an analog stick are lost to a developer coding for 10,000 different possible controllers. Meanwhile, Google has created one remote to rule them all.

Daydream Will Be The Way Most People Experience Decent VR, Soon

Google could be successful because the company is really thinking through the whole ecosystem of VR—hardware, apps, the UX, and even the phones that could power a mobile VR revolution. However, Google still faces one big hurdle: Google. The company’s broad strategy makes a lot of sense, but that doesn’t change the fact that it has an unsteady track record when stepping into the world of hardware. There’s no guarantee that any piece of hardware designed by Google will be a hit. Google’s own Nexus smartphones (technically made by third parties) are not the most popular Android phones, and Google abandoned projects like the Nexus Q and Google TV for lack of interest. If Daydream flops out of the gate, what then?

But I can’t help but consider the brass tacks: Five million Google Cardboard headsets have sold to date. One million people are using Samsung’s Gear VR. These numbers are respectable in a world that’s only trending more in the direction of mobile. Meanwhile, there are 1.5 billion active Android phones in the world. They can’t, and won’t, support Daydream today. The next 1.5 billion, however? If the standards are in place to make the experience both decent and affordable, why not? With Daydream, Google just gave us a VR standard that could unite the mobile VR world. And increasingly, it’s looking like the mobile VR world may be the only one that matters.


https://www.fastcodesign.com/3059928/daydream-googles-ambitious-new-bid-to-bring-vr-to-the-masses

12 Ways AI Will Disrupt Your C-Suite



Artificial Intelligence (AI) is gaining momentum across industries with the help of companies such as IBM, Google, and Microsoft. McKinsey & Company estimates that as much as 45% of the tasks currently performed by people can be automated using current technologies — not only low-level rote tasks, but high-level knowledge work as well.

“Our point of view is that there is no function, no industry, almost no role that won’t potentially be affected by this set of technologies — not just every occupation, but every activity within each occupation,” said Michael Chui, a partner with McKinsey Global Institute, in an interview. “It’s not just automating the labor that’s being done, but the work people do will have to change as well. Understanding how to take advantage of these technologies is going to be critically important.”

Even if your company isn’t actively experimenting with it, AI is finding its way in via online transactions and modern cyber-security systems, among other examples. As AI technologies and their use-cases start to take hold across industries, it’s time for the C-suite to pay attention.

If you haven’t made an effort to understand how AI will affect your company, now is the time to start.

“The attitude of C-suite executives should be to add AI as a top strategic priority,” said George Zarkadakis, digital lead at global professional services firm Willis Towers Watson and author of In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence, in an interview. “This time, technology will move faster than ever, and the laggards will pay a hefty price.”

Of course, the impact of AI is not limited to technological change and innovation. It also involves cultural evolution and, in some cases, revolution.

„Today’s leaders have time, as well as a responsibility, to understand what’s ahead of them before acting,“ said Deborah Westphal, CEO of strategic consulting and advisory firm Toffler Associates, in an interview. „It’s important to ask the hard questions, and then, using those insights, determine the best action for an organization.“

In short, AI is going to affect a lot of things in the near future, some of which have not yet been anticipated.


Organizational Intelligence Explodes

Organizations are using AI to solve problems at scale. Michele Goetz, a principal analyst at Forrester Research, estimates that most organizations only take advantage of 10% to 30% of their data, with most of that still being structured, transactional information.

“There’s a difference in what AI technology is going to bring to the organization compared to what other technologies have brought,” said Goetz, in an interview. “[The C-suite executives] will have better visibility into market opportunities and [become aware of] threats faster. Because they can see their environment more holistically and clearly, they’ll understand partners and customers better. It’s [also] going to change the way employees work.”


First-Mover Advantage

The seeds of what some are calling The Exponential Age were planted long ago, manifesting themselves as exponential increases in processing power, storage capacity, bandwidth utilization, and — as a result of all of that — digital information. The same rule applies to machine learning.

“True AI learns at an exponential rate, evolves and sometimes even rewrites better versions of itself,” said Walter O’Brien, founder and CEO of Scorpion Computer Services, in an interview. “Because of this factor, the first company to market can also be the first to gather the most training data material — for example, Google’s voice recognition on cell phones. The lessons learned can be encoded as heuristics or subtle guidelines which become the IP of the company — for example, the definition of Google’s relevance scores. This all creates a barrier to competition.”

Imagine cramming 250 years of human thinking into 90 minutes. Scorpion Computer Services’ AI platform does that.


Employees May Lead The Charge

AI is creeping into organizations in various ways, online and embedded in enterprise applications. The trend is accelerating, necessitating the C-suite’s attention, since it will at some point noticeably affect corporate culture and business strategy.

“The tipping point for the acceptance and widespread application of AI will not come from the C-suite, but from employees seeing the benefits of AI in their daily lives through applications like intelligent personal assistants and smart devices,” said Robert DeMaine, lead technology sector analyst at global advisory service company Ernst & Young (EY), in an interview. “Like the [bring your own device] trend, employees will begin to use their own ‘smart’ personal productivity applications in the office, challenging the organization to reassess its policies. AI will change corporate culture from the bottom up, not the top down.”


Organizational Structures Will Shift

Hierarchical organizational structures adversely affect business agility and the ability to drive value from data. Similarly, the lingering barriers between departments and business units limit a company’s ability to derive additional types of value from data because data remains trapped in silos.

“‘Projectized’ organizations, which operate in a matrix environment, are better positioned to take full advantage of AI systems [than] vertical organizations are,” said Armen Kherlopian, VP of analytics and research at business process transformation company Genpact, in an interview. “This is because these so-called projectized organizations can more readily gain access to resources and key business channels across the enterprise. Additionally, the levers associated with [business] value do not fit neatly into vertical groups.”

Genpact estimates nearly $400 billion of digital investments were wasted globally in 2015 because of a failure to align expected results throughout organizations.


AI Requires Context

AI systems need a lot of input to produce the appropriate output. Since each company, its culture, and its objectives are unique, AI systems need to be trained on those details in order to assist employees effectively, and to serve the needs of the organization accurately. Unlike traditional analytics systems, which can be built without regard to some of the softer organizational issues, AI requires organizations to be aware of the information they’re bringing in and why they’re bringing it in.

“There is a clear trend towards machines becoming more intelligent so that humans can work more intelligently with them,” said George Zarkadakis. “Although machines will increasingly gain more autonomy, they will do so within the human space and within human norms and ethics.”


Organizations Have To Adapt

AI automates some tasks and assists with others, both displacing and complementing the work employees do. The C-suite needs to think about how the shifting division of labor can influence the way a company is managed and how it’s organized.

“AI is impacting many aspects of the business, from workflow management to advertising strategy. It can enable executives to make better, faster, and more accurate business decisions to streamline operations, allocate resources, understand market trends, and connect with customers,” said Robert DeMaine, lead technology sector analyst at EY. “As a result, executives will need to be prepared to address a number of business issues, including reassessing internal operations, a changing workforce, sales and marketing strategies, and shifting investment priorities.”


It’s Not All About Technology

AI is gaining momentum as entrepreneurs, industry behemoths, and companies in-between bring AI products, tools, APIs, and services to market. However, as always, the successful application of technology isn’t simply about technology. It’s about technology, people, and processes.

“A company will be distinguished by how well it works using AI, and increasingly human-digital convergence, rather than by which specific AI technologies it chooses to deploy,” said Deborah Westphal, CEO of strategic consulting and advisory firm Toffler Associates. “If a company only addresses the technological elements, without addressing the organizational people and process aspects, it may see a short-term gain, but will suffer in the longer term and likely be [sur]passed by those companies that addressed the internal questions first.”


Employee Empowerment Is Necessary

Companies have worked toward democratizing the use of data analytics, enabling managers and employees to make better decisions faster. As the velocity of business continues to accelerate at scale with the help of AI, even more employee empowerment will be necessary.

„AI and greater human-digital convergence magnify the strengths and weaknesses of an existing corporate culture, particularly with respect to how much autonomy is afforded to an organization’s people,“ said Deborah Westphal of Toffler Associates. „Given a faster rate of change and near real-time environment in which to make decisions, an organization’s people who don’t have the necessary autonomy will find that its processes, no matter how good, will break down quickly and its ability to serve its customers [will be] compromised.“

Learn By Doing

Companies successfully using AI make a point of investing in people and talent. They also actively encourage innovation and experimentation so they can learn quickly from mistakes and capitalize on opportunities, hopefully faster than their competitors.

„Hire talent that knows how to do this. Start experimenting with it and learn how to use it,“ said Michael Chui, a partner with McKinsey Global Institute. „I don’t think this is something you plan for five years and then get started. It’s something you learn by doing. When you see something working, the ability to scale is important.“

Expect The Unexpected

AI should not be viewed as simply another technology acquisition, because getting it up and running successfully requires a different approach. Because the purpose of AI is to provide a superhuman analytic or problem-solving capacity, its training cannot be limited to executing mindlessly on a task.

„You can’t assume that how you train these systems is going to produce the results in the context you want them to be produced,“ said Michele Goetz, a Forrester principal analyst. „There has to be an emotional element [because] if you’re introducing AI in your call center, you don’t want to offend your customers.“

Because AI learns from itself, as well as from its human trainers, unexpected circumstances can arise which may be positive or negative.

Pay Attention To Possibilities

Data-driven companies, including IBM, Google, Microsoft, Amazon, and Netflix, are constantly pushing the envelope of what’s possible in order to accelerate innovation, differentiate themselves, and, in some cases, cultivate communities that can extend the breadth and depth of AI techniques and use-cases. It’s wise for C-suite executives to understand the kind of value AI can provide, and how that value might help the company achieve its strategic objectives.

„Machine learning techniques are what make a company like Amazon truly successful. Being able to learn from historical data in order to recommend to a given shopper what [she] may buy next is a key differentiator. Yet, the real ‚Deep Learning‘ techniques are still just emerging,“ said Mike Matchett, senior analyst and consultant at storage analysis and consulting firm Taneja Group, in an interview. „Google will not just win ‚Go‘ championships, but will drive cars with [AI], optimize their data center with [AI], and in my opinion, will try to own the global optimization clearing house for the Internet of Things.“

Change Is At Hand

The composition of the C-suite is changing to take better advantage of data. Data-savvy executives are replacing their traditional counterparts, new roles are being created, and leaders generally are finding themselves under pressure to understand the value and impact of data, analytics, and machine learning.

„As the C-suite becomes increasingly filled with analytical minds and more data scientists are hired, a cultural shift naturally takes place. Some of the new, fast-growing executive roles [include] chief data scientist, chief marketing technology officer, [and] chief digital officer. All are aligned with the growing demand and anticipation for AI,“ said David O’Flanagan, CEO and cofounder of cloud platform provider Boxever.

At many levels, non-traditional candidates are displacing traditional roles. For example, the Society of Actuarial Professionals is actively promoting the fact that although most actuaries work in the insurance industry, there are non-traditional employment opportunities, including data analytics and marketing. O’Flanagan expects more members of the workforce to have backgrounds in fields of study such as econometrics.

http://www.informationweek.com/big-data/12-ways-ai-will-disrupt-your-c–suite/d/d-id/1325557

Magna may be helping Apple to build the iCAR /iKARR

If Apple wants to bring a car to production, it’ll likely need a good bit of help to get it there. Right now, it’s looking like some of that help will likely come from the Canada-based automotive company Magna International.

Though there aren’t yet concrete facts regarding when, how, and even if an Apple car will exist, a ton of rumors have already surfaced, including one highly probable tip: Apple probably won’t be building this supposed car itself.

That’s where Magna would come in.

Magna is a massive company.

Magna first began business in the early 1950s. By the end of the decade, it was contracted by General Motors to make small interior parts.

By the early 1960s, Magna had two fully operational plants running and its shares were publicly traded on the Toronto Stock Exchange.

Now, Magna is an original equipment manufacturer of auto parts for a ton of different car brands, and it also does full assembly for a handful of cars.

Though it has thrown around the idea of operating its own automotive brand, Magna’s involvement in the automotive world is primarily centered on parts supply.

Magna Steyr, Magna’s „contract manufacturing“ arm, currently assembles the Mercedes-Benz G-Class and the Mini Countryman.

Magna Steyr has plants across Europe and Asia.

Similar to what Foxconn is to Apple currently, Magna would likely produce parts and assemble vehicles for Apple, if an Apple car was to hit production.

The rumor is that the Apple car will be built at one of Magna’s Austrian facilities and that there’s currently research being done at a secret facility in Berlin.

[Source: Clean Technica]

For now, though, it’s still not certain the company is actually working with Apple.

Apple and Magna did not immediately respond to a request for comment.

Adam Cheyer, you just made Siri 10 times better – VIV Technologies

In an interview with Adam Cheyer from late 2013, TheIdea Innovation Agency asked him what was coming next; the answer, it turns out, was Viv. https://dieidee.eu/2013/10/30/siri-and-google-now-what-would-have-happened-to-siri-if-steve-jobs-was-still-alive/

See for yourself how Viv is the future of chatbots and personal digital assistants: at the TechCrunch Disrupt conference, Siri CEO Dag Kittlaus presented „Viv“ Technologies.

How does it work?
Its patented technology is called „dynamic program generation“: the bot writes its program in real time, in the background, and it also integrates interfaces to other data sources and bots.
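To make the idea of dynamic program generation concrete, here is a minimal, hypothetical sketch: instead of hand-coding a handler for every request, the assistant composes a small program at runtime from a registry of capabilities. All names, the capability registry, and the backward-chaining planner are invented for illustration; this is not Viv’s actual, patented implementation.

```python
# Each capability declares the inputs it needs and the output it produces,
# so a planner can chain them into a program on the fly.
CAPABILITIES = {
    # name: (inputs it needs, output it produces, implementation)
    "geocode":  (["city"], "coords",
                 lambda facts: {"coords": f"latlon({facts['city']})"}),
    "forecast": (["coords"], "weather",
                 lambda facts: {"weather": f"sunny at {facts['coords']}"}),
}

def plan(goal, have):
    """Backward-chain from the requested output to an ordered list of steps."""
    for name, (needs, produces, _) in CAPABILITIES.items():
        if produces == goal:
            steps = []
            for need in needs:
                if need not in have:
                    steps += plan(need, have)
            return steps + [name]
    raise LookupError(f"no capability produces {goal!r}")

def run(goal, facts):
    """Generate the program for `goal`, then execute it step by step."""
    for step in plan(goal, dict(facts)):
        facts.update(CAPABILITIES[step][2](facts))
    return facts[goal]

# "What's the weather in Vienna?" becomes the program [geocode, forecast]:
print(run("weather", {"city": "Vienna"}))  # -> sunny at latlon(Vienna)
```

The point of the pattern is that adding a new data source or bot means registering one more capability, not rewriting the dialogue logic.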

The full video goes here:

Artificial intelligence assistants are taking over

It was a weeknight, after dinner, and the baby was in bed. My wife and I were alone—we thought—discussing the sorts of things you might discuss with your spouse and no one else. (Specifically, we were critiquing a friend’s taste in romantic partners.) I was midsentence when, without warning, another woman’s voice piped in from the next room. We froze.

“I HELD THE DOOR OPEN FOR A CLOWN THE OTHER DAY,” the woman said in a loud, slow monotone. It took us a moment to realize that her voice was emanating from the black speaker on the kitchen table. We stared slack-jawed as she—it—continued: “I THOUGHT IT WAS A NICE JESTER.”

“What. The hell. Was that,” I said after a moment of stunned silence. Alexa, the voice assistant whose digital spirit animates the Amazon Echo, did not reply. She—it—responds only when called by name. Or so we had believed.

We pieced together what must have transpired. Somehow, Alexa’s speech recognition software had mistakenly picked the word Alexa out of something we said, then chosen a phrase like “tell me a joke” as its best approximation of whatever words immediately followed. Through some confluence of human programming and algorithmic randomization, it chose a lame jester/gesture pun as its response.
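The failure mode the authors reconstruct — a spurious wake-word detection followed by nearest-match intent selection — can be sketched in a few lines. The command list, similarity measure, and threshold below are invented for illustration; real systems use acoustic and language models, not string similarity.

```python
# After a (false) wake word fires, map whatever was heard to the closest
# known command, even when the match is poor.
from difflib import SequenceMatcher

KNOWN_COMMANDS = ["tell me a joke", "what's the weather",
                  "set a timer", "play some music"]

def best_intent(heard, threshold=0.3):
    """Return the known command most similar to the heard phrase."""
    score, command = max(
        (SequenceMatcher(None, heard, c).ratio(), c) for c in KNOWN_COMMANDS
    )
    return command if score >= threshold else None

# Garbled dinner-table speech still lands on *some* command:
print(best_intent("tell her that bloke"))   # -> tell me a joke
print(best_intent("xyzzy", threshold=0.9))  # -> None
```

With a low threshold, the assistant almost always "understands" something, which is exactly how an overheard fragment of conversation can turn into a jester pun.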

In retrospect, the disruption was more humorous than sinister. But it was also a slightly unsettling reminder that Amazon’s hit device works by listening to everything you say, all the time. And that, for all Alexa’s human trappings—the name, the voice, the conversational interface—it’s no more sentient than any other app or website. It’s just code, built by some software engineers in Seattle with a cheesy sense of humor.

But the Echo’s inadvertent intrusion into an intimate conversation is also a harbinger of a more fundamental shift in the relationship between human and machine. Alexa—and Siri and Cortana and all of the other virtual assistants that now populate our computers, phones, and living rooms—are just beginning to insinuate themselves, sometimes stealthily, sometimes overtly, and sometimes a tad creepily, into the rhythms of our daily lives. As they grow smarter and more capable, they will routinely surprise us by making our lives easier, and we’ll steadily become more reliant on them.

Even as many of us continue to treat these bots as toys and novelties, they are on their way to becoming our primary gateways to all sorts of goods, services, and information, both public and personal. When that happens, the Echo won’t just be a cylinder in your kitchen that sometimes tells bad jokes. Alexa and virtual agents like it will be the prisms through which we interact with the online world.

It’s a job to which they will necessarily bring a set of biases and priorities, some subtler than others. Some of those biases and priorities will reflect our own. Others, almost certainly, will not. Those vested interests might help to explain why they seem so eager to become our friends.

* * *

In the beginning, computers spoke only computer language, and a human seeking to interact with one was compelled to do the same. First came punch cards, then typed commands such as run, print, and dir.

The 1980s brought the mouse click and the graphical user interface to the masses; the 2000s, touch screens; the 2010s, gesture control and voice. It has all been leading, gradually and imperceptibly, to a world in which we no longer have to speak computer language, because computers will speak human language—not perfectly, but well enough to get by.

We aren’t there yet. But we’re closer than most people realize. And the implications—many of them exciting, some of them ominous—will be tremendous.

Like card catalogs and AOL-style portals before it, Web search will begin to fade from prominence, and with it the dominance of browsers and search engines. Mobile apps as we know them—icons on a home screen that you tap to open—will start to do the same. In their place will rise an array of virtual assistants, bots, and software agents that act more and more like people: not only answering our queries, but acting as our proxies, accomplishing tasks for us, and asking questions of us in return.

This is already beginning to happen—and it isn’t just Siri or Alexa. As of April, all five of the world’s dominant technology companies are vying to be the Google of the conversation age. Whoever wins has a chance to get to know us more intimately than any company or machine has before—and to exert even more influence over our choices, purchases, and reading habits than they already do.

So say goodbye to Web browsers and mobile home screens as our default portals to the Internet. And say hello to the new wave of intelligent assistants, virtual agents, and software bots that are rising to take their place.

No, really, say “hello” to them. Apple’s Siri, Google’s mobile search app, Amazon’s Alexa, Microsoft’s Cortana, and Facebook’s M, to name just five of the most notable, are diverse in their approaches, capabilities, and underlying technologies. But, with one exception, they’ve all been programmed to respond to basic salutations in one way or another, and it’s a good way to start to get a sense of their respective mannerisms. You might even be tempted to say they have different personalities.

Siri’s response to “hello” varies, but it’s typically chatty and familiar:

(Screenshot: Slate)

Alexa is all business:

(Screenshot: Slate)

Google is a bit of an idiot savant: It responds by pulling up a YouTube video of the song “Hello” by Adele, along with all the lyrics.

(Screenshot: Slate)

Cortana isn’t interested in saying anything until you’ve handed her the keys to your life:

(Screenshot: Slate)

Once those formalities are out of the way, she’s all solicitude:

(Screenshot: Slate)

Then there’s Facebook M, an experimental bot, available so far only to an exclusive group of Bay Area beta-testers, that lives inside Facebook Messenger and promises to answer almost any question and fulfill almost any (legal) request. If the casual, what’s-up-BFF tone of its text messages rings eerily human, that’s because it is: M is powered by an uncanny pairing of artificial intelligence and anonymous human agents.

(Screenshot: Slate)

You might notice that most of these virtual assistants have female-sounding names and voices. Facebook M doesn’t have a voice—it’s text-only—but it was initially rumored to be called Moneypenny, a reference to a secretary from the James Bond franchise. And even Google’s voice is female by default. This is, to some extent, a reflection of societal sexism. But these bots’ apparent embrace of gender also highlights their aspiration to be anthropomorphized: They want—that is, the engineers that build them want—to interact with you like a person, not a machine. It seems to be working: Already people tend to refer to Siri, Alexa, and Cortana as “she,” not “it.”

That Silicon Valley’s largest tech companies have effectively humanized their software in this way, with little fanfare and scant resistance, represents a coup of sorts. Once we perceive a virtual assistant as human, or at least humanoid, it becomes an entity with which we can establish humanlike relations. We can like it, banter with it, even turn to it for companionship when we’re lonely. When it errs or betrays us, we can get angry with it and, ultimately, forgive it. What’s most important, from the perspective of the companies behind this technology, is that we trust it.

Should we?

* * *

Siri wasn’t the first digital voice assistant when Apple introduced it in 2011, and it may not have been the best. But it was the first to show us what might be possible: a computer that you talk to like a person, that talks back, and that attempts to do what you ask of it without requiring any further action on your part. Adam Cheyer, co-founder of the startup that built Siri and sold it to Apple in 2010, has said he initially conceived of it not as a search engine, but as a “do engine.”

If Siri gave us a glimpse of what is possible, it also inadvertently taught us about what wasn’t yet. At first, it often struggled to understand you, especially if you spoke into your iPhone with an accent, and it routinely blundered attempts to carry out your will. Its quick-witted rejoinders to select queries (“Siri, talk dirty to me”) raised expectations for its intelligence that were promptly dashed once you asked it something it hadn’t been hard-coded to answer. Its store of knowledge proved trivial compared with the vast information readily available via Google search. Siri was as much an inspiration as a disappointment.

Five years later, Siri has gotten smarter, if perhaps less so than one might have hoped. More importantly, the technology underlying it has drastically improved, fueled by a boom in the computer science subfield of machine learning. That has led to sharp improvements in speech recognition and natural language understanding, two separate but related technologies that are crucial to voice assistants.

(Photo: Reuters/Suzanne Plunkett. Luke Peters demonstrates Siri, an application which uses voice recognition and detection on the iPhone 4S, outside the Apple store in Covent Garden, London, Oct. 14, 2011.)

If a revolution in technology has made intelligent virtual assistants possible, what has made them inevitable is a revolution in our relationship to technology. Computers began as tools of business and research, designed to automate tasks such as math and information retrieval. Today they’re tools of personal communication, connecting us not only to information but to one another. They’re also beginning to connect us to all the other technologies in our lives: Your smartphone can turn on your lights, start your car, activate your home security system, and withdraw money from your bank. As computers have grown deeply personal, our relationship with them has changed. And yet the way they interact with us hasn’t quite caught up.

“It’s always been sort of appalling to me that you now have a supercomputer in your pocket, yet you have to learn to use it,” says Alan Packer, head of language technology at Facebook. “It seems actually like a failure on the part of our industry that software is hard to use.”

Packer is one of the people trying to change that. As a software developer at Microsoft, he helped to build Cortana. After it launched, he found his skills in heavy demand, especially among the two tech giants that hadn’t yet developed voice assistants of their own. One Thursday morning in December 2014, Packer was on the verge of accepting a top job at Amazon—“You would not be surprised at which team I was about to join,” he says—when Facebook called and offered to fly him to Menlo Park, California, for an interview the next day. He had an inkling of what Amazon was working on, but he had no idea why Facebook might be interested in someone with his skill set.

As it turned out, Facebook wanted Packer for much the same purpose that Microsoft and Amazon did: to help it build software that could make sense of what its users were saying and generate intelligent responses. Facebook may not have a device like the Echo or an operating system like Windows, but its own platforms are full of billions of people communicating with one another every day. If Facebook can better understand what they’re saying, it can further hone its News Feed and advertising algorithms, among other applications. More creatively, Facebook has begun to use language understanding to build artificial intelligence into its Messenger app. Now, if you’re messaging with a friend and mention sharing an Uber, a software agent within Messenger can jump in and order it for you while you continue your conversation.

In short, Packer says, Facebook is working on language understanding because Facebook is a technology company—and that’s where technology is headed. As if to underscore that point, Packer’s former employer this year headlined its annual developer conference by announcing plans to turn Cortana into a portal for conversational bots and integrate it into Skype, Outlook, and other popular applications. Microsoft CEO Satya Nadella predicted that bots will be the Internet’s next major platform, overtaking mobile apps the same way they eclipsed desktop computing.

* * *

Siri may not have been very practical, but people immediately grasped what it was. With Amazon’s Echo, the second major tech gadget to put a voice interface front and center, it was the other way around. The company surprised the industry and baffled the public when it released a device in November 2014 that looked and acted like a speaker—except that it didn’t connect to anything except a power outlet, and the only buttons were for power and mute. You control the Echo solely by voice, and if you ask it questions, it talks back. It was like Amazon had decided to put Siri in a black cylinder and sell it for $179. Except Alexa, the virtual intelligence software that powers the Echo, was far more limited than Siri in its capabilities. Who, reviewers wondered, would buy such a bizarre novelty gadget?

That question has faded as Amazon has gradually upgraded and refined the Alexa software, and the five-star Amazon reviews have since poured in. In the New York Times, Farhad Manjoo recently followed up his tepid initial review with an all-out rave: The Echo “brims with profound possibility,” he wrote. Amazon has not disclosed sales figures, but the Echo ranks as the third-best-selling gadget in its electronics section. Alexa may not be as versatile as Siri—yet—but it turned out to have a distinct advantage: a sense of purpose, and of its own limitations. Whereas Apple implicitly invites iPhone users to ask Siri anything, Amazon ships the Echo with a little cheat sheet of basic queries that it knows how to respond to: “Alexa, what’s the weather?” “Alexa, set a timer for 45 minutes.” “Alexa, what’s in the news?”

The cheat sheet’s effect is to lower expectations to a level that even a relatively simplistic artificial intelligence can plausibly meet on a regular basis. That’s by design, says Greg Hart, Amazon’s vice president in charge of Echo and Alexa. Building a voice assistant that can respond to every possible query is “a really hard problem,” he says. “People can get really turned off if they have an experience that’s subpar or frustrating.” So the company began by picking specific tasks that Alexa could handle with aplomb and communicating those clearly to customers.

At launch, the Echo had just 12 core capabilities. That list has grown steadily as the company has augmented Alexa’s intelligence and added integrations with new services, such as Google Calendar, Yelp reviews, Pandora streaming radio, and even Domino’s delivery. The Echo is also becoming a hub for connected home appliances: “ ‘Alexa, turn on the living room lights’ never fails to delight people,” Hart says.

When you ask Alexa a question it can’t answer or say something it can’t quite understand, it fesses up: “Sorry, I don’t know the answer to that question.” That makes it all the more charming when you test its knowledge or capabilities and it surprises you by replying confidently and correctly. “Alexa, what’s a kinkajou?” I asked on a whim one evening, glancing up from my laptop while reading a news story about an elderly Florida woman who woke up one day with a kinkajou on her chest. Alexa didn’t hesitate: “A kinkajou is a rainforest mammal of the family Procyonidae … ” Alexa then proceeded to list a number of other Procyonidae to which the kinkajou is closely related. “Alexa, that’s enough,” I said after a few moments, genuinely impressed. “Thank you,” I added.

“You’re welcome,” Alexa replied, and I thought for a moment that she—it—sounded pleased.

As delightful as it can seem, the Echo’s magic comes with some unusual downsides. In order to respond every time you say “Alexa,” it has to be listening for the word at all times. Amazon says it only stores the commands that you say after you’ve said the word Alexa and discards the rest. Even so, the enormous amount of processing required to listen for a wake word 24/7 is reflected in the Echo’s biggest limitation: It only works when it’s plugged into a power outlet. (Amazon’s newest smart speakers, the Echo Dot and the Tap, are more mobile, but one sacrifices the speaker and the other the ability to respond at any time.)
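The listening model described here — hear everything, keep only what follows the wake word — can be illustrated with a toy sketch. Real devices do this with DSP over short audio frames on-device; this hypothetical version works on a stream of words, and the buffer size and end-of-command rules are invented.

```python
# Always-on listening that retains speech only after the wake word.
from collections import deque

WAKE_WORD = "alexa"

def capture_commands(token_stream, max_command_len=8):
    """Discard everything until the wake word, then record one command."""
    recent = deque(maxlen=2)          # tiny rolling buffer, constantly overwritten
    commands, recording = [], None
    for token in token_stream:
        if recording is not None:
            recording.append(token)
            # end the command at a pause marker or a length cap
            if token.endswith(".") or len(recording) == max_command_len:
                commands.append(" ".join(recording))
                recording = None
        elif token.lower() == WAKE_WORD:
            recording = []            # wake word heard: start keeping tokens
        else:
            recent.append(token)      # pre-wake-word speech is never stored
    if recording:
        commands.append(" ".join(recording))
    return commands

speech = "we were just chatting Alexa tell me a joke . and more chatting".split()
print(capture_commands(speech))  # -> ['tell me a joke .']
```

Note that the private chatter before and after the wake word never leaves the rolling buffer; the always-on part is the detector, not the recorder.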

Even if you trust Amazon to rigorously protect and delete all of your personal conversations from its servers—as it promises it will if you ask it to—Alexa’s anthropomorphic characteristics make it hard to shake the occasional sense that it’s eavesdropping on you, Big Brother–style. I was alone in my kitchen one day, unabashedly belting out the Fats Domino song “Blueberry Hill” as I did the dishes, when it struck me that I wasn’t alone after all. Alexa was listening—not judging, surely, but listening all the same. Sheepishly, I stopped singing.

* * *

The notion that the Echo is “creepy” or “spying on us” might be the most common criticism of the device so far. But there’s a more fundamental problem. It’s one that is likely to haunt voice assistants, and those who rely on them, as the technology evolves and bores its way more deeply into our lives.

The problem is that conversational interfaces don’t lend themselves to the sort of open flow of information we’ve become accustomed to in the Google era. By necessity they limit our choices—because their function is to make choices on our behalf.

For example, a search for “news” on the Web will turn up a diverse and virtually endless array of possible sources, from Fox News to Yahoo News to CNN to Google News, which is itself a compendium of stories from other outlets. But ask the Echo, “What’s in the news?” and by default it responds by serving up a clip of NPR News’s latest hourly update, which it pulls from the streaming radio service TuneIn. Which is great—unless you don’t happen to like NPR’s approach to the news, or you prefer a streaming radio service other than TuneIn. You can change those defaults somewhere in the bowels of the Alexa app, but Alexa never volunteers that information. Most people will never even know it’s an option. Amazon has made the choice for them.

And how does Amazon make that sort of choice? The Echo’s cheat sheet doesn’t tell you that, and the company couldn’t give me a clear answer.

Alexa does take care to mention before delivering the news that it’s pulling the briefing from NPR News and TuneIn. But that isn’t always the case with other sorts of queries.

Let’s go back to our friend the kinkajou. In my pre-Echo days, my curiosity about an exotic animal might have sent me to Google via my laptop or phone. Just as likely, I might have simply let the moment of curiosity pass and not bothered with a search. Looking something up on Google involves just enough steps to deter us from doing it in a surprising number of cases. One of the great virtues of voice technology is to lower that barrier to the point where it’s essentially no trouble at all. Having an Echo in the room when you’re struck by curiosity about kinkajous is like having a friend sitting next to you who happens to be a kinkajou expert. All you have to do is say your question out loud, and Alexa will supply the answer. You literally don’t have to lift a finger.

That is voice technology’s fundamental advantage over all the human-computer interfaces that have come before it: In many settings, including the home, the car, or on a wearable gadget, it’s much easier and more natural than clicking, typing, or tapping. In the logic of today’s consumer technology industry, that makes its ascendance in those realms all but inevitable.

But consider the difference between Googling something and asking a friendly voice assistant. When I Google “kinkajou,” I get a list of websites, ranked according to an algorithm that takes into account all sorts of factors that correlate with relevance and authority. I choose the information source I prefer, then visit its website directly—an experience that could help to further shade or inform my impression of its trustworthiness. Ultimately, the answer comes not from Google, per se, but directly from some third-party authority, whose credibility I can evaluate as I wish.

A voice-based interface is different. The response comes one word at a time, one sentence at a time, one idea at a time. That makes it very easy to follow, especially for humans who have spent their whole lives interacting with one another in just this way. But it makes it very cumbersome to present multiple options for how to answer a given query. Imagine for a moment what it would sound like to read a whole Google search results page aloud, and you’ll understand why no one builds a voice interface that way.
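The contrast can be sketched in a few lines of toy code. Nothing here reflects any real search or assistant API; the sources, scores, and function names are invented purely to illustrate the structural difference: a screen can present the whole ranked list, while a voice interface realistically collapses it to the single top answer.

```python
# Toy illustration (invented, not any real API): the same ranked results,
# delivered two ways.

def rank_results(query, sources):
    """Rank candidate sources by a made-up relevance score."""
    return sorted(sources, key=lambda s: s["relevance"], reverse=True)

def screen_interface(query, sources):
    # A visual interface can present every option at once;
    # the user picks the source and judges its credibility.
    return [s["name"] for s in rank_results(query, sources)]

def voice_interface(query, sources):
    # A voice interface collapses the ranking to one answer;
    # the assistant, not the user, has made the choice.
    return rank_results(query, sources)[0]["answer"]

sources = [
    {"name": "Wikipedia", "relevance": 0.9,
     "answer": "A kinkajou is a rainforest mammal related to raccoons."},
    {"name": "Encyclopedia Britannica", "relevance": 0.8,
     "answer": "Kinkajou: an arboreal carnivore of Central America."},
    {"name": "A zoo blog", "relevance": 0.3,
     "answer": "Kinkajous make terrible pets."},
]

print(screen_interface("kinkajou", sources))  # every option, user chooses
print(voice_interface("kinkajou", sources))   # one answer, assistant chooses
```

The point of the sketch is that the single-answer design decision is baked into the interface itself, before any question of which source gets picked.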

That’s why voice assistants tend to answer your question by drawing from a single source of their own choosing. Alexa’s confident response to my kinkajou question, I later discovered, came directly from Wikipedia, which Amazon has apparently chosen as the default source for Alexa’s answers to factual questions. The reasons seem fairly obvious: It’s the world’s most comprehensive encyclopedia, its information is free and public, and it’s already digitized. What it’s not, of course, is infallible. Yet Alexa’s response to my question didn’t begin with the words, “Well, according to Wikipedia … ” She—it—just launched into the answer, as if she (it) knew it off the top of her (its) head. If a human did that, we might call it plagiarism.

The sin here is not merely academic. By not consistently citing the sources of its answers, Alexa makes it difficult to evaluate their credibility. It also implicitly turns Alexa into an information source in its own right, rather than a guide to information sources, because the only entity in which we can place our trust or distrust is Alexa itself. That’s a problem if its information source turns out to be wrong.

The constraints on choice and transparency might not bother people when Alexa’s default source is Wikipedia, NPR, or TuneIn. It starts to get a little more irksome when you ask Alexa to play you music, one of the Echo’s core features. “Alexa, play me the Rolling Stones” will queue up a shuffle playlist of Rolling Stones songs available through Amazon’s own streaming music service, Amazon Prime Music—provided you’re paying the $99 a year required to be an Amazon Prime member. Otherwise, the most you’ll get out of the Echo is 20-second samples of songs available for purchase. Want to guess which online retail giant will be your sole option for purchasing those songs?


Amazon’s response is that Alexa does give you options and cite its sources—in the Alexa app, which keeps a record of your queries and its responses. When the Echo tells you what a kinkajou is, you can open the app on your phone and see a link to the Wikipedia article, as well as an option to search Bing. Amazon adds that Alexa is meant to be an “open platform” that allows anyone to connect to it via an API. The company is also working with specific partners to integrate their services into Alexa’s repertoire. So, for instance, if you don’t want to be limited to playing songs from Amazon Prime Music, you can now take a series of steps to link the Echo to a different streaming music service, such as Spotify Premium. Amazon Prime Music will still be the default, though: You’ll only get Spotify if you specify “from Spotify” in your voice command.

What’s not always clear is how Amazon chooses its defaults and its partners and what motivations might underlie those choices. Ahead of the 2016 Super Bowl, Amazon announced that the Echo could now order you a pizza. But that pizza would come, at least for the time being, from just one pizza-maker: Domino’s. Want a pizza from Little Caesars instead? You’ll have to order it some other way.

To Amazon’s credit, its choice of pizza source is very transparent. To use the pizza feature, you have to utter the specific command, “Alexa, open Domino’s and place my Easy Order.” The clunkiness of that command is no accident. It’s Amazon’s way of making sure that you don’t order a pizza by accident and that you know where that pizza is coming from. But it’s unlikely Domino’s would have gone to the trouble of partnering with Amazon if it didn’t think it would result in at least some number of people ordering Domino’s for their Super Bowl parties rather than Little Caesars.

None of this is to say that Amazon and Domino’s are going to conspire to monopolize the pizza industry anytime soon. There are obviously plenty of ways to order a pizza besides doing it on an Echo. Ditto for listening to the news, the Rolling Stones, a book, or a podcast. But what about when only one company’s smart thermostat can be operated by Alexa? If you come to rely on Alexa to manage your Google Calendar, what happens when Amazon and Google have a falling out?
When you say “Hello” to Alexa, you’re signing up for her party. Nominally, everyone’s invited. But Amazon has the power to ensure that its friends and business associates are the first people you meet.

* * *


These concerns might sound rather distant—we’re just talking about niche speakers connected to niche thermostats, right? The coming sea change feels a lot closer once you think about the other companies competing to make digital assistants your main portal to everything you do on your computer, in your car, and on your phone. Companies like Google.

Google may be positioned best of all to capitalize on the rise of personal A.I. It also has the most to lose. From the start, the company has built its business around its search engine’s status as a portal to information and services. Google Now—which does things like proactively checking the traffic and alerting you when you need to leave for a flight, even when you didn’t ask it to—is a natural extension of the company’s strategy.

As early as 2009, Google began to work on voice search and what it calls “conversational search,” using speech recognition and natural language understanding to respond to questions phrased in plain language. More recently, it has begun to combine that with “contextual search.” For instance, as Google demonstrated at its 2015 developer conference, if you’re listening to Skrillex on your Android phone, you can now simply ask, “What’s his real name?” and Google will intuit that you’re asking about the artist. “Sonny John Moore,” it will tell you, without ever leaving the Spotify app.
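The mechanism behind that Skrillex demo can be suggested with a deliberately naive sketch. Everything below is invented for illustration—the context store, the tiny knowledge table, and the pronoun rule are not Google’s actual implementation—but it shows the core idea of contextual search: the assistant remembers what you’re currently looking at or listening to, and uses that to resolve pronouns in the next question.

```python
# Toy sketch (invented here) of contextual search: recent context
# fills in the referent of a pronoun in the user's query.

context = {"entity": "Skrillex"}  # e.g., the artist playing right now

# A minimal stand-in for a knowledge base.
knowledge = {
    ("Skrillex", "real name"): "Sonny John Moore",
}

def contextual_answer(question, context):
    q = question.lower()
    # Naive pronoun resolution: "his"/"her"/"their" -> current entity.
    if any(p in q for p in ("his ", "her ", "their ")):
        subject = context["entity"]
    else:
        subject = question.split()[0]
    for (entity, attribute), answer in knowledge.items():
        if entity == subject and attribute in q:
            return answer
    return "I don't know."

print(contextual_answer("What's his real name?", context))
```

A real system resolves references with far richer signals (screen contents, dialogue history, entity linking), but the shape of the trick is the same: the query alone is ambiguous, and context disambiguates it.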

It’s no surprise, then, that Google is rumored to be working on two major new products—an A.I.-powered messaging app or agent and a voice-powered household gadget—that sound a lot like Facebook M and the Amazon Echo, respectively. If something is going to replace Google’s on-screen services, Google wants to be the one that does it.

So far, Google has made what seems to be a sincere effort to win the A.I. assistant race without sacrificing the virtues—credibility, transparency, objectivity—that made its search page such a dominant force on the Web. (It’s worth recalling: A big reason Google vanquished AltaVista was that it didn’t bend its search results to its own vested interests.) Google’s voice search does generally cite its sources. And it remains primarily a portal to other sources of information, rather than a platform that pulls in content from elsewhere. The downside to that relatively open approach is that when you say “hello” to Google voice search, it doesn’t say hello back. It gives you a link to the Adele song “Hello.” Even then, Google isn’t above playing favorites with the sources of information it surfaces first: That link goes not to Spotify, Apple Music, or Amazon Prime Music, but to YouTube, which Google owns. The company has weathered antitrust scrutiny over allegations that this amounted to preferential treatment. Google’s defense was that it puts its own services and information sources first because its users prefer them.

* * *


If there’s a consolation for those concerned that intelligent assistants are going to take over the world, it’s this: They really aren’t all that intelligent. Not yet, anyway.

The 2013 movie Her, in which a mobile operating system gets to know its user so well that they become romantically involved, paints a vivid picture of what the world might look like if we had the technology to carry Siri, Alexa, and the like to their logical conclusion. The experts I talked to, who are building that technology today, almost all cited Her as a reference point—while pointing out that we’re not going to get there anytime soon.

Google recently rekindled hopes—and fears—of super-intelligent A.I. when its AlphaGo software defeated the world champion in a historic Go match. As momentous as the achievement was, designing an algorithm to win even the most complex board game is trivial compared with designing one that can understand and respond appropriately to anything a person might say. That’s why, even as artificial intelligence is learning to recommend songs that sound like they were hand-picked by your best friend or navigate city streets more safely than any human driver, A.I. still has to resort to parlor tricks—like posing as a 13-year-old struggling with a foreign language—to pass as human in an extended conversation. The world is simply too vast, language too ambiguous, the human brain too complex for any machine to model it, at least for the foreseeable future.

But if we won’t see a true full-service A.I. in our lifetime, we might yet witness the rise of a system that can approximate some of its capabilities—comprising not a single, humanlike Her, but a million tiny hims carrying out small, discrete tasks handily. In January, the Verge’s Casey Newton made a compelling argument that our technological future will be filled not with websites, apps, or even voice assistants, but with conversational messaging bots. Like voice assistants, these bots rely on natural language understanding to carry on conversations with us. But they will do so via the medium that has come to dominate online interpersonal interaction, especially among the young people who are the heaviest users of mobile devices: text messaging. For example, Newton points to “Lunch Bot,” a relatively simple agent that lived in the wildly popular workplace chat program Slack and existed for a single, highly specialized purpose: to recommend the best place for employees to order their lunch from on a given day. It soon grew into a venture-backed company called Howdy.


I have a bot in my own life that serves a similarly specialized yet important role. While researching this story, I ran across a company called X.ai whose mission is to build the ultimate virtual scheduling assistant. It’s called Amy Ingram, and if its initials don’t tip you off, you might interact with it several times before realizing it’s not a person. (Unlike some other intelligent assistant companies, X.ai gives you the option to choose a male name for your assistant instead: Mine is Andrew Ingram.) Though it’s backed by some impressive natural language tech, X.ai’s bot does not attempt to be a know-it-all or do-it-all; it doesn’t tell jokes, and you wouldn’t want to date it. It asks for access to just one thing—your calendar. And it communicates solely by email. Just cc it on any thread in which you’re trying to schedule a meeting or appointment, and it will automatically step in and take over the back-and-forth involved in nailing down a time and place. Once it has agreed on a time with whomever you’re meeting—or, perhaps, with his or her own assistant, whether human or virtual—it will put all the relevant details on your calendar. Have your A.I. cc my A.I.

For these bots, the key to success is not growing so intelligent that they can do everything. It’s staying specialized enough that they don’t have to.

“We’ve had this A.I. fantasy for almost 60 years now,” says Dennis Mortensen, X.ai’s founder and CEO. “At every turn we thought the only outcome would be some human-level entity where we could converse with it like you and I are [conversing] right now. That’s going to continue to be a fantasy. I can’t see it in my lifetime or even my kids’ lifetime.” What is possible, Mortensen says, is “extremely specialized, verticalized A.I.s that understand perhaps only one job, but do that job very well.”

Yet those simple bots, Mortensen believes, could one day add up to something more. “You get enough of these agents, and maybe one morning in 2045 you look around and that plethora—tens of thousands of little agents—once they start to talk to each other, it might not look so different from that A.I. fantasy we’ve had.”

That might feel a little less scary. But it still leaves problems of transparency, privacy, objectivity, and trust—questions that are not new to the world of personal technology and the Internet but are resurfacing in fresh and urgent forms. A world of conversational machines is one in which we treat software like humans, letting them deeper into our lives and confiding in them more than ever. It’s one in which the world’s largest corporations know more about us, hold greater influence over our choices, and make more decisions for us than ever before. And it all starts with a friendly “Hello.”

 

www.businessinsider.com/ai-assistants-are-taking-over-2016-4

The brightest minds in AI research – Machine Learning

In AI research, the brightest minds aren’t driven by the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.

http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/

Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free


THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December.

But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.

How many dollars is “borderline crazy”? Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value.

OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.

Zaremba says those borderline crazy offers actually turned him off—despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much of an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.”

That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.

This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
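Toolkits like the one described here are built around a simple agent-environment loop: the agent observes, acts, receives a reward, and repeats. The sketch below is not OpenAI’s actual release—the `GuessEnv` environment and its `reset`/`step` interface are invented for illustration—but it shows the loop that such toolkits standardize, whether the environment is a guessing game, an Atari screen, or a Go board.

```python
import random

# A minimal, invented sketch of the agent-environment loop at the
# heart of reinforcement learning toolkits.

random.seed(0)  # fixed seed so the toy run is reproducible

class GuessEnv:
    """Toy environment: guess a hidden number between 0 and 9."""
    def reset(self):
        self.target = random.randrange(10)
        return 0  # initial observation

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        done = action == self.target  # episode ends on a correct guess
        return action, reward, done   # observation, reward, episode over?

env = GuessEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(100):  # the agent acts, observes, and repeats
    action = random.randrange(10)        # a (very dumb) random policy
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()

print("reward collected by random play:", total_reward)
```

The learning part of reinforcement learning is everything that replaces the random policy with something that improves as the rewards come in; the toolkit’s job is to supply the environments and the loop.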

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

AI Everywhere
Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language—the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.

But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project.

OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on the Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.

Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at the high-profile Silicon Valley digital payments startup Stripe, another Y Combinator company. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.”

Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.

The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle
Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.

Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project. “An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”

By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”

The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”

As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.

The Limits of Openness
All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”

According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).

Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”

Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.

But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”

The Super-Intelligence Problem
When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. “We think it’s far more likely that many, many AIs will work to stop the occasional bad actors,” Altman says.

But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe. “If you have a button that could do bad things to the world,” Bostrom says, “you don’t want to give it to everyone.” If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it’s different from a Google or a Facebook.

He does say that the not-for-profit status of OpenAI could change things—though not necessarily. The real power of the project, he says, is that it can indeed provide a check for the likes of Google and Facebook. “It can reduce the probability that super-intelligence would be monopolized,” he says. “It can remove one possible reason why some entity or group would have radically better AI than everyone else.”

But as the philosopher explains in a new paper, the primary effect of an outfit like OpenAI—an outfit intent on freely sharing its work—is that it accelerates the progress of artificial intelligence, at least in the short term. And it may speed progress in the long term as well, provided that it, for altruistic reasons, “opts for a higher level of openness than would be commercially optimal.”

“It might still be plausible that a philanthropically motivated R&D funder would speed progress more by pursuing open science,” he says.

Like Xerox PARC
In early January, Brockman’s nine AI researchers met up at his apartment in San Francisco’s Mission District. The project was so new that they didn’t even have white boards. (Can you imagine?) They bought a few that day and got down to work.

Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called “unsupervised learning”—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.
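The “repeat and track what works” idea can be made concrete with a classic toy problem, the two-armed bandit. This sketch is an illustration invented for this article, not OpenAI’s code: the agent pulls one of two slot-machine arms with hidden payoff rates, keeps a running estimate of each arm’s value, and gradually favors the one that pays off more.

```python
import random

# Toy sketch of the core reinforcement learning idea: try actions
# repeatedly, track which ones pay off, and favor the winners.

random.seed(42)
payoff = {"left": 0.3, "right": 0.7}   # hidden win probabilities
value = {"left": 0.0, "right": 0.0}    # the agent's running estimates
counts = {"left": 0, "right": 0}

for trial in range(1000):
    # Mostly exploit the best-looking arm, occasionally explore.
    if random.random() < 0.1:
        arm = random.choice(["left", "right"])
    else:
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < payoff[arm] else 0.0
    counts[arm] += 1
    # Incrementally update the running average reward for this arm.
    value[arm] += (reward - value[arm]) / counts[arm]

print("learned values:", value)  # "right" should end up ranked above "left"
```

No one ever labels an arm as “good” the way a cat photo is labeled “cat”; the agent discovers which arm is good purely from the feedback of its own repeated trials. That is the distinction between reinforcement learning and the supervised learning described above.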

“If you have really good unsupervised learning, machines would be able to learn from all this knowledge on the Internet—just like humans learn by looking around—or reading books,” Brockman says.

He envisions OpenAI as the modern incarnation of Xerox PARC, the tech research lab that thrived in the 1970s. Just as PARC’s largely open and unfettered research gave rise to everything from the graphical user interface to the laser printer to object-oriented programming, Brockman and crew seek to delve even deeper into what we once considered science fiction. PARC was owned by, yes, Xerox, but it fed so many other companies, most notably Apple, because people like Steve Jobs were privy to its research. At OpenAI, Brockman wants to make everyone privy to its research.

This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers, including Ian Goodfellow, another former senior researcher on the Google Brain team. “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”

Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.

Microsoft Research, Seeing AI

The Real Reason Microsoft Is Building So Many Computer Vision Apps

Turns out Microsoft isn’t as interested in rating mustaches or guessing ages as it is in helping the visually impaired navigate the world.

For the past few years, Microsoft has been steadily releasing goofy little apps that use neural networks to perform tricks ranging from guessing your age and rating your mustache to describing photographs (often comically) and even telling you what kind of dog you look like.

But why? Entertaining though these apps are, they all seemed a little random—until a couple of weeks ago at Build 2016, when Microsoft revealed that these experiments are more than just a sum of their parts. In fact, they represent stepping stones on the road leading to Seeing AI, an augmented-reality project for the visually impaired that aims to give the blind the next best thing to sight: information.

Built by Microsoft Research, Seeing AI is an app that lives either on smartphones or Pivothead-brand smart glasses. It takes all of the tricks Microsoft developed using those “goofy” machine learning apps and combines them into a digital Swiss Army knife for the blind. By helping the visually impaired user line up and snap a photograph using their device, the app can tell them what they’re “looking” at; it can read menus or signs, tell them how old the person they’re talking to is, or even describe what’s happening right in front of them—say, that they’re in a park, watching a golden retriever catch an orange frisbee. Presumably, it has some excellent mustache detection skills, too.

“This isn’t the first app for the blind,” admits project lead Anirudh Koul. “But those apps are extremely limited.” One app might be dedicated just to helping you know what color you’re looking at. Another might read menus and signs, or tell you what box you’re holding in the grocery store based on the barcode. There are even photography apps for the blind.

But the problem with all these apps is fragmentation. For a blind person, using them seamlessly is like having to screw in a different set of eyes every time you want to read a paper or identify a color. Seeing AI can do all of the above—and more—all within the same app.

Of course, having so much functionality introduces its own design challenges. According to Margaret Mitchell, Seeing AI’s vision-to-language guru, context is key when trying to decode visual information to text. “If you’re outside, for example, you don’t want it to describe the grass as a green carpet any more than you want it to describe a blue ceiling as a clear sky when you’re indoors,” she says. It’s also challenging to know how much information Seeing AI should give users at any given moment. Sometimes it might be more useful to list what’s around a user, while other times a scene description is better, so knowing when to automatically switch between modes becomes important.

These are just some of the problems the Seeing AI team is trying to work out before their software becomes a consumer-facing product. But already, Seeing AI’s software is proving indispensable to Microsoft software engineer Saqib Shaikh, who lost his sight at the age of seven. He has helped the Seeing AI team test and tweak its software, as well as identify features that sighted people might not think of as useful, but which the visually impaired really need. For example: finding an empty seat in a restaurant. “His guidance has been amazing,” says Mitchell. “He can exactly identify what we should be returning and why.”

Although apps that use its machine-learning algorithms are routinely released by Microsoft Garage, neither Koul nor Mitchell could say when Seeing AI would be available for everyone to download. They only say it is a “research project under development.” But this isn’t just some silly web toy. When released, Seeing AI will be an app that can fundamentally change a person’s life, while continuing the grand tradition of accessibility pushing design forward in exciting directions.

www.fastcodesign.com/3058905/the-real-reason-microsoft-is-building-so-many-computer-vision-apps

Is Fashion Ready for the AI Revolution?

If artificial intelligence has its way, discounting could disappear, thanks to software that tells retailers exactly what and how many products to buy, and when to put them on sale to sell them at full price. Online shopping could become a conversation, where the shopper describes the dress of their dreams, and, in seconds, an AI-powered search engine tracks down the closest match. Designers, merchandisers and buyers could all work alongside AI, to predict what customers want to wear, before they even know themselves.

In the last few years, a trifecta of cheap, ubiquitous, powerful computing; big data; and the development of deep learning have triggered a revolution in artificial intelligence. The computing devices that now fill our everyday lives generate large data sets, which “deep learning” algorithms analyse to find trends, make predictions and perform specific tasks, such as identifying specific objects in an image. The more data presented to the algorithm, the more it “learns” to do a task effectively.

Earlier this year, in a blog post titled What’s Next in Computing?, Chris Dixon, partner at the venture capital firm Andreessen Horowitz, wrote, “Many of the papers, data sets, and software tools related to deep learning have been open sourced. This has had a democratising effect, allowing individuals and small organisations to build powerful applications.” As a result, AI might “finally be entering a golden age,” he wrote.

These developments have provoked an AI arms race. Companies like Google and Apple are snapping up AI start-ups, and in the last year, milestones in the field have arrived faster than previously expected, such as last month, when Google’s AlphaGo program beat a human champion at Go, a strategy board game considered more complex than chess.

Already, big businesses are using AI — Kensho, an AI-powered data-crunching platform, is automating finance jobs at Goldman Sachs, while Forbes uses AI to automate basic financial news stories. IBM’s Watson — a set of algorithms and software that is the company’s core AI product — is available as a cloud service, enabling research teams to rapidly analyse large amounts of data, such as millions of scientific papers, to test hypotheses and discover patterns. By 2020, the market for machine learning applications will reach $40 billion, according to International Data Corporation, a market research firm specialising in information technology.

“No area of life or business will be insulated from AI, in the same way that there’s no part of society that hasn’t been touched by computers or the Internet,” Kenneth Cukier, data editor at The Economist and author of books including Big Data: A Revolution that Will Transform How We Work, Live and Think, told BoF. “Today it seems shocking because it’s new. But in time, AI will fade into the background as just the way things are done.”

Because AI offers a cheaper, faster way of doing many tasks that companies currently employ humans to do, many predict it will radically alter industries from transportation, to healthcare, to finance. In fashion, as in other industries, driverless trucks will likely reduce companies’ logistics costs, and software like that used by Forbes could be used to write formulaic text, such as product descriptions on e-commerce sites.

But for fashion, some of the biggest opportunities are in aligning supply and demand, scaling personal customer service, and assisting designers.

Aligning supply and demand

Currently, fashion brands and retailers work with a limited amount of data to predict what products to order and when to discount or replenish them. If they predict wrong, the result is lost income due to markdowns, waste and popular items selling out. By analysing large amounts of data — say, the browsing and shopping history of every single one of a fashion brand’s online customers, as well as those of its competitors — AI can tell a retailer how to align product drops to match demand, and even how to display products in a store to sell as many as possible.
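As a toy illustration of this supply-and-demand alignment (the moving-average rule and the thresholds are invented for the example, not any retailer’s actual model), a forecaster might average a product’s recent weekly sales and compare the forecast with the stock on hand:

```python
def forecast_and_act(weekly_sales, stock, window=3):
    """Forecast next week's demand for one product and suggest an action."""
    recent = weekly_sales[-window:]
    demand = sum(recent) / len(recent)   # naive moving-average forecast
    if stock < 1.5 * demand:
        return "replenish"               # popular item about to sell out
    if stock > 4 * demand:
        return "markdown"                # overstocked relative to demand
    return "hold"
```

A real system would learn these rules from browsing histories, competitor assortments and seasonality rather than hard-code them, but the decision it feeds back to the buyer has this same shape.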

AI’s ability to make predictions like these has particular implications for a trend-driven industry like fashion. Today, the fashion market is visible online: an AI can crawl e-commerce sites to see which products are selling; it can analyse consumer data to learn which colours or materials customers in a specific country — or even city — are buying; and it can scoop up swathes of information from social media to identify trends and microtrends. This data — which was not previously available — could help brands be first to market with styles that are likely to become mainstream trends.

Edited, a data analytics company specialising in fashion, is already doing this. Edited’s software has “learned” to recognise apparel products in images, and uses natural language processing to classify these products. Edited let this loose on a bank of data on 60 million fashion products, collected from retailers and brands in over 30 countries, in over 35 languages: the result is a searchable database of organised, structured information on each of these products.
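The pipeline described here, raw listings in, structured searchable records out, can be sketched with hand-written keyword rules standing in for Edited’s learned image and language models. The categories and record fields below are invented for illustration:

```python
# Hand-written rules as a stand-in for a learned text classifier.
CATEGORY_KEYWORDS = {
    "dress": ["dress", "gown"],
    "shoes": ["sneaker", "boot", "heel"],
    "outerwear": ["jacket", "coat", "parka"],
}

def structure_listing(title, price, country):
    """Turn one scraped product listing into a structured, searchable record."""
    text = title.lower()
    category = next(
        (cat for cat, words in CATEGORY_KEYWORDS.items()
         if any(word in text for word in words)),
        "other",
    )
    return {"title": title, "category": category,
            "price": price, "country": country}
```

Run over millions of listings, records like these become the kind of database that can answer questions such as “average price of dresses in France this month” in seconds.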

“We can process the data in seconds. No one could ever do it manually,” says Geoff Watts, chief executive officer of the company. Brands that work with Edited “usually start by analysing their competitors’ historical pricing and assortment data to make more strategic decisions, ultimately leading to better sales, stronger inventory management and less discounting,” he says.

Ganesh Subramanian, former chief operating officer of e-commerce giant Myntra, and now co-founder of Stylumia, an AI-powered tool for fashion professionals, agrees that AI could stop fashion companies making important decisions in the dark. “A trend is nothing but a movement which has a beginning and a gradual adoption,” he says. Like Edited, Stylumia uses AI to make sense of a sea of data, from videos, e-commerce sites, social media, etc. “We can not only spot trends, but also come out with what is the relevant timing for [brands and retailers] to adopt,” he says.

Scaling personal service

In the days when luxury goods could only be bought in a few physical boutiques, one-to-one customer service was at the core of the industry. The Internet changed that dramatically, giving customers a seamless — but often impersonal — way to trawl thousands of products and purchase without exchanging a word. Could AI deliver that original one-to-one service at scale?

One way to do this is through chat bots, which can exchange messages, stories and information with humans. Already, Microsoft’s XiaoIce chatbot is being used by 40 million people on the Chinese microblogging platform Weibo. (Not all attempts to have bots interact with humans have been so successful: when Microsoft unleashed Tay, another chat bot, on Twitter last month, the bot “learned” from other users and rapidly began tweeting offensive messages.)

Machine learning can also enable brands to finely personalise their offerings to each market, or even, each individual customer. Thread, an online personal styling service, combines human stylists with machine learning algorithms. The AI crunches data like what human stylists think would suit an individual user, where they live and what the weather is like there, as well as the user’s ratings of products on the app, which items they click, and how customers with similar purchasing habits responded to product recommendations. The AI then trawls through 200,000 fashion products and makes a judgement on what products to recommend.
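The blending of signals Thread describes can be sketched as a weighted score per product. Everything here, the signal names, the weights and the toy catalogue, is invented for illustration; Thread’s actual algorithm is not public:

```python
# Invented per-user signal tables; a real system would learn these values
# from stylist input, clicks and similar customers' purchases.
user = {
    "stylist_rating":        {"navy blazer": 0.9, "red scarf": 0.2},
    "click_rate":            {"navy blazer": 0.6, "white tee": 0.8},
    "similar_customer_rate": {"white tee": 0.5, "red scarf": 0.4},
}

def score(product, user):
    """Blend several behavioural signals into one relevance score."""
    return (0.5 * user["stylist_rating"].get(product, 0.0)
            + 0.3 * user["click_rate"].get(product, 0.0)
            + 0.2 * user["similar_customer_rate"].get(product, 0.0))

def recommend(products, user, k=2):
    """Rank the catalogue by score and keep the top k items."""
    return sorted(products, key=lambda p: score(p, user), reverse=True)[:k]

products = ["navy blazer", "red scarf", "white tee"]
```

The point the article makes about scale falls out of the shape of the code: nothing in `recommend` gets harder when the catalogue grows from three products to 200,000, or when the weights are refit nightly from fresh behaviour.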

“Humans are limited in many ways,” says Thread founder and chief executive officer, Kieran O’Neill. Not only can AI process a vast amount of data — it can also “remember your preferences in a way that it’s just not practical for humans to do. A computer remembers everything,” he says. Michele Goetz, principal analyst covering cognitive computing and data at Forrester, agrees: “That’s where I think AI shines, being able to scale insight.”

IBM’s Watson — which is working with over 500 partners in industries including retail — has partnered with The North Face to offer “guided shopping” online. The AI asks shoppers questions on factors such as gender, time of year and technical product details, to deliver tailored recommendations. “Online shopping can be overwhelming. There are so many choices and products from so many different sources,” says Keith Mercier, ecosystem manager of Watson. AI, he says, “can help retailers make sense of massive amounts of unstructured data to improve and personalise the online shopping experience.”

Image recognition apps such as Snap Fashion and ASAP54 are also harnessing AI to build search engines for fashion. In theory, a user can snap a picture of someone on the street wearing a dress they like, or even something as abstract as a painting, and an image-recognition search engine will search a huge database of shoppable products and serve up similar items. When BoF tested these products, the search results were far from perfect, but Kieran O’Neill bets that “in the next three years it will become pretty good.”
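Under the hood, this kind of visual search typically reduces to a nearest-neighbour lookup: a network turns each product photo into a vector of numbers, and the engine returns the catalogue items whose vectors are closest to the query’s. The three-number “embeddings” and catalogue entries below are invented stand-ins for what a real network would extract:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented toy embeddings; a real engine would index millions of vectors
# extracted from product photos by a neural network.
catalogue = {
    "floral midi dress": [0.9, 0.1, 0.4],
    "striped shirt":     [0.1, 0.8, 0.2],
    "denim jacket":      [0.2, 0.3, 0.9],
}

def most_similar(query_vec, catalogue):
    """Return the catalogue item whose vector is closest to the query."""
    return max(catalogue, key=lambda name: cosine(query_vec, catalogue[name]))
```

The imperfect results BoF saw are mostly a question of how well the embedding captures what humans consider “similar”, which is why O’Neill expects the quality to improve as the models do.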

AI-assisted Design

“There are AI systems today that compose music, write stories, and create artwork that no one can tell is machine-generated. So fashion design is surely not beyond AI’s capabilities,” says Pedro Domingos, author of The Master Algorithm, which predicts the revolutionary impact of machine learning. “What will likely happen, however, is not that AI will completely replace designers, but that it will become an indispensable tool for them.”

In the same way that the work of architects like Frank Gehry and Zaha Hadid relies on computer modelling, “fashion designers armed with AIs will be similarly able to come up with radical new ideas: AI will amplify their creativity rather than replace it,” reasons Domingos.

“AI will absolutely challenge and replace designers,” counters Kenneth Cukier. “Let’s get real — lots of design is trial and error or boring, repetitive work. AI can help with both by making more accurate predictions of what designs will work and taking over some of the repetitive tasks.”

Approaching AI now

Some believe fashion brands should strike early and invest. “They certainly need to have in-house AI teams, like other companies, whether by building them from scratch or by acquiring start-ups,” advises Domingos. “Those who wait and see risk falling behind, particularly in a fast-moving industry like fashion, where consumers are the main drivers and tastes are fickle.”

“The old world of personal touch is not necessarily going away, but it’s not the way you’re going to grow your brand even from a luxury standpoint,” argues Michele Goetz of Forrester. When fashion brands think about AI, she says, they need to consider the next generation of luxury customers, who were born into a world of social media and handed at birth the ability to buy anything they want, from anywhere in the world. “They don’t have the patience for a one-on-one relationship,” she says.

Indeed, the next generation of big spenders is already using AI: GPS navigation shapes their driving habits, while algorithm-driven personalised recommendations from Spotify and Netflix influence the songs and shows they consume. “If you don’t have it, you are not aligned with the experiences they’re used to,” warns Goetz.

Others are more cautious. “The top tier brands should resist the temptation to buy into the AI world right now,” says Cukier. “Their business is being good at fashion, not smart at technology… Right now, the most promising technologies are still in the lab or in field trials, like self-driving cars. Big, smart, non-technology companies can afford to wait.”

Others agree that, for the moment, partnering with third party AI specialists is the way forward. “The smartest thing a business can do, is partner with a fashion-focused tech company with AI at its core,” says Geoff Watts of Edited. “Building AI teams from scratch, or acquiring AI start-ups and retrofitting them to have a retail focus, requires a substantial investment of time and money.”

Kieran O’Neill of Thread adds that, rather than dive straight into AI investment, brands should build a strategy around AI and work out what the lowest-hanging fruit is for their business. Some of the brands using Thread — such as Burberry, Jigsaw and Topman — signed up to sell on the platform, not because they needed the sales, but “because they really want to be close to the AI stuff we’re doing,” he says.

“Every company in every industry should be paying very close attention to AI,” advises Martin Ford, author of Rise of the Robots. “There is no limit to how far it can go.”

http://www.businessoffashion.com/articles/fashion-tech/is-fashion-ready-for-the-ai-revolution