Archive for the month of May 2016

Intel’s new smartphone strategy is to quit

Atom chip cancellation puts Intel’s mobile processor plans on ice

Late on Friday night, Intel snuck out the news that it’s bailing on the smartphone market. Despite being the world’s best-known processor maker, Intel was only a bit player in the mobile space dominated by Qualcomm, Apple, and Samsung, and it finally chose to cut its losses and cancel its next planned chip, Broxton. This followed downbeat quarterly earnings, 12,000 job cuts, and a major restructuring at a company that’s had a very busy April. Intel is still one of the giants of the global tech industry, but it’s no longer as healthy and sprightly as it used to be.

The bane of Intel’s existence for the past decade or so has been the transition to mobile computing. It wasn’t supposed to be that way. Having secured a commanding lead as the premier provider of desktop PC processors, Intel had a clear-eyed strategy for extending its dominance into the mobile realm.

A series of ignominious failures has left Intel reeling

With the help of Microsoft in 2006, Intel inaugurated the category of Ultra-Mobile PCs (UMPCs), which were the stylus- and touch-friendly precursors to today’s ultra-versatile tablets. They combined low-voltage Celeron and Pentium M chips with Windows Vista, and like everything else touched by Vista, they flopped. Prohibitive pricing and inadequate battery life consigned the UMPC to the status of a historical footnote. The same fate befell Intel’s Mobile Internet Device (MID) initiative, which saw the chipmaker pushing and incentivizing its hardware partners to build mini internet tablets like the Nokia N810. Pervasive problems with affordability, battery life, clunky design, and ill-suited software prevented MIDs from ever becoming a mass-market success.

On the software front, Intel recognized the need for a tailored operating system to make the most of mobile PCs and sought to develop its own Linux variant titled Moblin. Moblin never convinced anyone outside of Intel, and was eventually merged with Nokia’s Maemo to produce MeeGo, which in turn merged with Samsung’s Bada and is now known as Tizen. Well, it’s only barely known, even by owners of its most successful product, the Gear S2 smartwatch. The series of post-Moblin software mergers has been merely the consolidation of repeated mobile failures.

Intel’s ventures into mobile hardware and software development show that even a great idea is only as good as its execution. The MIDs and UMPCs of yesteryear were aimed at the same usage scenarios as the phablets and pro tablets of today — but they were compromised and premature, and therefore rejected by the market.

This has cost Intel dearly, with the company lavishing billions on developing suitable processors and modems to put into its various mobile undertakings. The multibillion-dollar mobile costs have spiraled in recent years — a loss of $3.1 billion in 2013 was followed by a loss of $4.3 billion in 2014 — which eventually forced Intel to combine its mobile and PC earnings reports in order to disguise its unproductive spending.

The tragedy of Intel’s mobile failure is that the company foresaw all the threats to its business and acted to preempt them. It just didn’t do so very well. That being said, Intel is the victim of its bad decision making almost as much as of its poor execution.

Favoring WiMAX over LTE was a historically bad decision

One of the fateful choices that Intel made around 2009 was to commit itself to WiMAX as the 4G standard of the future. Qualcomm went the other way and prioritized LTE, and now Qualcomm has a significant lead in designing and integrating LTE modems while Intel is scrambling and struggling to catch up. The total defeat of WiMAX almost wiped out Sprint, its biggest US purveyor and advocate, and it put Intel on the back foot in adopting LTE, which turned out to be the true 4G standard of the future. At this point, even if Intel were to double its already vast spending, bridging a gap of years of research, development, and experience would be practically impossible.

In spite of its unhappy mobile history, Intel persisted in trying to compete because it knew how central mobile devices were becoming to our lives. Last year, its Atom processors even looked like they had a shot at denting Qualcomm’s market dominance, thanks in large part to the Qualcomm Snapdragon 810 chip’s power and heat issues. There was a small opening, but this year’s Snapdragon 820 is an absolute beast that conclusively shuts the door on any further Intel incursions.

Read more: Intel sees itself as a ‘communications and connectivity company’

The top three smartphone vendors — Apple, Samsung, and Huawei — each produce their own processors. At Mobile World Congress this year, Xiaomi, another large-scale smartphone maker, co-branded its launch event with Qualcomm. And global names like LG, HTC, and Sony basically only shop at the Snapdragon aisle for their flagship phones. Intel’s most loyal hardware partner is Asus, which makes a habit of announcing interesting new devices at Computex in June and not shipping them until the end of the year. The most feted Intel Atom-powered smartphone to date is probably the ZenFone 2, a distinction that speaks for itself.

Without any unique advantages to its Atom CPU line and no captive market like it has on the desktop, Intel is right to bow out of the smartphone processor race. It’s a merciless competition that has already ousted big names like Nvidia and Texas Instruments, and Intel will be better off figuring out different parts of the mobile computing world where it can participate profitably. CEO Brian Krzanich put “the cloud and data center” atop a list of Intel’s new priorities in a recent blog post, reiterating the idea that the company will transition to facilitating connectivity as its main area of competence. Discrete Intel LTE modems will still be around, and the company seems to think it can recapture its mobile competitiveness by being a leader in the adoption of the incoming 5G wireless standards. To that end, Intel doesn’t intend to kill off Atom entirely, and still plans to offer a chip for tablets later this year, codenamed Apollo Lake.

Even Moore’s Law is hitting a wall

To its credit, Intel has always operated under the assumption that mobile computing will eventually supplant the desktop and consign the old PC boxes to niche-use status. We email on our phones, ideate on our phablets, and write and create on our tablets — as Steven Sinofsky, former boss of Windows, recently articulated with respect to the iPad Pro. The primary form of personal computer is changing, which is why ultrabooks and hybrid laptops are so prominent in Intel’s marketing and development efforts. The low-power Core M, Intel told me last year, was the most important variant of the Skylake processor family, and the company’s ongoing mission is to move with its users to more portable form factors.

Intel remains a diverse and strongly profitable company. There will always be PC gamers and video producers looking for the latest and fastest CPU. But while the core business that’s kept Intel going for so many years isn’t disappearing, its importance and primacy are being steadily eroded by the insatiable growth of mobile computing. Even Moore’s Law, Intel co-founder Gordon Moore’s prediction that the number of transistors on a chip would keep doubling at a steady cadence, is hitting a wall now. Intel’s desktop CPUs are being pushed further back on the roadmap while some of its mobile ones are being deleted entirely.

It’s an uncertain future for what used to be one of the most assured companies in tech.

http://www.theverge.com/2016/5/3/11576216/intel-atom-smartphone-quit

Ford patent spoofs bigger engine sound for fuel savings

It appears there’s a new wrinkle in the downsizing and turbocharging trend that’s putting a big dent in real-world fuel savings: Drivers aren’t shifting early enough. That shouldn’t be too surprising if you’ve driven something like Ford’s 1-liter EcoBoost Fiesta. With an offbeat 3-cylinder growl and the surge of turbo boost, that engine begs to be revved. Ford’s solution, according to a recent patent, is to fool these drivers by piping in artificial engine noise to simulate an engine with more cylinders.

Ford found that many drivers “shift by ear” rather than watching the tachometer. And with these downsized, slower-revving engines that are becoming the industry norm, the auditory cues about shift points are lost on drivers. Ford claims that this cancels out the advantages of the downsized turbo engines because they’re not being driven within their envelope of greatest efficiency.

The solution is to train the driver to shift earlier by piping in a low-amplitude noise that occurs between cylinder firings, which increases the cylinder count to the driver’s ear. Ford’s patent allows the virtual cylinder count to be doubled or even tripled depending on how many artificial noises are superimposed between cylinder firings.
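The superposition idea the patent describes can be illustrated with a toy model. This sketch is purely hypothetical (the function name and parameters are invented for illustration, not taken from Ford’s patent): it takes the timestamps of real cylinder firings and inserts artificial pulses at evenly spaced points between them, which multiplies the apparent firing rate, and thus the perceived cylinder count, without changing engine speed.

```python
def augment_firings(firing_times, extra_per_gap=1):
    """Superimpose artificial pulses between real cylinder firings.

    With extra_per_gap=1 the apparent cylinder count doubles
    (a 3-cylinder sounds like a 6); with 2 it triples.
    """
    augmented = []
    for a, b in zip(firing_times, firing_times[1:]):
        augmented.append(a)
        step = (b - a) / (extra_per_gap + 1)
        # Evenly spaced synthetic pulses inside this firing gap.
        for k in range(1, extra_per_gap + 1):
            augmented.append(a + k * step)
    augmented.append(firing_times[-1])
    return augmented

# A 3-cylinder four-stroke at 3,000 rpm fires about 75 times per second.
real = [i * (1 / 75) for i in range(6)]
virtual = augment_firings(real, extra_per_gap=1)
```

In a real car the synthetic pulses would be low-amplitude audio played through the cabin speakers, timed against the crankshaft position rather than a precomputed list, but the spacing logic is the same.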

The company imagines this will be most beneficial on turbocharged 2- and 3-cylinder engines, and since manual transmissions aren’t terribly common in American mass-market cars, this seems aimed at two groups: Europeans (who still buy small cars with manual transmissions in large numbers) and sports car buyers. Whether you like Ford’s proposed system or not, at least it’s a less invasive fuel-saving solution than cutting power or artificially limiting RPM.


http://www.autoblog.com/2016/05/09/ford-patent-spoofs-bigger-engine-sound-for-fuel-savings/

Adam Cheyer, you just made Siri 10 times better – VIV Technologies

In an interview with Adam Cheyer from late 2013, TheIdea Innovation Agency asked him what was next; the answer: Viv, coming soon. https://dieidee.eu/2013/10/30/siri-and-google-now-what-would-have-happened-to-siri-if-steve-jobs-was-still-alive/

See for yourself how Viv is the future of chatbots and personal digital assistants:
at the TechCrunch Disrupt conference, Siri CEO Dag Kittlaus presented “Viv” Technologies.

How does it work?
Its patented technology is called “dynamic program generation”. The bot writes its program in real time, in the background, and it also integrates interfaces to other data sources and bots.
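The idea behind dynamic program generation can be sketched in a few lines. This is a hypothetical toy, not Viv’s actual (patented) system: all the capability names, stub data, and the single hard-coded intent are invented for illustration. The point is that the assistant composes a small program at runtime from a registry of capabilities instead of shipping with a hand-written program for every possible request.

```python
# Registry of small capabilities the assistant can chain together.
# The stub return values stand in for real geocoding/weather APIs.
CAPABILITIES = {
    "geocode": lambda city: {"lat": 48.1, "lon": 11.6},
    "weather": lambda loc: {"temp_c": 21},
    "say": lambda w: f"It is {w['temp_c']} °C there.",
}

def generate_program(intent):
    """Turn an intent into an ordered chain of capability names.

    A real planner would search over typed interfaces; this toy
    version knows exactly one intent.
    """
    if intent == "weather_in_city":
        return ["geocode", "weather", "say"]
    raise ValueError(f"no plan for intent {intent!r}")

def run(intent, argument):
    # Execute the generated program: feed each step's output
    # into the next capability in the chain.
    value = argument
    for step in generate_program(intent):
        value = CAPABILITIES[step](value)
    return value

print(run("weather_in_city", "Munich"))
```

Integrating a new data source or bot then means registering another capability, so the planner can weave it into future programs without anyone rewriting the assistant itself.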

The full video is here:

10 Principles of Change Management

Tools and techniques to help companies transform quickly.

Way back when (pick your date), senior executives in large companies had a simple goal for themselves and their organizations: stability. Shareholders wanted little more than predictable earnings growth. Because so many markets were either closed or undeveloped, leaders could deliver on those expectations through annual exercises that offered only modest modifications to the strategic plan. Prices stayed in check; people stayed in their jobs; life was good.

Market transparency, labor mobility, global capital flows, and instantaneous communications have blown that comfortable scenario to smithereens. In most industries — and in almost all companies, from giants on down — heightened global competition has concentrated management’s collective mind on something that, in the past, it happily avoided: change. Successful companies, as Harvard Business School professor Rosabeth Moss Kanter told s+b in 1999, develop “a culture that just keeps moving all the time.”

This presents most senior executives with an unfamiliar challenge. In major transformations of large enterprises, they and their advisors conventionally focus their attention on devising the best strategic and tactical plans. But to succeed, they also must have an intimate understanding of the human side of change management — the alignment of the company’s culture, values, people, and behaviors — to encourage the desired results. Plans themselves do not capture value; value is realized only through the sustained, collective actions of the thousands — perhaps the tens of thousands — of employees who are responsible for designing, executing, and living with the changed environment.

Long-term structural transformation has four characteristics: scale (the change affects all or most of the organization), magnitude (it involves significant alterations of the status quo), duration (it lasts for months, if not years), and strategic importance. Yet companies will reap the rewards only when change occurs at the level of the individual employee.

Many senior executives know this and worry about it. When asked what keeps them up at night, CEOs involved in transformation often say they are concerned about how the work force will react, how they can get their team to work together, and how they will be able to lead their people. They also worry about retaining their company’s unique values and sense of identity and about creating a culture of commitment and performance. Leadership teams that fail to plan for the human side of change often find themselves wondering why their best-laid plans have gone awry.

No single methodology fits every company, but there is a set of practices, tools, and techniques that can be adapted to a variety of situations. What follows is a “Top 10” list of guiding principles for change management. Using these as a systematic, comprehensive framework, executives can understand what to expect, how to manage their own personal change, and how to engage the entire organization in the process.

1. Address the “human side” systematically. Any significant transformation creates “people issues.” New leaders will be asked to step up, jobs will be changed, new skills and capabilities must be developed, and employees will be uncertain and resistant. Dealing with these issues on a reactive, case-by-case basis puts speed, morale, and results at risk. A formal approach for managing change — beginning with the leadership team and then engaging key stakeholders and leaders — should be developed early, and adapted often as change moves through the organization. This demands as much data collection and analysis, planning, and implementation discipline as does a redesign of strategy, systems, or processes. The change-management approach should be fully integrated into program design and decision making, both informing and enabling strategic direction. It should be based on a realistic assessment of the organization’s history, readiness, and capacity to change.

2. Start at the top. Because change is inherently unsettling for people at all levels of an organization, when it is on the horizon, all eyes will turn to the CEO and the leadership team for strength, support, and direction. The leaders themselves must embrace the new approaches first, both to challenge and to motivate the rest of the institution. They must speak with one voice and model the desired behaviors. The executive team also needs to understand that, although its public face may be one of unity, it, too, is composed of individuals who are going through stressful times and need to be supported.

Executive teams that work well together are best positioned for success. They are aligned and committed to the direction of change, understand the culture and behaviors the changes intend to introduce, and can model those changes themselves. At one large transportation company, the senior team rolled out an initiative to improve the efficiency and performance of its corporate and field staff before addressing change issues at the officer level. The initiative realized initial cost savings but stalled as employees began to question the leadership team’s vision and commitment. Only after the leadership team went through the process of aligning and committing to the change initiative was the work force able to deliver downstream results.

3. Involve every layer. As transformation programs progress from defining strategy and setting targets to design and implementation, they affect different levels of the organization. Change efforts must include plans for identifying leaders throughout the company and pushing responsibility for design and implementation down, so that change “cascades” through the organization. At each layer of the organization, the leaders who are identified and trained must be aligned to the company’s vision, equipped to execute their specific mission, and motivated to make change happen.

A major multiline insurer with consistently flat earnings decided to change performance and behavior in preparation for going public. The company followed this “cascading leadership” methodology, training and supporting teams at each stage. First, 10 officers set the strategy, vision, and targets. Next, more than 60 senior executives and managers designed the core of the change initiative. Then 500 leaders from the field drove implementation. The structure remained in place throughout the change program, which doubled the company’s earnings far ahead of schedule. This approach is also a superb way for a company to identify its next generation of leadership.

4. Make the formal case. Individuals are inherently rational and will question to what extent change is needed, whether the company is headed in the right direction, and whether they want to commit personally to making change happen. They will look to the leadership for answers. The articulation of a formal case for change and the creation of a written vision statement are invaluable opportunities to create or compel leadership-team alignment.

Three steps should be followed in developing the case: First, confront reality and articulate a convincing need for change. Second, demonstrate faith that the company has a viable future and the leadership to get there. Finally, provide a road map to guide behavior and decision making. Leaders must then customize this message for various internal audiences, describing the pending change in terms that matter to the individuals.

A consumer packaged-goods company experiencing years of steadily declining earnings determined that it needed to significantly restructure its operations — instituting, among other things, a 30 percent work force reduction — to remain competitive. In a series of offsite meetings, the executive team built a brutally honest business case that downsizing was the only way to keep the business viable, and drew on the company’s proud heritage to craft a compelling vision to lead the company forward. By confronting reality and helping employees understand the necessity for change, leaders were able to motivate the organization to follow the new direction in the midst of the largest downsizing in the company’s history. Instead of being shell-shocked and demoralized, those who stayed felt a renewed resolve to help the enterprise advance.

5. Create ownership. Leaders of large change programs must overperform during the transformation and be the zealots who create a critical mass among the work force in favor of change. This requires more than mere buy-in or passive agreement that the direction of change is acceptable. It demands ownership by leaders willing to accept responsibility for making change happen in all of the areas they influence or control. Ownership is often best created by involving people in identifying problems and crafting solutions. It is reinforced by incentives and rewards. These can be tangible (for example, financial compensation) or psychological (for example, camaraderie and a sense of shared destiny).

At a large health-care organization that was moving to a shared-services model for administrative support, the first department to create detailed designs for the new organization was human resources. Its personnel worked with advisors in cross-functional teams for more than six months. But as the designs were being finalized, top departmental executives began to resist the move to implementation. While agreeing that the work was top-notch, the executives realized they hadn’t invested enough individual time in the design process to feel the ownership required to begin implementation. On the basis of their feedback, the process was modified to include a “deep dive.” The departmental executives worked with the design teams to learn more, and get further exposure to changes that would occur. This was the turning point; the transition then happened quickly. It also created a forum for top executives to work as a team, creating a sense of alignment and unity that the group hadn’t felt before.

6. Communicate the message. Too often, change leaders make the mistake of believing that others understand the issues, feel the need to change, and see the new direction as clearly as they do. The best change programs reinforce core messages through regular, timely advice that is both inspirational and practicable. Communications flow in from the bottom and out from the top, and are targeted to provide employees the right information at the right time and to solicit their input and feedback. Often this will require overcommunication through multiple, redundant channels.

In the late 1990s, the commissioner of the Internal Revenue Service, Charles O. Rossotti, had a vision: The IRS could treat taxpayers as customers and turn a feared bureaucracy into a world-class service organization. Getting more than 100,000 employees to think and act differently required more than just systems redesign and process change. IRS leadership designed and executed an ambitious communications program including daily voice mails from the commissioner and his top staff, training sessions, videotapes, newsletters, and town hall meetings that continued through the transformation. Timely, constant, practical communication was at the heart of the program, which brought the IRS’s customer ratings from the lowest in various surveys to its current ranking above the likes of McDonald’s and most airlines.

7. Assess the cultural landscape. Successful change programs pick up speed and intensity as they cascade down, making it critically important that leaders understand and account for culture and behaviors at each level of the organization. Companies often make the mistake of assessing culture either too late or not at all. Thorough cultural diagnostics can assess organizational readiness to change, bring major problems to the surface, identify conflicts, and define factors that can recognize and influence sources of leadership and resistance. These diagnostics identify the core values, beliefs, behaviors, and perceptions that must be taken into account for successful change to occur. They serve as the common baseline for designing essential change elements, such as the new corporate vision, and building the infrastructure and programs needed to drive change.

8. Address culture explicitly. Once the culture is understood, it should be addressed as thoroughly as any other area in a change program. Leaders should be explicit about the culture and underlying behaviors that will best support the new way of doing business, and find opportunities to model and reward those behaviors. This requires developing a baseline, defining an explicit end-state or desired culture, and devising detailed plans to make the transition.

Company culture is an amalgam of shared history, explicit values and beliefs, and common attitudes and behaviors. Change programs can involve creating a culture (in new companies or those built through multiple acquisitions), combining cultures (in mergers or acquisitions of large companies), or reinforcing cultures (in, say, long-established consumer goods or manufacturing companies). Understanding that all companies have a cultural center — the locus of thought, activity, influence, or personal identification — is often an effective way to jump-start culture change.

A consumer goods company with a suite of premium brands determined that business realities demanded a greater focus on profitability and bottom-line accountability. In addition to redesigning metrics and incentives, it developed a plan to systematically change the company’s culture, beginning with marketing, the company’s historical center. It brought the marketing staff into the process early to create enthusiasts for the new philosophy who adapted marketing campaigns, spending plans, and incentive programs to be more accountable. Seeing these culture leaders grab onto the new program, the rest of the company quickly fell in line.

9. Prepare for the unexpected. No change program goes completely according to plan. People react in unexpected ways; areas of anticipated resistance fall away; and the external environment shifts. Effectively managing change requires continual reassessment of its impact and the organization’s willingness and ability to adopt the next wave of transformation. Fed by real data from the field and supported by information and solid decision-making processes, change leaders can then make the adjustments necessary to maintain momentum and drive results.

A leading U.S. health-care company was facing competitive and financial pressures from its inability to react to changes in the marketplace. A diagnosis revealed shortcomings in its organizational structure and governance, and the company decided to implement a new operating model. In the midst of detailed design, a new CEO and leadership team took over. The new team was initially skeptical, but was ultimately convinced that a solid case for change, grounded in facts and supported by the organization at large, existed. Some adjustments were made to the speed and sequence of implementation, but the fundamentals of the new operating model remained unchanged.

10. Speak to the individual. Change is both an institutional journey and a very personal one. People spend many hours each week at work; many think of their colleagues as a second family. Individuals (or teams of individuals) need to know how their work will change, what is expected of them during and after the change program, how they will be measured, and what success or failure will mean for them and those around them. Team leaders should be as honest and explicit as possible. People will react to what they see and hear around them, and need to be involved in the change process. Highly visible rewards, such as promotion, recognition, and bonuses, should be provided as dramatic reinforcement for embracing change. Sanction or removal of people standing in the way of change will reinforce the institution’s commitment.

Most leaders contemplating change know that people matter. It is all too tempting, however, to dwell on the plans and processes, which don’t talk back and don’t respond emotionally, rather than face up to the more difficult and more critical human issues. But mastering the “soft” side of change management needn’t be a mystery.

Author Profiles:

  • John Jones is a vice president with Booz Allen Hamilton in New York. Mr. Jones is a specialist in organization design, process reengineering, and change management.
  • DeAnne Aguirre (deanne.aguirre@strategyand.us.pwc.com) is an advisor to executives on organizational topics for Strategy&, PwC’s strategy consulting business, and a principal with PwC US. Based in San Francisco, she specializes in culture, leadership, talent effectiveness, and organizational change management.
  • Matthew Calderone is a senior associate with Booz Allen Hamilton in the New York Office. He specializes in organization transformation, people issues, and change management.

http://www.strategy-business.com/article/rr00006?gko=643d0

Definition of Change Management

A useful definition of change management that I use is:

‘the coordination of a structured period of transition from situation A to situation B in order to achieve lasting change within an organization’.
(BNET Business Dictionary)

To help you in your search for a definition of change management here are others I’ve found to be useful:

The systematic approach and application of knowledge, tools and resources to deal with change. Change management means defining and adopting corporate strategies, structures, procedures and technologies to deal with changes in external conditions and the business environment.
SHRM Glossary of Human Resources Terms, http://www.shrm.org.

Change management is the process, tools and techniques to manage the people-side of business change to achieve the required business outcome, and to realize that business change effectively within the social infrastructure of the workplace.
Change Management Learning Center

Change Management: activities involved in (1) defining and instilling new values, attitudes, norms, and behaviors within an organization that support new ways of doing work and overcome resistance to change; (2) building consensus among customers and stakeholders on specific changes designed to better meet their needs; and (3) planning, testing, and implementing all aspects of the transition from one organizational structure or business process to another.
http://www.gao.gov/special.pubs/bprag/bprgloss.htm

…a systematic approach to dealing with change, both from the perspective of an organization and on the individual level…proactively addressing adapting to change, controlling change, and effecting change.
Case Western Reserve University

Change management is a systematic approach to dealing with change, both from the perspective of an organization and on the individual level.
searchsmb.com

Change Management is an organized, systematic application of the knowledge, tools, and resources of change that provides organizations with a key process to achieve their business strategy.
Lamarsh

The systematic management of a new business model integration into an organization and the ability to adapt this change into the organization so that the transformation enhances the organizational relationships with all its constituents.
bitpipe.com

Change Management: the process, tools and techniques to manage the people-side of change processes, to achieve the required outcomes, and to realize the change effectively within individuals, teams, and the wider systems.

Change management is a structured approach to transitioning individuals, teams, and organizations from a current state to a desired future state. The current definition of Change Management includes both organizational change management processes and individual change management models, which together are used to manage the people side of change.
Wikipedia

Minimizing resistance to organizational change through involvement of key players and stakeholders.
BusinessDictionary.com

Change management is a style of management that aims to encourage organizations and individuals to deal effectively with the changes taking place in their work.
English Collins Dictionary

Source: http://www.change-management-coach.com/definition-of-change-management.html

Artificial intelligence assistants are taking over

It was a weeknight, after dinner, and the baby was in bed. My wife and I were alone—we thought—discussing the sorts of things you might discuss with your spouse and no one else. (Specifically, we were critiquing a friend’s taste in romantic partners.) I was midsentence when, without warning, another woman’s voice piped in from the next room. We froze.

“I HELD THE DOOR OPEN FOR A CLOWN THE OTHER DAY,” the woman said in a loud, slow monotone. It took us a moment to realize that her voice was emanating from the black speaker on the kitchen table. We stared slack-jawed as she—it—continued: “I THOUGHT IT WAS A NICE JESTER.”

“What. The hell. Was that,” I said after a moment of stunned silence. Alexa, the voice assistant whose digital spirit animates the Amazon Echo, did not reply. She—it—responds only when called by name. Or so we had believed.

We pieced together what must have transpired. Somehow, Alexa’s speech recognition software had mistakenly picked the word Alexa out of something we said, then chosen a phrase like “tell me a joke” as its best approximation of whatever words immediately followed. Through some confluence of human programming and algorithmic randomization, it chose a lame jester/gesture pun as its response.

In retrospect, the disruption was more humorous than sinister. But it was also a slightly unsettling reminder that Amazon’s hit device works by listening to everything you say, all the time. And that, for all Alexa’s human trappings—the name, the voice, the conversational interface—it’s no more sentient than any other app or website. It’s just code, built by some software engineers in Seattle with a cheesy sense of humor.

But the Echo’s inadvertent intrusion into an intimate conversation is also a harbinger of a more fundamental shift in the relationship between human and machine. Alexa—and Siri and Cortana and all of the other virtual assistants that now populate our computers, phones, and living rooms—are just beginning to insinuate themselves, sometimes stealthily, sometimes overtly, and sometimes a tad creepily, into the rhythms of our daily lives. As they grow smarter and more capable, they will routinely surprise us by making our lives easier, and we’ll steadily become more reliant on them.

Even as many of us continue to treat these bots as toys and novelties, they are on their way to becoming our primary gateways to all sorts of goods, services, and information, both public and personal. When that happens, the Echo won’t just be a cylinder in your kitchen that sometimes tells bad jokes. Alexa and virtual agents like it will be the prisms through which we interact with the online world.

It’s a job to which they will necessarily bring a set of biases and priorities, some subtler than others. Some of those biases and priorities will reflect our own. Others, almost certainly, will not. Those vested interests might help to explain why they seem so eager to become our friends.

* * *

[Photo: IBM (AP)]

In the beginning, computers spoke only computer language, and a human seeking to interact with one was compelled to do the same. First came punch cards, then typed commands such as run, print, and dir.

The 1980s brought the mouse click and the graphical user interface to the masses; the 2000s, touch screens; the 2010s, gesture control and voice. It has all been leading, gradually and imperceptibly, to a world in which we no longer have to speak computer language, because computers will speak human language—not perfectly, but well enough to get by.

We aren’t there yet. But we’re closer than most people realize. And the implications—many of them exciting, some of them ominous—will be tremendous.

Like card catalogs and AOL-style portals before it, Web search will begin to fade from prominence, and with it the dominance of browsers and search engines. Mobile apps as we know them—icons on a home screen that you tap to open—will start to do the same. In their place will rise an array of virtual assistants, bots, and software agents that act more and more like people: not only answering our queries, but acting as our proxies, accomplishing tasks for us, and asking questions of us in return.

This is already beginning to happen—and it isn’t just Siri or Alexa. As of April, all five of the world’s dominant technology companies are vying to be the Google of the conversation age. Whoever wins has a chance to get to know us more intimately than any company or machine has before—and to exert even more influence over our choices, purchases, and reading habits than they already do.

So say goodbye to Web browsers and mobile home screens as our default portals to the Internet. And say hello to the new wave of intelligent assistants, virtual agents, and software bots that are rising to take their place.

No, really, say “hello” to them. Apple’s Siri, Google’s mobile search app, Amazon’s Alexa, Microsoft’s Cortana, and Facebook’s M, to name just five of the most notable, are diverse in their approaches, capabilities, and underlying technologies. But, with one exception, they’ve all been programmed to respond to basic salutations in one way or another, and it’s a good way to start to get a sense of their respective mannerisms. You might even be tempted to say they have different personalities.

Siri’s response to “hello” varies, but it’s typically chatty and familiar:

[Screenshot: Siri (Slate)]

Alexa is all business:

[Screenshot: Alexa (Slate)]

Google is a bit of an idiot savant: It responds by pulling up a YouTube video of the song “Hello” by Adele, along with all the lyrics.

[Screenshot: Google (Slate)]

Cortana isn’t interested in saying anything until you’ve handed her the keys to your life:

[Screenshot: Cortana setup (Slate)]

Once those formalities are out of the way, she’s all solicitude:

[Screenshot: Cortana (Slate)]

Then there’s Facebook M, an experimental bot, available so far only to an exclusive group of Bay Area beta-testers, that lives inside Facebook Messenger and promises to answer almost any question and fulfill almost any (legal) request. If the casual, what’s-up-BFF tone of its text messages rings eerily human, that’s because it is: M is powered by an uncanny pairing of artificial intelligence and anonymous human agents.

[Screenshot: Facebook M (Slate)]

You might notice that most of these virtual assistants have female-sounding names and voices. Facebook M doesn’t have a voice—it’s text-only—but it was initially rumored to be called Moneypenny, a reference to a secretary from the James Bond franchise. And even Google’s voice is female by default. This is, to some extent, a reflection of societal sexism. But these bots’ apparent embrace of gender also highlights their aspiration to be anthropomorphized: They want—that is, the engineers that build them want—to interact with you like a person, not a machine. It seems to be working: Already people tend to refer to Siri, Alexa, and Cortana as “she,” not “it.”

That Silicon Valley’s largest tech companies have effectively humanized their software in this way, with little fanfare and scant resistance, represents a coup of sorts. Once we perceive a virtual assistant as human, or at least humanoid, it becomes an entity with which we can establish humanlike relations. We can like it, banter with it, even turn to it for companionship when we’re lonely. When it errs or betrays us, we can get angry with it and, ultimately, forgive it. What’s most important, from the perspective of the companies behind this technology, is that we trust it.

Should we?

* * *

Siri wasn’t the first digital voice assistant when Apple introduced it in 2011, and it may not have been the best. But it was the first to show us what might be possible: a computer that you talk to like a person, that talks back, and that attempts to do what you ask of it without requiring any further action on your part. Adam Cheyer, co-founder of the startup that built Siri and sold it to Apple in 2010, has said he initially conceived of it not as a search engine, but as a “do engine.”

If Siri gave us a glimpse of what is possible, it also inadvertently taught us about what wasn’t yet. At first, it often struggled to understand you, especially if you spoke into your iPhone with an accent, and it routinely blundered attempts to carry out your will. Its quick-witted rejoinders to select queries (“Siri, talk dirty to me”) raised expectations for its intelligence that were promptly dashed once you asked it something it hadn’t been hard-coded to answer. Its store of knowledge proved trivial compared with the vast information readily available via Google search. Siri was as much an inspiration as a disappointment.

Five years later, Siri has gotten smarter, if perhaps less so than one might have hoped. More importantly, the technology underlying it has drastically improved, fueled by a boom in the computer science subfield of machine learning. That has led to sharp improvements in speech recognition and natural language understanding, two separate but related technologies that are crucial to voice assistants.

[Photo: Luke Peters demonstrates Siri, an application which uses voice recognition and detection, on the iPhone 4S outside the Apple store in Covent Garden, London, Oct. 14, 2011 (Reuters/Suzanne Plunkett)]


If a revolution in technology has made intelligent virtual assistants possible, what has made them inevitable is a revolution in our relationship to technology. Computers began as tools of business and research, designed to automate tasks such as math and information retrieval. Today they’re tools of personal communication, connecting us not only to information but to one another. They’re also beginning to connect us to all the other technologies in our lives: Your smartphone can turn on your lights, start your car, activate your home security system, and withdraw money from your bank. As computers have grown deeply personal, our relationship with them has changed. And yet the way they interact with us hasn’t quite caught up.

“It’s always been sort of appalling to me that you now have a supercomputer in your pocket, yet you have to learn to use it,” says Alan Packer, head of language technology at Facebook. “It seems actually like a failure on the part of our industry that software is hard to use.”

Packer is one of the people trying to change that. As a software developer at Microsoft, he helped to build Cortana. After it launched, he found his skills in heavy demand, especially among the two tech giants that hadn’t yet developed voice assistants of their own. One Thursday morning in December 2014, Packer was on the verge of accepting a top job at Amazon—“You would not be surprised at which team I was about to join,” he says—when Facebook called and offered to fly him to Menlo Park, California, for an interview the next day. He had an inkling of what Amazon was working on, but he had no idea why Facebook might be interested in someone with his skill set.

As it turned out, Facebook wanted Packer for much the same purpose that Microsoft and Amazon did: to help it build software that could make sense of what its users were saying and generate intelligent responses. Facebook may not have a device like the Echo or an operating system like Windows, but its own platforms are full of billions of people communicating with one another every day. If Facebook can better understand what they’re saying, it can further hone its News Feed and advertising algorithms, among other applications. More creatively, Facebook has begun to use language understanding to build artificial intelligence into its Messenger app. Now, if you’re messaging with a friend and mention sharing an Uber, a software agent within Messenger can jump in and order it for you while you continue your conversation.

In short, Packer says, Facebook is working on language understanding because Facebook is a technology company—and that’s where technology is headed. As if to underscore that point, Packer’s former employer this year headlined its annual developer conference by announcing plans to turn Cortana into a portal for conversational bots and integrate it into Skype, Outlook, and other popular applications. Microsoft CEO Satya Nadella predicted that bots will be the Internet’s next major platform, overtaking mobile apps the same way they eclipsed desktop computing.

* * *

[Photo: Amazon Echo Dot (AP)]

Siri may not have been very practical, but people immediately grasped what it was. With Amazon’s Echo, the second major tech gadget to put a voice interface front and center, it was the other way around. The company surprised the industry and baffled the public when it released a device in November 2014 that looked and acted like a speaker—except that it didn’t connect to anything except a power outlet, and the only buttons were for power and mute. You control the Echo solely by voice, and if you ask it questions, it talks back. It was like Amazon had decided to put Siri in a black cylinder and sell it for $179. Except Alexa, the virtual intelligence software that powers the Echo, was far more limited than Siri in its capabilities. Who, reviewers wondered, would buy such a bizarre novelty gadget?

That question has faded as Amazon has gradually upgraded and refined the Alexa software, and the five-star Amazon reviews have since poured in. In the New York Times, Farhad Manjoo recently followed up his tepid initial review with an all-out rave: The Echo “brims with profound possibility,” he wrote. Amazon has not disclosed sales figures, but the Echo ranks as the third-best-selling gadget in its electronics section. Alexa may not be as versatile as Siri—yet—but it turned out to have a distinct advantage: a sense of purpose, and of its own limitations. Whereas Apple implicitly invites iPhone users to ask Siri anything, Amazon ships the Echo with a little cheat sheet of basic queries that it knows how to respond to: “Alexa, what’s the weather?” “Alexa, set a timer for 45 minutes.” “Alexa, what’s in the news?”

The cheat sheet’s effect is to lower expectations to a level that even a relatively simplistic artificial intelligence can plausibly meet on a regular basis. That’s by design, says Greg Hart, Amazon’s vice president in charge of Echo and Alexa. Building a voice assistant that can respond to every possible query is “a really hard problem,” he says. “People can get really turned off if they have an experience that’s subpar or frustrating.” So the company began by picking specific tasks that Alexa could handle with aplomb and communicating those clearly to customers.

At launch, the Echo had just 12 core capabilities. That list has grown steadily as the company has augmented Alexa’s intelligence and added integrations with new services, such as Google Calendar, Yelp reviews, Pandora streaming radio, and even Domino’s delivery. The Echo is also becoming a hub for connected home appliances: “ ‘Alexa, turn on the living room lights’ never fails to delight people,” Hart says.

When you ask Alexa a question it can’t answer or say something it can’t quite understand, it fesses up: “Sorry, I don’t know the answer to that question.” That makes it all the more charming when you test its knowledge or capabilities and it surprises you by replying confidently and correctly. “Alexa, what’s a kinkajou?” I asked on a whim one evening, glancing up from my laptop while reading a news story about an elderly Florida woman who woke up one day with a kinkajou on her chest. Alexa didn’t hesitate: “A kinkajou is a rainforest mammal of the family Procyonidae … ” Alexa then proceeded to list a number of other Procyonidae to which the kinkajou is closely related. “Alexa, that’s enough,” I said after a few moments, genuinely impressed. “Thank you,” I added.

“You’re welcome,” Alexa replied, and I thought for a moment that she—it—sounded pleased.

As delightful as it can seem, the Echo’s magic comes with some unusual downsides. In order to respond every time you say “Alexa,” it has to be listening for the word at all times. Amazon says it only stores the commands that you say after you’ve said the word Alexa and discards the rest. Even so, the enormous amount of processing required to listen for a wake word 24/7 is reflected in the Echo’s biggest limitation: It only works when it’s plugged into a power outlet. (Amazon’s newest smart speakers, the Echo Dot and the Tap, are more mobile, but one sacrifices the speaker and the other the ability to respond at any time.)
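
The wake-word gating Amazon describes can be sketched in a few lines. This is a hypothetical illustration, not Amazon's code: it stands in for the real audio pipeline with a stream of already-transcribed words, and the `<pause>` marker and function name are invented for the example.

```python
def extract_commands(tokens, wake_word="alexa"):
    """Simulate always-on wake-word gating: every word heard before the
    wake word is discarded; only the words that follow it (up to a pause)
    are buffered and treated as a command worth processing."""
    commands, capturing, current = [], False, []
    for tok in tokens:
        if tok == "<pause>":
            if capturing and current:
                commands.append(" ".join(current))
            capturing, current = False, []
        elif tok.lower() == wake_word:
            capturing, current = True, []  # start buffering after the wake word
        elif capturing:
            current.append(tok)
        # otherwise: heard, but immediately discarded
    if capturing and current:
        commands.append(" ".join(current))
    return commands
```

Feed it a private conversation that never contains the wake word and nothing is kept; let one stray "Alexa" slip in, as happened at our dinner table, and whatever follows becomes a command.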

Even if you trust Amazon to rigorously protect and delete all of your personal conversations from its servers—as it promises it will if you ask it to—Alexa’s anthropomorphic characteristics make it hard to shake the occasional sense that it’s eavesdropping on you, Big Brother–style. I was alone in my kitchen one day, unabashedly belting out the Fats Domino song “Blueberry Hill” as I did the dishes, when it struck me that I wasn’t alone after all. Alexa was listening—not judging, surely, but listening all the same. Sheepishly, I stopped singing.

* * *

[Image: HAL 9000 from 2001: A Space Odyssey (Google Images)]

The notion that the Echo is “creepy” or “spying on us” might be the most common criticism of the device so far. But there’s a more fundamental problem. It’s one that is likely to haunt voice assistants, and those who rely on them, as the technology evolves and bores its way more deeply into our lives.

The problem is that conversational interfaces don’t lend themselves to the sort of open flow of information we’ve become accustomed to in the Google era. By necessity they limit our choices—because their function is to make choices on our behalf.

For example, a search for “news” on the Web will turn up a diverse and virtually endless array of possible sources, from Fox News to Yahoo News to CNN to Google News, which is itself a compendium of stories from other outlets. But ask the Echo, “What’s in the news?” and by default it responds by serving up a clip of NPR News’s latest hourly update, which it pulls from the streaming radio service TuneIn. Which is great—unless you don’t happen to like NPR’s approach to the news, or you prefer a streaming radio service other than TuneIn. You can change those defaults somewhere in the bowels of the Alexa app, but Alexa never volunteers that information. Most people will never even know it’s an option. Amazon has made the choice for them.

And how does Amazon make that sort of choice? The Echo’s cheat sheet doesn’t tell you that, and the company couldn’t give me a clear answer.

Alexa does take care to mention before delivering the news that it’s pulling the briefing from NPR News and TuneIn. But that isn’t always the case with other sorts of queries.

Let’s go back to our friend the kinkajou. In my pre-Echo days, my curiosity about an exotic animal might have sent me to Google via my laptop or phone. Just as likely, I might have simply let the moment of curiosity pass and not bothered with a search. Looking something up on Google involves just enough steps to deter us from doing it in a surprising number of cases. One of the great virtues of voice technology is to lower that barrier to the point where it’s essentially no trouble at all. Having an Echo in the room when you’re struck by curiosity about kinkajous is like having a friend sitting next to you who happens to be a kinkajou expert. All you have to do is say your question out loud, and Alexa will supply the answer. You literally don’t have to lift a finger.

That is voice technology’s fundamental advantage over all the human-computer interfaces that have come before it: In many settings, including the home, the car, or on a wearable gadget, it’s much easier and more natural than clicking, typing, or tapping. In the logic of today’s consumer technology industry, that makes its ascendance in those realms all but inevitable.

But consider the difference between Googling something and asking a friendly voice assistant. When I Google “kinkajou,” I get a list of websites, ranked according to an algorithm that takes into account all sorts of factors that correlate with relevance and authority. I choose the information source I prefer, then visit its website directly—an experience that could help to further shade or inform my impression of its trustworthiness. Ultimately, the answer comes not from Google, per se, but directly from some third-party authority, whose credibility I can evaluate as I wish.

A voice-based interface is different. The response comes one word at a time, one sentence at a time, one idea at a time. That makes it very easy to follow, especially for humans who have spent their whole lives interacting with one another in just this way. But it makes it very cumbersome to present multiple options for how to answer a given query. Imagine for a moment what it would sound like to read a whole Google search results page aloud, and you’ll understand why no one builds a voice interface that way.

That’s why voice assistants tend to answer your question by drawing from a single source of their own choosing. Alexa’s confident response to my kinkajou question, I later discovered, came directly from Wikipedia, which Amazon has apparently chosen as the default source for Alexa’s answers to factual questions. The reasons seem fairly obvious: It’s the world’s most comprehensive encyclopedia, its information is free and public, and it’s already digitized. What it’s not, of course, is infallible. Yet Alexa’s response to my question didn’t begin with the words, “Well, according to Wikipedia … ” She—it—just launched into the answer, as if she (it) knew it off the top of her (its) head. If a human did that, we might call it plagiarism.

The sin here is not merely academic. By not consistently citing the sources of its answers, Alexa makes it difficult to evaluate their credibility. It also implicitly turns Alexa into an information source in its own right, rather than a guide to information sources, because the only entity in which we can place our trust or distrust is Alexa itself. That’s a problem if its information source turns out to be wrong.

The constraints on choice and transparency might not bother people when Alexa’s default source is Wikipedia, NPR, or TuneIn. It starts to get a little more irksome when you ask Alexa to play you music, one of the Echo’s core features. “Alexa, play me the Rolling Stones” will queue up a shuffle playlist of Rolling Stones songs available through Amazon’s own streaming music service, Amazon Prime Music—provided you’re paying the $99 a year required to be an Amazon Prime member. Otherwise, the most you’ll get out of the Echo are 20-second samples of songs available for purchase. Want to guess what one choice you’ll have as to which online retail giant to purchase those songs from?


Amazon’s response is that Alexa does give you options and cite its sources—in the Alexa app, which keeps a record of your queries and its responses. When the Echo tells you what a kinkajou is, you can open the app on your phone and see a link to the Wikipedia article, as well as an option to search Bing. Amazon adds that Alexa is meant to be an “open platform” that allows anyone to connect to it via an API. The company is also working with specific partners to integrate their services into Alexa’s repertoire. So, for instance, if you don’t want to be limited to playing songs from Amazon Prime Music, you can now take a series of steps to link the Echo to a different streaming music service, such as Spotify Premium. Amazon Prime Music will still be the default, though: You’ll only get Spotify if you specify “from Spotify” in your voice command.
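
That default-plus-override behavior amounts to a simple routing rule. A hypothetical sketch, with the service names taken from the article but the function and lookup table invented for illustration:

```python
DEFAULT_MUSIC_SERVICE = "Amazon Prime Music"   # the platform's own default
LINKED_SERVICES = {"spotify": "Spotify Premium"}  # services the user has linked

def route_music_request(utterance):
    """Pick a playback service: honor an explicit 'from <service>' suffix
    in the voice command, otherwise fall back to the platform default."""
    text = utterance.lower().strip()
    for keyword, service in LINKED_SERVICES.items():
        suffix = "from " + keyword
        if text.endswith(suffix):
            query = text[: -len(suffix)].strip()
            return service, query
    return DEFAULT_MUSIC_SERVICE, text
```

“Play the Rolling Stones” routes to the default; only the user who knows to append “from Spotify” ever leaves it. The asymmetry is the point: the default wins whenever the user doesn't actively opt out.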

What’s not always clear is how Amazon chooses its defaults and its partners and what motivations might underlie those choices. Ahead of the 2016 Super Bowl, Amazon announced that the Echo could now order you a pizza. But that pizza would come, at least for the time being, from just one pizza-maker: Domino’s. Want a pizza from Little Caesars instead? You’ll have to order it some other way.

To Amazon’s credit, its choice of pizza source is very transparent. To use the pizza feature, you have to utter the specific command, “Alexa, open Domino’s and place my Easy Order.” The clunkiness of that command is no accident. It’s Amazon’s way of making sure that you don’t order a pizza by accident and that you know where that pizza is coming from. But it’s unlikely Domino’s would have gone to the trouble of partnering with Amazon if it didn’t think it would result in at least some number of people ordering Domino’s for their Super Bowl parties rather than Little Caesars.

None of this is to say that Amazon and Domino’s are going to conspire to monopolize the pizza industry anytime soon. There are obviously plenty of ways to order a pizza besides doing it on an Echo. Ditto for listening to the news, the Rolling Stones, a book, or a podcast. But what about when only one company’s smart thermostat can be operated by Alexa? If you come to rely on Alexa to manage your Google Calendar, what happens when Amazon and Google have a falling out?

When you say “Hello” to Alexa, you’re signing up for her party. Nominally, everyone’s invited. But Amazon has the power to ensure that its friends and business associates are the first people you meet.

* * *

[Screenshot: Google Now “speak now” screen (Business Insider, William Wei)]

These concerns might sound rather distant—we’re just talking about niche speakers connected to niche thermostats, right? The coming sea change feels a lot closer once you think about the other companies competing to make digital assistants your main portal to everything you do on your computer, in your car, and on your phone. Companies like Google.

Google may be positioned best of all to capitalize on the rise of personal A.I. It also has the most to lose. From the start, the company has built its business around its search engine’s status as a portal to information and services. Google Now—which does things like proactively checking the traffic and alerting you when you need to leave for a flight, even when you didn’t ask it to—is a natural extension of the company’s strategy.

As early as 2009, Google began to work on voice search and what it calls “conversational search,” using speech recognition and natural language understanding to respond to questions phrased in plain language. More recently, it has begun to combine that with “contextual search.” For instance, as Google demonstrated at its 2015 developer conference, if you’re listening to Skrillex on your Android phone, you can now simply ask, “What’s his real name?” and Google will intuit that you’re asking about the artist. “Sonny John Moore,” it will tell you, without ever leaving the Spotify app.

It’s no surprise, then, that Google is rumored to be working on two major new products—an A.I.-powered messaging app or agent and a voice-powered household gadget—that sound a lot like Facebook M and the Amazon Echo, respectively. If something is going to replace Google’s on-screen services, Google wants to be the one that does it.

So far, Google has made what seems to be a sincere effort to win the A.I. assistant race without sacrificing the virtues—credibility, transparency, objectivity—that made its search page such a dominant force on the Web. (It’s worth recalling: A big reason Google vanquished AltaVista was that it didn’t bend its search results to its own vested interests.) Google’s voice search does generally cite its sources. And it remains primarily a portal to other sources of information, rather than a platform that pulls in content from elsewhere. The downside to that relatively open approach is that when you say “hello” to Google voice search, it doesn’t say hello back. It gives you a link to the Adele song “Hello.” Even then, Google isn’t above playing favorites with the sources of information it surfaces first: That link goes not to Spotify, Apple Music, or Amazon Prime Music, but to YouTube, which Google owns. The company has weathered antitrust scrutiny over allegations that this amounted to preferential treatment. Google’s defense was that it puts its own services and information sources first because its users prefer them.

* * *

[Image: still from Her (YouTube)]

If there’s a consolation for those concerned that intelligent assistants are going to take over the world, it’s this: They really aren’t all that intelligent. Not yet, anyway.

The 2013 movie Her, in which a mobile operating system gets to know its user so well that they become romantically involved, paints a vivid picture of what the world might look like if we had the technology to carry Siri, Alexa, and the like to their logical conclusion. The experts I talked to, who are building that technology today, almost all cited Her as a reference point—while pointing out that we’re not going to get there anytime soon.

Google recently rekindled hopes—and fears—of super-intelligent A.I. when its AlphaGo software defeated the world champion in a historic Go match. As momentous as the achievement was, designing an algorithm to win even the most complex board game is trivial compared with designing one that can understand and respond appropriately to anything a person might say. That’s why, even as artificial intelligence is learning to recommend songs that sound like they were hand-picked by your best friend or navigate city streets more safely than any human driver, A.I. still has to resort to parlor tricks—like posing as a 13-year-old struggling with a foreign language—to pass as human in an extended conversation. The world is simply too vast, language too ambiguous, the human brain too complex for any machine to model it, at least for the foreseeable future.

But if we won’t see a true full-service A.I. in our lifetime, we might yet witness the rise of a system that can approximate some of its capabilities—comprising not a single, humanlike Her, but a million tiny hims carrying out small, discrete tasks handily. In January, the Verge’s Casey Newton made a compelling argument that our technological future will be filled not with websites, apps, or even voice assistants, but with conversational messaging bots. Like voice assistants, these bots rely on natural language understanding to carry on conversations with us. But they will do so via the medium that has come to dominate online interpersonal interaction, especially among the young people who are the heaviest users of mobile devices: text messaging. For example, Newton points to “Lunch Bot,” a relatively simple agent that lived in the wildly popular workplace chat program Slack and existed for a single, highly specialized purpose: to recommend the best place for employees to order their lunch from on a given day. It soon grew into a venture-backed company called Howdy.


I have a bot in my own life that serves a similarly specialized yet important role. While researching this story, I ran across a company called X.ai whose mission is to build the ultimate virtual scheduling assistant. It’s called Amy Ingram, and if its initials don’t tip you off, you might interact with it several times before realizing it’s not a person. (Unlike some other intelligent assistant companies, X.ai gives you the option to choose a male name for your assistant instead: Mine is Andrew Ingram.) Though it’s backed by some impressive natural language tech, X.ai’s bot does not attempt to be a know-it-all or do-it-all; it doesn’t tell jokes, and you wouldn’t want to date it. It asks for access to just one thing—your calendar. And it communicates solely by email. Just cc it on any thread in which you’re trying to schedule a meeting or appointment, and it will automatically step in and take over the back-and-forth involved in nailing down a time and place. Once it has agreed on a time with whomever you’re meeting—or, perhaps, with his or her own assistant, whether human or virtual—it will put all the relevant details on your calendar. Have your A.I. cc my A.I.

For these bots, the key to success is not growing so intelligent that they can do everything. It’s staying specialized enough that they don’t have to.

“We’ve had this A.I. fantasy for almost 60 years now,” says Dennis Mortensen, X.ai’s founder and CEO. “At every turn we thought the only outcome would be some human-level entity where we could converse with it like you and I are [conversing] right now. That’s going to continue to be a fantasy. I can’t see it in my lifetime or even my kids’ lifetime.” What is possible, Mortensen says, is “extremely specialized, verticalized A.I.s that understand perhaps only one job, but do that job very well.”

Yet those simple bots, Mortensen believes, could one day add up to something more. “You get enough of these agents, and maybe one morning in 2045 you look around and that plethora—tens of thousands of little agents—once they start to talk to each other, it might not look so different from that A.I. fantasy we’ve had.”

That might feel a little less scary. But it still leaves problems of transparency, privacy, objectivity, and trust—questions that are not new to the world of personal technology and the Internet but are resurfacing in fresh and urgent forms. A world of conversational machines is one in which we treat software like humans, letting them deeper into our lives and confiding in them more than ever. It’s one in which the world’s largest corporations know more about us, hold greater influence over our choices, and make more decisions for us than ever before. And it all starts with a friendly “Hello.”


www.businessinsider.com/ai-assistants-are-taking-over-2016-4