Archiv der Kategorie: Gadgets

Removing the headphone jack from the iPhone 7 is user-hostile and stupid

Another day, another rumor that Apple is going to ditch the headphone jack on the next iPhone in favor of sending out audio over Lightning. Or another phone beats Apple to the punch by ditching the headphone jack in favor of pushing audio out over USB-C. What exciting times for phones! We’re so out of ideas that actively making them shittier and more user-hostile is the only innovation left.

Look, I know you’re going to tell me that the traditional TRS headphone jack is a billion years old and prone to failure and that life is about progress and whatever else you need to repeat deliriously into your bed of old HTC extUSB dongles and insane magnetic Palm adapters to sleep at night. But just face facts: ditching the headphone jack on phones makes them worse, in extremely obvious ways. Let’s count them!

(Also, here is a list of reasons you might actually prefer Lightning headphones, by my friend Vlad Savov, but let’s be clear that my list is the superior one.)

1. Digital audio means DRM audio

Oh look, I won this argument in one shot. For years the entertainment industry has decried what they call the “analog loophole” of headphone jacks, and now we’re making their dreams come true by closing it.

Restricting audio output to a purely digital connection means that music publishers and streaming companies can start to insist on digital copyright enforcement mechanisms. We moved our video systems to HDMI and got HDCP, remember? Copyright enforcement technology never stops piracy and always hurts the people who most rely on legal fair use, but you can bet the music industry is going to start cracking down on “unauthorized” playback and recording devices anyway. We deal with DRM when it comes to video because we generally don’t rewatch and take TV shows and movies with us, but you will rue the day Apple decided to make the iPhone another 1mm thinner the instant you get a “playback device not supported” message. Winter is coming.

2. Wireless headphones and speakers are fine, not great

I am surrounded by wireless speaker systems. (I work at The Verge, after all.) And while they mostly work fine, sometimes they crackle out and fail. It sucks to share a wireless speaker among multiple devices. Bluetooth headphones require me to charge yet another battery. You haven’t known pain until you’ve chosen to use Bluetooth audio in a car instead of an aux jack.

3. Dongles are stupid, especially when they require other dongles

Shut up, you say. All of your complaints will be handled by this charming $29 dongle that converts digital audio to a standard headphone jack!

To which I will respond: here is a photo of Dieter Bohn and his beloved single-port MacBook, living his fullest #donglelife during our WWDC liveblog:

(Photo: a MacBook with a bunch of dongles.)

Everything is going to be great when you want to use your expensive headphones and charge your phone at the same time. You are going to love everything about that situation. You are going to hold your 1mm thinner phone and sincerely believe that the small reduction in thickness is definitely worth carrying multiple additional dongles.

Also, they’re called fucking dongles. Let’s not do this to ourselves. Have some dignity.

4. Ditching a deeply established standard will disproportionately impact accessibility

The traditional headphone jack is a standard for a reason — it works. It works so well that an entire ecosystem of other kinds of devices has built up around it, and millions of people have access to compatible devices at every conceivable price point. The headphone jack might be less good on some metrics than Lightning or USB-C audio, but it is spectacularly better than anything else in the world at being accessible, enabling, open, and democratizing. A change that will cost every iPhone user at least $29 extra for a dongle (or more for new headphones) is not a change designed to benefit everyone. And you don’t need to get rid of the headphone jack to make a phone waterproof; plenty of waterproof phones have shipped with headphone jacks already.

5. Making Android and iPhone headphones incompatible is so incredibly arrogant and stupid there’s not even explanatory text under this one

6. No one is asking for this

Raise your hand if the thing you wanted most from your next phone was either fewer ports or more dongles.

I didn’t think so. You wanted better battery life, didn’t you? Everyone just wants better battery life.

Vote with your dollars.


Lower-cost gadgetry that lasts a lot longer could be a dire omen for high-margin hardware companies like Apple

This week, Intel CEO Brian Krzanich announced that people are keeping their PCs a lot longer before upgrading: The average has increased from four years to as many as six.

The tablet-refresh cycle isn’t much shorter than that, to Apple’s eternal chagrin. Even iPhone sales have started to taper off, partly because people are keeping their phones longer or choosing cheaper Android phones.

What’s happening is pretty simple. The hardware and software on the device itself have become far less interesting than the web apps and services it connects to, like the ones that Google and Amazon have made the core of their business.

Why buy a $700 iPhone when a $200 Android phone can access the same YouTube or Amazon Music as everyone else? All you need to do to get new Facebook features is refresh your browser or update your app. You don’t need a high-performance device to participate in the 21st century.

It’s a stark contrast with the traditional model for consumer electronics, where you’re expected to upgrade the hardware to keep pace with the new features manufacturers release.

And it could be a dire omen for high-margin hardware companies like Apple.

Meanwhile, web-first companies like Amazon and Google are more than happy to exploit this, even as our notions of what a computer actually is continue to shift. Just look at devices like Google Chromecast and the Amazon Echo.

Chromecast and Echo: case in point

Since 2013, Google has sold 25 million Chromecast devices — the completely amazing $35 dongles that turn any TV into a smart TV. That’s right, $35.

The real brilliance of the Chromecast lies in what it isn’t, rather than what it is. It doesn’t have an interface of its own. You just push a button on your phone and have whatever YouTube video you’re watching or Spotify album you’re listening to appear on your TV screen.

A nice side effect: It’s relatively simple to take an existing smartphone app and add Chromecast streaming capabilities, and literally tens of thousands of apps have done that integration.
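That sender/receiver split is what makes the integration so light. Here is a minimal toy sketch of the cast model in Python — the class and method names are invented for illustration, not the real Google Cast SDK — showing that the phone only ever sends a small control message (a content URL), while the dongle fetches the stream itself:

```python
# Toy sketch of the "cast" model: the sender (phone app) never streams
# media itself -- it just hands the receiver a content URL and commands.
# All names here are illustrative, not the real Google Cast SDK.

class CastReceiver:
    """Stands in for the dongle plugged into the TV."""
    def __init__(self):
        self.now_playing = None

    def load(self, media_url):
        # The receiver fetches the stream directly from the cloud;
        # the phone is free to do other things afterward.
        self.now_playing = media_url
        return f"Fetching and playing {media_url}"

class SenderApp:
    """Stands in for the smartphone app with a cast button."""
    def __init__(self, receiver):
        self.receiver = receiver

    def tap_cast_button(self, media_url):
        # Only a tiny control message crosses from phone to dongle.
        return self.receiver.load(media_url)

tv = CastReceiver()
phone = SenderApp(tv)
status = phone.tap_cast_button("https://example.com/video/123")
print(status)          # Fetching and playing https://example.com/video/123
print(tv.now_playing)  # https://example.com/video/123
```

Because the app's only new job is "hand the receiver a URL and some playback commands," bolting cast support onto an existing smartphone app is a comparatively small change.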

You don’t have to think about it or learn a new interface; you just click and go.

(Photo: Amazon VP of Echo Mike George. Getty)

It means that every single day, I get more return on the initial $35 investment in the Chromecast I bought in 2014. But since all of the good stuff is happening in the apps, not the Chromecast itself, it’s extremely unlikely that I will ever have to replace this Chromecast, barring a hardware malfunction.

You could probably say the same thing about the Amazon Echo home voice assistant. Developers have released almost 1,000 “skills” for the Amazon Echo’s Alexa platform, including the ability to call an Uber, play Spotify music, or order a Domino’s pizza.

These gadgets are getting better, not worse, the longer they stay on shelves. And while there may be periodic minor hardware improvements, they’re way more minor than the gap between an iPhone 5 and an iPhone 6, and far less necessary to keep getting maximum value from the device.

The pressure is on

This shift is going to keep putting pressure on hardware-first manufacturers — especially those who rely on high margins, like Apple.

The Chromecast and the Echo are relatively cheap gadgets — because all the important, useful stuff about them lives in the cloud, they’re optimized to be small, efficient, and unobtrusive.

(Photo: Tesla’s Autopilot mode scanning the road. Tesla)

Amazon doesn’t need to make money on the Echo itself, as long as it drives more commerce to its retail business. Same with Google: as long as the Chromecast gets more people to watch YouTube videos and download more stuff from Google Play, they don’t have to make money from the gadget itself.

And you’re seeing more of this all over, like when Tesla made thousands of its electric cars partially self-driving with an overnight software update. The gadget Tesla drivers already owned — in this case a car — suddenly got way more useful.

This trend isn’t going to kill off the smartphone, or the PC, or the tablet. But it means lower-cost gadgetry that lasts a lot longer. We’re only seeing the early stages of this shift now, but it has a lot of potential to shake up how we think about and how we buy our devices.

Apple will open Siri to developers

Apple has big plans for Siri that will make the company’s famous assistant a lot more useful, according to a new report.

The company will soon open up Siri to developers with new software tools that will allow Siri to tap into more third-party services, according to a new report in The Information. Apple is also working on a new piece of hardware, an Amazon Echo rival that will work with Apple’s smart home platform.

Apple plans to put Siri in the hands of developers with a new software development kit (SDK) that will reportedly be called the Siri SDK. It could launch at next month’s Worldwide Developers Conference, where Apple typically previews the newest version of iOS and its latest developer tools.

The SDK will require “some work” by developers to make their apps accessible to Siri, the report says.

Siri already works with a few third-party services, like Yelp and Bing, but it hasn’t been widely available to developers since Apple acquired it in 2010. Before the acquisition, Siri worked with many third-party services. (Some of the original Siri team is now working on a new AI assistant called Viv, which will also work with third-party services like Uber.)

Apple is also reportedly working on a new speaker that allows people to use voice commands to play music and control HomeKit-enabled smart home devices, like lights, locks and thermostats. It’s unclear if the speaker will also be unveiled at WWDC in June, as Apple typically reserves new hardware for other events.

Though the new smart speaker sounds a lot like Amazon’s Echo and Google’s recently unveiled Google Assistant, Apple’s device predates both, according to The Information’s sources.

The report is just the latest sign that Apple has big plans for Siri next month. Earlier reports have suggested Apple will bring Siri to the Mac and — in what could very well be a hint of a Siri-themed WWDC — the company used Siri to reveal the dates of this year’s developer conference.

Artificial intelligence assistants are taking over

It was a weeknight, after dinner, and the baby was in bed. My wife and I were alone—we thought—discussing the sorts of things you might discuss with your spouse and no one else. (Specifically, we were critiquing a friend’s taste in romantic partners.) I was midsentence when, without warning, another woman’s voice piped in from the next room. We froze.

“I HELD THE DOOR OPEN FOR A CLOWN THE OTHER DAY,” the woman said in a loud, slow monotone. It took us a moment to realize that her voice was emanating from the black speaker on the kitchen table. We stared slack-jawed as she—it—continued: “I THOUGHT IT WAS A NICE JESTER.”

“What. The hell. Was that,” I said after a moment of stunned silence. Alexa, the voice assistant whose digital spirit animates the Amazon Echo, did not reply. She—it—responds only when called by name. Or so we had believed.

We pieced together what must have transpired. Somehow, Alexa’s speech recognition software had mistakenly picked the word Alexa out of something we said, then chosen a phrase like “tell me a joke” as its best approximation of whatever words immediately followed. Through some confluence of human programming and algorithmic randomization, it chose a lame jester/gesture pun as its response.
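One simplified way to picture that “best approximation” step is fuzzy matching against a fixed list of known commands. The sketch below uses `difflib` from the Python standard library as a stand-in for the far more sophisticated models a real assistant uses, and the command list is invented for illustration:

```python
# Toy sketch: after the wake word fires, the assistant has to map whatever
# it heard onto its closest known command. difflib's similarity matching
# stands in for the real (far more sophisticated) language models.
import difflib

KNOWN_COMMANDS = [
    "tell me a joke",
    "what's the weather",
    "set a timer",
    "play some music",
]

def best_guess(heard_phrase, cutoff=0.3):
    """Return the known command closest to the (possibly misheard) phrase,
    or None if nothing clears the similarity cutoff."""
    matches = difflib.get_close_matches(
        heard_phrase.lower(), KNOWN_COMMANDS, n=1, cutoff=cutoff
    )
    return matches[0] if matches else None

# A garbled fragment of dinner-table conversation lands nearest to a command:
print(best_guess("tell me about joe"))  # tell me a joke
```

A stray scrap of conversation that happens to sound close enough to a command gets snapped to it, and the assistant responds — which is roughly what produced the unsolicited clown joke.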

In retrospect, the disruption was more humorous than sinister. But it was also a slightly unsettling reminder that Amazon’s hit device works by listening to everything you say, all the time. And that, for all Alexa’s human trappings—the name, the voice, the conversational interface—it’s no more sentient than any other app or website. It’s just code, built by some software engineers in Seattle with a cheesy sense of humor.

But the Echo’s inadvertent intrusion into an intimate conversation is also a harbinger of a more fundamental shift in the relationship between human and machine. Alexa—and Siri and Cortana and all of the other virtual assistants that now populate our computers, phones, and living rooms—are just beginning to insinuate themselves, sometimes stealthily, sometimes overtly, and sometimes a tad creepily, into the rhythms of our daily lives. As they grow smarter and more capable, they will routinely surprise us by making our lives easier, and we’ll steadily become more reliant on them.

Even as many of us continue to treat these bots as toys and novelties, they are on their way to becoming our primary gateways to all sorts of goods, services, and information, both public and personal. When that happens, the Echo won’t just be a cylinder in your kitchen that sometimes tells bad jokes. Alexa and virtual agents like it will be the prisms through which we interact with the online world.

It’s a job to which they will necessarily bring a set of biases and priorities, some subtler than others. Some of those biases and priorities will reflect our own. Others, almost certainly, will not. Those vested interests might help to explain why they seem so eager to become our friends.

* * *


In the beginning, computers spoke only computer language, and a human seeking to interact with one was compelled to do the same. First came punch cards, then typed commands such as run, print, and dir.

The 1980s brought the mouse click and the graphical user interface to the masses; the 2000s, touch screens; the 2010s, gesture control and voice. It has all been leading, gradually and imperceptibly, to a world in which we no longer have to speak computer language, because computers will speak human language—not perfectly, but well enough to get by.

We aren’t there yet. But we’re closer than most people realize. And the implications—many of them exciting, some of them ominous—will be tremendous.

Like card catalogs and AOL-style portals before it, Web search will begin to fade from prominence, and with it the dominance of browsers and search engines. Mobile apps as we know them—icons on a home screen that you tap to open—will start to do the same. In their place will rise an array of virtual assistants, bots, and software agents that act more and more like people: not only answering our queries, but acting as our proxies, accomplishing tasks for us, and asking questions of us in return.

This is already beginning to happen—and it isn’t just Siri or Alexa. As of April, all five of the world’s dominant technology companies are vying to be the Google of the conversation age. Whoever wins has a chance to get to know us more intimately than any company or machine has before—and to exert even more influence over our choices, purchases, and reading habits than they already do.

So say goodbye to Web browsers and mobile home screens as our default portals to the Internet. And say hello to the new wave of intelligent assistants, virtual agents, and software bots that are rising to take their place.

No, really, say “hello” to them. Apple’s Siri, Google’s mobile search app, Amazon’s Alexa, Microsoft’s Cortana, and Facebook’s M, to name just five of the most notable, are diverse in their approaches, capabilities, and underlying technologies. But, with one exception, they’ve all been programmed to respond to basic salutations in one way or another, and it’s a good way to start to get a sense of their respective mannerisms. You might even be tempted to say they have different personalities.

Siri’s response to “hello” varies, but it’s typically chatty and familiar.

Alexa is all business.

Google is a bit of an idiot savant: It responds by pulling up a YouTube video of the song “Hello” by Adele, along with all the lyrics.

Cortana isn’t interested in saying anything until you’ve handed her the keys to your life.

Once those formalities are out of the way, she’s all solicitude.

Then there’s Facebook M, an experimental bot, available so far only to an exclusive group of Bay Area beta-testers, that lives inside Facebook Messenger and promises to answer almost any question and fulfill almost any (legal) request. If the casual, what’s-up-BFF tone of its text messages rings eerily human, that’s because it is: M is powered by an uncanny pairing of artificial intelligence and anonymous human agents.

You might notice that most of these virtual assistants have female-sounding names and voices. Facebook M doesn’t have a voice—it’s text-only—but it was initially rumored to be called Moneypenny, a reference to a secretary from the James Bond franchise. And even Google’s voice is female by default. This is, to some extent, a reflection of societal sexism. But these bots’ apparent embrace of gender also highlights their aspiration to be anthropomorphized: They want—that is, the engineers who build them want—to interact with you like a person, not a machine. It seems to be working: Already people tend to refer to Siri, Alexa, and Cortana as “she,” not “it.”

That Silicon Valley’s largest tech companies have effectively humanized their software in this way, with little fanfare and scant resistance, represents a coup of sorts. Once we perceive a virtual assistant as human, or at least humanoid, it becomes an entity with which we can establish humanlike relations. We can like it, banter with it, even turn to it for companionship when we’re lonely. When it errs or betrays us, we can get angry with it and, ultimately, forgive it. What’s most important, from the perspective of the companies behind this technology, is that we trust it.

Should we?

* * *

Siri wasn’t the first digital voice assistant when Apple introduced it in 2011, and it may not have been the best. But it was the first to show us what might be possible: a computer that you talk to like a person, that talks back, and that attempts to do what you ask of it without requiring any further action on your part. Adam Cheyer, co-founder of the startup that built Siri and sold it to Apple in 2010, has said he initially conceived of it not as a search engine, but as a “do engine.”

If Siri gave us a glimpse of what is possible, it also inadvertently taught us about what wasn’t yet. At first, it often struggled to understand you, especially if you spoke into your iPhone with an accent, and it routinely blundered attempts to carry out your will. Its quick-witted rejoinders to select queries (“Siri, talk dirty to me”) raised expectations for its intelligence that were promptly dashed once you asked it something it hadn’t been hard-coded to answer. Its store of knowledge proved trivial compared with the vast information readily available via Google search. Siri was as much an inspiration as a disappointment.

Five years later, Siri has gotten smarter, if perhaps less so than one might have hoped. More importantly, the technology underlying it has drastically improved, fueled by a boom in the computer science subfield of machine learning. That has led to sharp improvements in speech recognition and natural language understanding, two separate but related technologies that are crucial to voice assistants.

(Photo: Luke Peters demonstrates Siri, an application that uses voice recognition and detection, on the iPhone 4S outside the Apple Store in Covent Garden, London, Oct. 14, 2011. Reuters/Suzanne Plunkett)


If a revolution in technology has made intelligent virtual assistants possible, what has made them inevitable is a revolution in our relationship to technology. Computers began as tools of business and research, designed to automate tasks such as math and information retrieval. Today they’re tools of personal communication, connecting us not only to information but to one another. They’re also beginning to connect us to all the other technologies in our lives: Your smartphone can turn on your lights, start your car, activate your home security system, and withdraw money from your bank. As computers have grown deeply personal, our relationship with them has changed. And yet the way they interact with us hasn’t quite caught up.

“It’s always been sort of appalling to me that you now have a supercomputer in your pocket, yet you have to learn to use it,” says Alan Packer, head of language technology at Facebook. “It seems actually like a failure on the part of our industry that software is hard to use.”

Packer is one of the people trying to change that. As a software developer at Microsoft, he helped to build Cortana. After it launched, he found his skills in heavy demand, especially among the two tech giants that hadn’t yet developed voice assistants of their own. One Thursday morning in December 2014, Packer was on the verge of accepting a top job at Amazon—“You would not be surprised at which team I was about to join,” he says—when Facebook called and offered to fly him to Menlo Park, California, for an interview the next day. He had an inkling of what Amazon was working on, but he had no idea why Facebook might be interested in someone with his skill set.

As it turned out, Facebook wanted Packer for much the same purpose that Microsoft and Amazon did: to help it build software that could make sense of what its users were saying and generate intelligent responses. Facebook may not have a device like the Echo or an operating system like Windows, but its own platforms are full of billions of people communicating with one another every day. If Facebook can better understand what they’re saying, it can further hone its News Feed and advertising algorithms, among other applications. More creatively, Facebook has begun to use language understanding to build artificial intelligence into its Messenger app. Now, if you’re messaging with a friend and mention sharing an Uber, a software agent within Messenger can jump in and order it for you while you continue your conversation.

In short, Packer says, Facebook is working on language understanding because Facebook is a technology company—and that’s where technology is headed. As if to underscore that point, Packer’s former employer this year headlined its annual developer conference by announcing plans to turn Cortana into a portal for conversational bots and integrate it into Skype, Outlook, and other popular applications. Microsoft CEO Satya Nadella predicted that bots will be the Internet’s next major platform, overtaking mobile apps the same way they eclipsed desktop computing.

* * *

(Photo: Amazon Echo Dot. AP)

Siri may not have been very practical, but people immediately grasped what it was. With Amazon’s Echo, the second major tech gadget to put a voice interface front and center, it was the other way around. The company surprised the industry and baffled the public when it released a device in November 2014 that looked and acted like a speaker—except that it didn’t connect to anything except a power outlet, and the only buttons were for power and mute. You control the Echo solely by voice, and if you ask it questions, it talks back. It was like Amazon had decided to put Siri in a black cylinder and sell it for $179. Except Alexa, the virtual intelligence software that powers the Echo, was far more limited than Siri in its capabilities. Who, reviewers wondered, would buy such a bizarre novelty gadget?

That question has faded as Amazon has gradually upgraded and refined the Alexa software, and the five-star Amazon reviews have since poured in. In the New York Times, Farhad Manjoo recently followed up his tepid initial review with an all-out rave: The Echo “brims with profound possibility,” he wrote. Amazon has not disclosed sales figures, but the Echo ranks as the third-best-selling gadget in its electronics section. Alexa may not be as versatile as Siri—yet—but it turned out to have a distinct advantage: a sense of purpose, and of its own limitations. Whereas Apple implicitly invites iPhone users to ask Siri anything, Amazon ships the Echo with a little cheat sheet of basic queries that it knows how to respond to: “Alexa, what’s the weather?” “Alexa, set a timer for 45 minutes.” “Alexa, what’s in the news?”

The cheat sheet’s effect is to lower expectations to a level that even a relatively simplistic artificial intelligence can plausibly meet on a regular basis. That’s by design, says Greg Hart, Amazon’s vice president in charge of Echo and Alexa. Building a voice assistant that can respond to every possible query is “a really hard problem,” he says. “People can get really turned off if they have an experience that’s subpar or frustrating.” So the company began by picking specific tasks that Alexa could handle with aplomb and communicating those clearly to customers.

At launch, the Echo had just 12 core capabilities. That list has grown steadily as the company has augmented Alexa’s intelligence and added integrations with new services, such as Google Calendar, Yelp reviews, Pandora streaming radio, and even Domino’s delivery. The Echo is also becoming a hub for connected home appliances: “ ‘Alexa, turn on the living room lights’ never fails to delight people,” Hart says.

When you ask Alexa a question it can’t answer or say something it can’t quite understand, it fesses up: “Sorry, I don’t know the answer to that question.” That makes it all the more charming when you test its knowledge or capabilities and it surprises you by replying confidently and correctly. “Alexa, what’s a kinkajou?” I asked on a whim one evening, glancing up from my laptop while reading a news story about an elderly Florida woman who woke up one day with a kinkajou on her chest. Alexa didn’t hesitate: “A kinkajou is a rainforest mammal of the family Procyonidae … ” Alexa then proceeded to list a number of other Procyonidae to which the kinkajou is closely related. “Alexa, that’s enough,” I said after a few moments, genuinely impressed. “Thank you,” I added.

“You’re welcome,” Alexa replied, and I thought for a moment that she—it—sounded pleased.

As delightful as it can seem, the Echo’s magic comes with some unusual downsides. In order to respond every time you say “Alexa,” it has to be listening for the word at all times. Amazon says it only stores the commands that you say after you’ve said the word Alexa and discards the rest. Even so, the enormous amount of processing required to listen for a wake word 24/7 is reflected in the Echo’s biggest limitation: It only works when it’s plugged into a power outlet. (Amazon’s newest smart speakers, the Echo Dot and the Tap, are more mobile, but one sacrifices the speaker and the other the ability to respond at any time.)
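The wake-word gate described above can be sketched over a stream of words. A real device runs a low-power keyword spotter on a continuous audio buffer, but the control flow is similar; everything in this sketch, including the `<pause>` end-of-utterance marker, is invented for illustration:

```python
# Toy sketch of wake-word gating: the device "hears" everything, but only
# the words following the wake word are kept (and would be sent to the
# cloud); everything else is discarded on the spot.
WAKE_WORD = "alexa"

def process_stream(words):
    """Return only the commands captured after each wake word; drop the rest."""
    captured = []
    recording = False
    command = []
    for w in words:
        if w.lower() == WAKE_WORD:
            recording = True      # start capturing the command that follows
            command = []
        elif w == "<pause>":      # end-of-utterance marker
            if recording and command:
                captured.append(" ".join(command))
            recording = False
        elif recording:
            command.append(w)
        # words heard while not recording are simply discarded
    return captured

heard = ("we were just chatting <pause> "
         "alexa tell me a joke <pause> "
         "more private conversation <pause>").split()
print(process_stream(heard))  # ['tell me a joke']
```

The privacy promise rests entirely on that `elif recording` branch: the device must constantly listen for the wake word, but only what follows it is retained.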

Even if you trust Amazon to rigorously protect and delete all of your personal conversations from its servers—as it promises it will if you ask it to—Alexa’s anthropomorphic characteristics make it hard to shake the occasional sense that it’s eavesdropping on you, Big Brother–style. I was alone in my kitchen one day, unabashedly belting out the Fats Domino song “Blueberry Hill” as I did the dishes, when it struck me that I wasn’t alone after all. Alexa was listening—not judging, surely, but listening all the same. Sheepishly, I stopped singing.

* * *

The notion that the Echo is “creepy” or “spying on us” might be the most common criticism of the device so far. But there’s a more fundamental problem. It’s one that is likely to haunt voice assistants, and those who rely on them, as the technology evolves and bores its way more deeply into our lives.

The problem is that conversational interfaces don’t lend themselves to the sort of open flow of information we’ve become accustomed to in the Google era. By necessity they limit our choices—because their function is to make choices on our behalf.

For example, a search for “news” on the Web will turn up a diverse and virtually endless array of possible sources, from Fox News to Yahoo News to CNN to Google News, which is itself a compendium of stories from other outlets. But ask the Echo, “What’s in the news?” and by default it responds by serving up a clip of NPR News’s latest hourly update, which it pulls from the streaming radio service TuneIn. Which is great—unless you don’t happen to like NPR’s approach to the news, or you prefer a streaming radio service other than TuneIn. You can change those defaults somewhere in the bowels of the Alexa app, but Alexa never volunteers that information. Most people will never even know it’s an option. Amazon has made the choice for them.
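That default-plus-buried-override pattern can be pictured as a simple settings lookup. This is a hypothetical sketch, not Amazon's actual implementation; the source names follow the article, but the structure is invented:

```python
# Toy sketch of default-source resolution: the assistant answers from a
# built-in default unless the user has dug into the app and overridden it.
DEFAULTS = {
    "news": "NPR News via TuneIn",
    "facts": "Wikipedia",
}

def resolve_source(query_type, user_settings=None):
    """Pick the source for a query: user override first, else the default."""
    user_settings = user_settings or {}
    return user_settings.get(query_type, DEFAULTS[query_type])

print(resolve_source("news"))                                   # NPR News via TuneIn
print(resolve_source("news", {"news": "BBC World Service"}))    # BBC World Service
```

The catch the article describes is that the assistant never surfaces the `user_settings` half of this lookup in conversation, so for most people the defaults are, in practice, the only choice.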

And how does Amazon make that sort of choice? The Echo’s cheat sheet doesn’t tell you that, and the company couldn’t give me a clear answer.

Alexa does take care to mention before delivering the news that it’s pulling the briefing from NPR News and TuneIn. But that isn’t always the case with other sorts of queries.

Let’s go back to our friend the kinkajou. In my pre-Echo days, my curiosity about an exotic animal might have sent me to Google via my laptop or phone. Just as likely, I might have simply let the moment of curiosity pass and not bothered with a search. Looking something up on Google involves just enough steps to deter us from doing it in a surprising number of cases. One of the great virtues of voice technology is to lower that barrier to the point where it’s essentially no trouble at all. Having an Echo in the room when you’re struck by curiosity about kinkajous is like having a friend sitting next to you who happens to be a kinkajou expert. All you have to do is say your question out loud, and Alexa will supply the answer. You literally don’t have to lift a finger.

That is voice technology’s fundamental advantage over all the human-computer interfaces that have come before it: In many settings, including the home, the car, or on a wearable gadget, it’s much easier and more natural than clicking, typing, or tapping. In the logic of today’s consumer technology industry, that makes its ascendance in those realms all but inevitable.

But consider the difference between Googling something and asking a friendly voice assistant. When I Google “kinkajou,” I get a list of websites, ranked according to an algorithm that takes into account all sorts of factors that correlate with relevance and authority. I choose the information source I prefer, then visit its website directly—an experience that could help to further shade or inform my impression of its trustworthiness. Ultimately, the answer comes not from Google, per se, but directly from some third-party authority, whose credibility I can evaluate as I wish.

A voice-based interface is different. The response comes one word at a time, one sentence at a time, one idea at a time. That makes it very easy to follow, especially for humans who have spent their whole lives interacting with one another in just this way. But it makes it very cumbersome to present multiple options for how to answer a given query. Imagine for a moment what it would sound like to read a whole Google search results page aloud, and you’ll understand why no one builds a voice interface that way.

That’s why voice assistants tend to answer your question by drawing from a single source of their own choosing. Alexa’s confident response to my kinkajou question, I later discovered, came directly from Wikipedia, which Amazon has apparently chosen as the default source for Alexa’s answers to factual questions. The reasons seem fairly obvious: It’s the world’s most comprehensive encyclopedia, its information is free and public, and it’s already digitized. What it’s not, of course, is infallible. Yet Alexa’s response to my question didn’t begin with the words, “Well, according to Wikipedia … ” She—it—just launched into the answer, as if she (it) knew it off the top of her (its) head. If a human did that, we might call it plagiarism.

The sin here is not merely academic. By not consistently citing the sources of its answers, Alexa makes it difficult to evaluate their credibility. It also implicitly turns Alexa into an information source in its own right, rather than a guide to information sources, because the only entity in which we can place our trust or distrust is Alexa itself. That’s a problem if its information source turns out to be wrong.

The constraints on choice and transparency might not bother people when Alexa’s default source is Wikipedia, NPR, or TuneIn. It starts to get a little more irksome when you ask Alexa to play you music, one of the Echo’s core features. “Alexa, play me the Rolling Stones” will queue up a shuffle playlist of Rolling Stones songs available through Amazon’s own streaming music service, Amazon Prime Music—provided you’re paying the $99 a year required to be an Amazon Prime member. Otherwise, the most you’ll get out of the Echo are 20-second samples of songs available for purchase. Care to guess which online retail giant you’ll be buying those songs from?

Amazon’s response is that Alexa does give you options and cite its sources—in the Alexa app, which keeps a record of your queries and its responses. When the Echo tells you what a kinkajou is, you can open the app on your phone and see a link to the Wikipedia article, as well as an option to search Bing. Amazon adds that Alexa is meant to be an “open platform” that allows anyone to connect to it via an API. The company is also working with specific partners to integrate their services into Alexa’s repertoire. So, for instance, if you don’t want to be limited to playing songs from Amazon Prime Music, you can now take a series of steps to link the Echo to a different streaming music service, such as Spotify Premium. Amazon Prime Music will still be the default, though: You’ll only get Spotify if you specify “from Spotify” in your voice command.
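To make the shape of such a third-party integration concrete, here is a rough sketch of what a skill handler might look like. A skill never sees raw audio: Amazon’s service does the speech recognition upstream and hands the skill an already-parsed “intent” as structured data, and the skill returns the text to be spoken back. The JSON shapes below mirror the Alexa Skills Kit format, but the intent name, slot, and replies are invented for this illustration.

```python
# Toy sketch of an Alexa-style skill handler. The request/response JSON
# shapes follow the Alexa Skills Kit format; "PlayMusicIntent" and the
# reply strings are made up for this example.

def handle_request(event):
    """Route an incoming, already-parsed voice request to a spoken reply."""
    req = event.get("request", {})
    if req.get("type") == "IntentRequest":
        intent = req["intent"]["name"]
        if intent == "PlayMusicIntent":
            # The skill only receives the parsed intent and its slots;
            # Amazon's service decided upstream which skill gets invoked.
            artist = req["intent"]["slots"]["Artist"]["value"]
            speech = f"Playing {artist} from your chosen service."
        else:
            speech = "Sorry, I don't know how to do that."
    else:
        speech = "Welcome to the demo skill."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example: a request as the service might deliver it after hearing
# "play the Rolling Stones from Spotify".
event = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "PlayMusicIntent",
            "slots": {"Artist": {"name": "Artist", "value": "the Rolling Stones"}},
        },
    }
}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

Note how the design itself enforces the single-answer pattern the essay describes: the platform, not the user, resolves which skill and which default service handles a given utterance.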

What’s not always clear is how Amazon chooses its defaults and its partners and what motivations might underlie those choices. Ahead of the 2016 Super Bowl, Amazon announced that the Echo could now order you a pizza. But that pizza would come, at least for the time being, from just one pizza-maker: Domino’s. Want a pizza from Little Caesars instead? You’ll have to order it some other way.

To Amazon’s credit, its choice of pizza source is very transparent. To use the pizza feature, you have to utter the specific command, “Alexa, open Domino’s and place my Easy Order.” The clunkiness of that command is no accident. It’s Amazon’s way of making sure that you don’t order a pizza by accident and that you know where that pizza is coming from. But it’s unlikely Domino’s would have gone to the trouble of partnering with Amazon if it didn’t think it would result in at least some number of people ordering Domino’s for their Super Bowl parties rather than Little Caesars.

None of this is to say that Amazon and Domino’s are going to conspire to monopolize the pizza industry anytime soon. There are obviously plenty of ways to order a pizza besides doing it on an Echo. Ditto for listening to the news, the Rolling Stones, a book, or a podcast. But what about when only one company’s smart thermostat can be operated by Alexa? If you come to rely on Alexa to manage your Google Calendar, what happens when Amazon and Google have a falling out?
When you say “Hello” to Alexa, you’re signing up for her party. Nominally, everyone’s invited. But Amazon has the power to ensure that its friends and business associates are the first people you meet.

* * *

Google Now’s “Speak now” screen (Business Insider/William Wei)

These concerns might sound rather distant—we’re just talking about niche speakers connected to niche thermostats, right? The coming sea change feels a lot closer once you think about the other companies competing to make digital assistants your main portal to everything you do on your computer, in your car, and on your phone. Companies like Google.

Google may be positioned best of all to capitalize on the rise of personal A.I. It also has the most to lose. From the start, the company has built its business around its search engine’s status as a portal to information and services. Google Now—which does things like proactively checking the traffic and alerting you when you need to leave for a flight, even when you didn’t ask it to—is a natural extension of the company’s strategy.

As early as 2009, Google began to work on voice search and what it calls “conversational search,” using speech recognition and natural language understanding to respond to questions phrased in plain language. More recently, it has begun to combine that with “contextual search.” For instance, as Google demonstrated at its 2015 developer conference, if you’re listening to Skrillex on your Android phone, you can now simply ask, “What’s his real name?” and Google will intuit that you’re asking about the artist. “Sonny John Moore,” it will tell you, without ever leaving the Spotify app.
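The substitution idea behind that demo can be shown with a deliberately tiny sketch: rewrite a follow-up question by swapping a possessive pronoun for whatever entity is currently on screen. Real contextual search involves far deeper language understanding; the function and the pronoun list here are purely illustrative.

```python
# Toy illustration of "contextual search": substitute a possessive
# pronoun in a follow-up question with the entity already in context.
# Real systems resolve references with full NLU, not word matching.

PRONOUNS = {"his", "her", "their", "its"}

def contextualize(query, context_entity):
    """Replace a possessive pronoun with the in-context entity's name."""
    words = query.lower().rstrip("?").split()
    rewritten = [
        f"{context_entity}'s" if word in PRONOUNS else word
        for word in words
    ]
    return " ".join(rewritten)

# The on-screen context (a Skrillex track playing) supplies the referent.
print(contextualize("What's his real name?", "Skrillex"))
# → what's Skrillex's real name
```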

It’s no surprise, then, that Google is rumored to be working on two major new products—an A.I.-powered messaging app or agent and a voice-powered household gadget—that sound a lot like Facebook M and the Amazon Echo, respectively. If something is going to replace Google’s on-screen services, Google wants to be the one that does it.

So far, Google has made what seems to be a sincere effort to win the A.I. assistant race without sacrificing the virtues—credibility, transparency, objectivity—that made its search page such a dominant force on the Web. (It’s worth recalling: A big reason Google vanquished AltaVista was that it didn’t bend its search results to its own vested interests.) Google’s voice search does generally cite its sources. And it remains primarily a portal to other sources of information, rather than a platform that pulls in content from elsewhere. The downside to that relatively open approach is that when you say “hello” to Google voice search, it doesn’t say hello back. It gives you a link to the Adele song “Hello.” Even then, Google isn’t above playing favorites with the sources of information it surfaces first: That link goes not to Spotify, Apple Music, or Amazon Prime Music, but to YouTube, which Google owns. The company has weathered antitrust scrutiny over allegations that this amounted to preferential treatment. Google’s defense was that it puts its own services and information sources first because its users prefer them.

* * *


If there’s a consolation for those concerned that intelligent assistants are going to take over the world, it’s this: They really aren’t all that intelligent. Not yet, anyway.

The 2013 movie Her, in which a mobile operating system gets to know its user so well that they become romantically involved, paints a vivid picture of what the world might look like if we had the technology to carry Siri, Alexa, and the like to their logical conclusion. The experts I talked to, who are building that technology today, almost all cited Her as a reference point—while pointing out that we’re not going to get there anytime soon.

Google recently rekindled hopes—and fears—of super-intelligent A.I. when its AlphaGo software defeated the world champion in a historic Go match. As momentous as the achievement was, designing an algorithm to win even the most complex board game is trivial compared with designing one that can understand and respond appropriately to anything a person might say. That’s why, even as artificial intelligence is learning to recommend songs that sound like they were hand-picked by your best friend or navigate city streets more safely than any human driver, A.I. still has to resort to parlor tricks—like posing as a 13-year-old struggling with a foreign language—to pass as human in an extended conversation. The world is simply too vast, language too ambiguous, the human brain too complex for any machine to model it, at least for the foreseeable future.

But if we won’t see a true full-service A.I. in our lifetime, we might yet witness the rise of a system that can approximate some of its capabilities—comprising not a single, humanlike Her, but a million tiny hims carrying out small, discrete tasks handily. In January, the Verge’s Casey Newton made a compelling argument that our technological future will be filled not with websites, apps, or even voice assistants, but with conversational messaging bots. Like voice assistants, these bots rely on natural language understanding to carry on conversations with us. But they will do so via the medium that has come to dominate online interpersonal interaction, especially among the young people who are the heaviest users of mobile devices: text messaging. For example, Newton points to “Lunch Bot,” a relatively simple agent that lived in the wildly popular workplace chat program Slack and existed for a single, highly specialized purpose: to recommend the best place for employees to order their lunch from on a given day. It soon grew into a venture-backed company called Howdy.

I have a bot in my own life that serves a similarly specialized yet important role. While researching this story, I ran across a company called x.ai whose mission is to build the ultimate virtual scheduling assistant. It’s called Amy Ingram, and if its initials don’t tip you off, you might interact with it several times before realizing it’s not a person. (Unlike some other intelligent assistant companies, x.ai gives you the option to choose a male name for your assistant instead: Mine is Andrew Ingram.) Though it’s backed by some impressive natural language tech, x.ai’s bot does not attempt to be a know-it-all or do-it-all; it doesn’t tell jokes, and you wouldn’t want to date it. It asks for access to just one thing—your calendar. And it communicates solely by email. Just cc it on any thread in which you’re trying to schedule a meeting or appointment, and it will automatically step in and take over the back-and-forth involved in nailing down a time and place. Once it has agreed on a time with whomever you’re meeting—or, perhaps, with his or her own assistant, whether human or virtual—it will put all the relevant details on your calendar. Have your A.I. cc my A.I.

For these bots, the key to success is not growing so intelligent that they can do everything. It’s staying specialized enough that they don’t have to.

“We’ve had this A.I. fantasy for almost 60 years now,” says Dennis Mortensen, x.ai’s founder and CEO. “At every turn we thought the only outcome would be some human-level entity where we could converse with it like you and I are [conversing] right now. That’s going to continue to be a fantasy. I can’t see it in my lifetime or even my kids’ lifetime.” What is possible, Mortensen says, is “extremely specialized, verticalized A.I.s that understand perhaps only one job, but do that job very well.”

Yet those simple bots, Mortensen believes, could one day add up to something more. “You get enough of these agents, and maybe one morning in 2045 you look around and that plethora—tens of thousands of little agents—once they start to talk to each other, it might not look so different from that A.I. fantasy we’ve had.”

That might feel a little less scary. But it still leaves problems of transparency, privacy, objectivity, and trust—questions that are not new to the world of personal technology and the Internet but are resurfacing in fresh and urgent forms. A world of conversational machines is one in which we treat software like humans, letting them deeper into our lives and confiding in them more than ever. It’s one in which the world’s largest corporations know more about us, hold greater influence over our choices, and make more decisions for us than ever before. And it all starts with a friendly “Hello.”

What’s Coming From Apple in 2016: Apple Watch 2, iPhone 6c, iPhone 7, Skylake MacBooks, and More

With the launch of the Apple Watch, the iPhone 6s and the 6s Plus, the new Apple TV, and the iPad Pro, 2015 was a major year for Apple. The Apple Watch introduced a whole new category, the iPhone 6s and 6s Plus saw the debut of 3D Touch, and the iPad Pro brought Apple’s largest iOS device yet.

iOS 9, watchOS 2, and OS X 10.11 El Capitan brought refinements to Apple’s operating systems, and the fourth-generation Apple TV came with a brand new operating system, tvOS. 2015 saw a huge number of new products and software updates, and 2016 promises to be just as exciting.

A second-generation Apple Watch is in the works and could launch in early 2016, while new flagship iPhones, the iPhone 7 and the iPhone 7 Plus, are coming in late 2016. Those who love smaller devices will be excited to hear a 4-inch iPhone 6c may be coming early in 2016, and Apple’s Mac lineup is expected to gain Skylake chip updates.

New software, including iOS 10, OS X 10.12, watchOS 3, and an upgraded version of tvOS are all expected in 2016, and Apple will undoubtedly work on improving services like HomeKit, Apple Pay, and Apple Music.

As we did for 2014 and 2015, we’ve highlighted Apple’s prospective 2016 product plans, outlining what we might see from Apple over the course of the next 12 months based on current rumors, past releases, and logical upgrade choices.

Apple Watch 2 (Early 2016)

A second-generation Apple Watch is rumored to be debuting in March of 2016, approximately one year after the launch of the first Apple Watch. A March event could see the introduction of the device, with shipments beginning in April 2016.

Early rumors suggest the Apple Watch 2 may include some of the sensors that were nixed from the first version, including sensors for skin conductivity, blood oxygen level, and blood pressure. The device may be thinner than the first Apple Watch, and it could include features like a FaceTime camera to allow Apple Watch users to make and receive FaceTime calls, as well as an upgraded Wi-Fi chip that may allow the Apple Watch to do more without an iPhone.


The Apple Watch 2 could be thinner than the existing Apple Watch, with new sensors and a camera.

It is not clear if the new Apple Watch will continue to use the same lugs and bands as the first-generation Apple Watch, but given the large number of bands owned by Apple Watch users, it seems likely the device won’t require users to purchase all new hardware. There have been no rumors on the prospective hardware, aside from early analyst predictions pointing towards the thinner size.

Regardless, the second-generation Apple Watch is likely to be accompanied by the launch of bands in new colors and designs as Apple has set a precedent of changing the available bands multiple times per year.

Full Apple Watch roundup

iPhone 7 and 7 Plus (Late 2016)

The iPhone 7 and the iPhone 7 Plus will come at the tail end of 2016, likely making their debut in September in line with past iPhone launches. Apple is expected to continue offering the phones in 4.7 and 5.5-inch sizes, but we can count on a redesigned external chassis because 2016 marks a major upgrade year.

Details about the exterior of the phone and its internal updates are largely unknown at this early date, but based on past upgrades, we can expect a thinner body, an improved processor, and a better camera. Flagship features like 3D Touch and Touch ID will continue to be available, and Apple likely has additional features planned to make its latest iPhone stand out.

Taking into account past rumors and acquisitions, the camera is one area that could see significant improvements, perhaps incorporating a dual-lens system that offers DSLR quality in a compact size. Some of these rumors were originally attached to the iPhone 6s, but could have been delayed for later devices, especially given the 2015 acquisition of Israeli camera company LinX.

The current iPhone 6s and 6s Plus. The iPhone 7 is rumored to be slimmer with no antenna bands and a new material composition.

Apple is expected to continue using in-cell display panels for the iPhone 7, which will allow it to reduce the thickness of the device, perhaps making it as thin as the 6.1mm iPod touch. The iPhone 7 is also likely to include a TFT-LCD display as the AMOLED technology Apple is rumored to be working on is not yet ready for use in iOS devices.

Analyst Ming-Chi Kuo, who often accurately predicts Apple’s plans, has said RAM could be a differentiating factor between the two iPhone 7 models. The smaller 4.7-inch iPhone 7 may continue to ship with 2GB RAM, while the larger 5.5-inch iPhone 7 Plus may ship with 3GB RAM.

Other rumors about the iPhone 7 have pointed towards the removal of the headphone jack in favor of headphones that attach to the device using the Lightning port, a change that may also help Apple shave 1mm off of the thickness of the iPhone.

Some early rumors out of the Asian supply chain have suggested the iPhone 7 may include a strengthened, waterproof frame that ditches Apple’s traditional aluminum casing for an all new material and does away with the prominent rear antenna bands on the iPhone 6, iPhone 6 Plus, iPhone 6s, and iPhone 6s Plus. The rumors of a waterproof, dust-proof casing are from somewhat unreliable sources and should not be viewed as fact until further evidence becomes available.

Full iPhone 7 roundup

iPhone 6c (Early 2016)

Since the launch of the larger-screened iPhone 6 and iPhone 6 Plus, Apple has been rumored to be working on an upgraded 4-inch iPhone for customers who prefer smaller screens. The “iPhone 6c” is rumored to be launching during the first months of 2016, and it’s another device that could potentially make an appearance at Apple’s rumored March event. If the 4-inch iPhone launches in early 2016, it will be the first iPhone to launch outside of the fall months since 2011.

Apple’s 4-inch iPhone is described as a cross between an iPhone 5s and an iPhone 6, with an aluminum body and iPhone 6-style curved cover glass. There have been some sketchy rumors suggesting it will come in multiple colors like the iPod touch, but that has not yet been confirmed. KGI Securities analyst Ming-Chi Kuo has pointed towards “two or three” color options for the device, but he did not specify which colors.

Rumors have disagreed over whether the iPhone 6c will include an A8 processor or an A9 processor, but Kuo believes Apple will use the same A9 processor that’s used in the iPhone 6s. Other rumors out of the Asian supply chain suggest Apple could also include 2GB RAM in the device, and with an A9 processor and 2GB RAM, the iPhone 6c could be on par with the iPhone 6s when it comes to raw performance.

Other features rumored for the iPhone 6c include a 1,642 mAh battery that’s somewhat larger than the battery used in the iPhone 5s, an 8-megapixel rear-facing camera with an ƒ/2.2 aperture, a 1.2-megapixel front-facing camera, 802.11ac Wi-Fi, and Bluetooth 4.1. The iPhone 6c is not expected to include 3D Touch, as it is a flagship feature of the iPhone 6s, but it is likely to include NFC to enable Apple Pay functionality.

Full iPhone 6c roundup

iPad Air 3 (Early-to-Mid 2016)

Since the iPad launched in 2010, Apple has upgraded the tablet on a yearly basis, producing a new version each fall. In 2015, Apple did not upgrade the iPad Air 2, instead focusing on releasing the iPad Pro and the iPad mini 4. Combined with the minor iPad mini 3 update of 2014, this may signal Apple’s intention to update its iPads on an 18-month to two-year schedule going forward.

Recent rumors have suggested that Apple is developing an iPad Air 3 that will launch during the first half of 2016. Little is known about the third-generation iPad Air at this time, but it will include an upgraded processor to improve performance. It may also offer RAM upgrades and camera improvements, but it will not include the 3D Touch feature introduced with the iPhone 6s and the iPhone 6s Plus due to manufacturing difficulties expanding the technology to a larger screen size.

Apple likely has something planned to make the iPad Air 3 stand out, but it is not yet clear what that might be.

Full iPad Air roundup

MacBook Air (Early-to-Mid 2016)

Following the launch of the Retina MacBook in April of 2015, the future of the MacBook Air became uncertain. There has been speculation that the MacBook line will subsume the MacBook Air line as component prices decrease, but some recent rumors have led to hope that the MacBook Air will continue to exist alongside the Retina MacBook and the Retina MacBook Pro, offering a compromise between performance, portability, and cost.

Though it lacks the power of the Retina MacBook Pro and the Retina display of the MacBook, the MacBook Air continues to be popular with consumers for its low price point.

Current rumors suggest Apple will continue producing the MacBook Air, with plans to launch 13 and 15-inch MacBook Air models during the third quarter of 2016, perhaps unveiling the machines around the annual Worldwide Developers Conference.

The MacBook Air’s design has remained unchanged since 2010, so a 2016 redesign that focuses on a slimmer chassis with bigger screens and revamped internals is not out of the realm of possibility. Apple has been increasing the sizes of its devices, introducing a larger 5.5-inch iPhone and a 12.9-inch iPad Pro, so a 15-inch MacBook Air also seems reasonable. The rumor does not mention an 11-inch MacBook Air, suggesting it will potentially be phased out in favor of larger screen sizes and to let the 12-inch Retina MacBook stand out as the sole ultraportable machine.


Current 11 and 13-inch MacBook Air compared to 15-inch Retina MacBook Pro

If Apple does introduce a 2016 MacBook Air, it will likely include Intel’s next-generation Skylake chips, which will offer 10 percent faster CPU performance, 34 percent faster Intel HD graphics, and 1.4 hours of additional battery life compared to the equivalent Broadwell chips in current MacBook Air models. Skylake U-Series 15-watt chips appropriate for the MacBook Air will be shipping in early 2016.

While the current rumor has suggested the new MacBook Air models will launch in the third quarter of 2016, they could potentially be ready to debut earlier in the year. The last MacBook Air update was in March of 2015 and Apple may not want to wait more than a full year before introducing a refresh.

As there haven’t been many rumors about a new MacBook Air at this time, an update should not be viewed as a sure thing. Supply chain information is not always accurate, and there’s a chance the information shared about the alleged 13 and 15-inch MacBook Air could instead apply to the Retina MacBook Pro.

Full MacBook Air roundup

Retina MacBook Pro (Early-to-Mid 2016)

Over the course of the past two years, Intel’s chip delays have significantly impacted Apple’s Retina MacBook Pro release plans, especially for the 15-inch model. Broadwell delays resulted in staggered update timelines for 13 and 15-inch models, which were last updated in March and May of 2015, respectively.

While the 13-inch Retina MacBook Pro was updated with Broadwell chips, the 15-inch machine has continued to offer Haswell processors, and Apple’s upgrade path for the 15-inch Retina MacBook Pro isn’t quite clear.

Broadwell chips appropriate for a 15-inch Retina MacBook Pro update became available in June of 2015, so Apple could release an updated 15-inch Retina MacBook Pro in early 2016 using these chips. Alternatively, and more likely, Apple could bypass Broadwell altogether in favor of a Skylake update for both the 13 and 15-inch Retina MacBook Pro.

Skylake U-Series 28-watt chips appropriate for the 13-inch Retina MacBook Pro will begin shipping from Intel in early 2016, as will 45-watt H-Series chips with Intel Iris Pro graphics appropriate for the 15-inch Retina MacBook Pro. Exact shipping timelines for the chips are not yet known, but with an early 2016 release timeline, new Retina MacBook Pro models could come within the first few months of the year, perhaps being unveiled at the aforementioned rumored March event. Should the chips come at different times, Apple could stagger the 2016 MacBook Pro updates as it did in 2015.

Aside from prospective chip updates, little is known about the next-generation Retina MacBook Pro. Given that it’s been four years since the machine was redesigned, it’s possible we could see a refreshed, slimmer body and an improved Retina display, but there have been no rumors to suggest this is the case.

Full Retina MacBook Pro roundup

MacBook (Early-to-Mid 2016)

Skylake Core M chips appropriate for a second-generation Retina MacBook are already available, meaning a refreshed Retina MacBook could be introduced at any moment. The new Core M chips offer 10 hours of battery life and 10 to 20 percent faster CPU performance compared to the Broadwell chips used in the first-generation machine.

The most notable upgrade in a second-generation Retina MacBook that uses Skylake chips would come in the form of graphics improvements, as the Skylake Core M chips offer up to 40 percent faster graphics performance.

Beyond Skylake chips, it is not known what other improvements Apple might offer in a second-generation Retina MacBook. Given that the design was just introduced in April of 2015, the new machine will undoubtedly use the same chassis, but a Rose Gold color option to match the new Rose Gold iPhone 6s is a possibility.

If Apple is planning to introduce new Macs at a rumored Apple Watch-centric event in March, that may be when the new Retina MacBook will debut.

Full MacBook roundup

iMac (Late 2016)

Apple’s iMac, like its MacBook Pro, has been impacted by Intel’s chip delays. Current higher-end models already use Skylake chips, but lower-end models continue to use Broadwell chips. Given that the iMac lineup was just refreshed in October of 2015, another update may not come until late in 2016.

Apple’s future chip plans for the iMac are difficult to decipher, as Intel does not plan to introduce desktop-class socketed Skylake chips with integrated Iris or Iris Pro graphics that would be appropriate for lower-end iMacs that use integrated graphics instead of discrete graphics.

With no prospective chips available for the lower-end iMacs, it is not clear what Apple is going to do in terms of processor upgrades, making it nearly impossible to predict when we might see the next iMac update or what it might include. Intel plans to release Kaby Lake processors in late 2016, but details on Kaby Lake chips appropriate for the iMac are not available, and it’s possible Kaby Lake could see delays.

There are also no rumors on other features that could be included with a next-generation iMac update, but going forward, Apple may drop the non-Retina 21.5-inch models in favor of an all-Retina lineup as hardware prices come down.

Full iMac roundup

Software Updates

iOS 10 (Late 2016)
Each September, Apple launches an updated version of iOS to accompany its latest iPhones. In 2016, the company is expected to debut iOS 10, the successor to iOS 9. iOS 8 and iOS 9 both focused more on features than design, so it is quite possible iOS 10 will be an update that introduces more significant design changes, similar to iOS 7.

Because iOS 9 just launched three and a half months ago, iOS 10 rumors have not yet begun. As the year progresses, we’ll get a glimpse at what to expect in September, but for now, all we know is that there’s an update coming.

Full iOS 9 roundup

OS X 10.12 (Late 2016)
Along with iOS, OS X is also updated on a yearly basis, with an update coming each fall around September or October. In 2016, we expect to see the debut of OS X 10.12, the followup to OS X 10.11 El Capitan.

El Capitan was an update designed to introduce bug fixes and build on the features that debuted with OS X 10.10 Yosemite, so it’s likely OS X 10.12 will be a bigger standalone update that includes design tweaks and new features.

Full OS X 10.11 El Capitan roundup

watchOS 3 (Early 2016)
watchOS is the software that runs on the Apple Watch, and in 2016, Apple is expected to launch a third version of the software. watchOS debuted alongside the Apple Watch in April, while watchOS 2 came out just months later in September with iOS 9.

Apple has thus far tied its watchOS releases to iOS releases, but it’s quite possible that watchOS 3 will launch alongside an updated second-generation Apple Watch rather than alongside iOS 10 in September. A second-generation Apple Watch will potentially require some significant software updates if major hardware changes like new sensors or cameras are introduced.

New versions of the iPhone ship with new versions of iOS, so it’s logical to expect the same thing to happen with the Apple Watch, but thus far there are no rumors about the watchOS 3 update or what features might be included.

Full watchOS roundup

tvOS 10?
Apple TV software traditionally has not seen the same major software updates as iOS devices and the Apple Watch, so Apple’s plans for tvOS are not clear. So far, there have been some minor tvOS updates, but it is not yet known if Apple will push major version upgrades with new features and design changes on a yearly basis.

If Apple is planning to offer iOS-style updates for tvOS, the first major tvOS software update could come in the fall, perhaps alongside iOS 10.

Other Possibilities

Fifth-generation Apple TV
Shortly after the launch of the fourth-generation Apple TV, there was a sketchy rumor suggesting development and production had already begun on a fifth-generation Apple TV with an upgraded CPU. While it’s possible Apple has plans to release an updated Apple TV in 2016, it’s highly unlikely such a device is already in production and it’s equally unlikely Apple would release it before the fall of 2016.

Prior to the launch of the fourth-generation Apple TV, the set-top box went multiple years without a significant update. It is not clear how often Apple will update the Apple TV now that a new version has been released, so we will need to wait until later in the year for more information on the Apple TV upgrade schedule.

Full Apple TV roundup

iPad Pro 2
The iPad Pro was released in November of 2015 and Apple’s plans for a second-generation device are not yet known. For several years, Apple was updating its iPads on a yearly basis, but its more recent update timelines suggest it is potentially moving to an 18- or 24-month upgrade cycle for iPads, making it unclear when we might see an iPad Pro 2.

With the iPad Air line, for example, Apple introduced an iPad Air 2 in 2014 but neglected to upgrade it to an iPad Air 3 in 2015. The iPad mini followed a similar pattern: The 2014 iPad mini 3 added only Touch ID to the 2013 iPad mini 2, while the 2015 iPad mini 4 featured a more significant revamp.

An iPad Pro 2 could potentially debut in 2016 with an updated processor and other improved features, but it’s also just as likely Apple will wait until mid-to-late 2017 to introduce a second-generation iPad Pro. More information on Apple’s iPad Pro plans will come later in 2016, firming up potential release timelines.

Full iPad Pro roundup

iPad mini 5
Apple introduced the iPad mini 4 in late 2015, following the launch of the iPad mini 2 in 2013 and the minor iPad mini 3 update in 2014. With Apple seemingly shifting away from a yearly upgrade cycle for its iPad lineup, we may not see an iPad mini 5 in 2016.

Instead, 2016 may see the launch of an updated iPad Air 3, followed by an iPad mini update in 2017. Apple’s iPad sales have been flagging in recent years as customers do not update their tablets as often as their phones, which has led Apple to try different upgrade strategies and cycles. With Apple’s shifting plans, it is not yet clear when the iPad mini will see another update.

Ahead of the launch of the iPad mini 4, there were some rumors that Apple would discontinue its smallest tablet, but with the iPad mini 4, Apple has signaled its intention to continue offering the iPad in three screen sizes to meet different customer needs.

Full iPad mini roundup

Mac Pro
The Mac Pro launched in late 2013, and since then, it has not seen an update. It’s quite possible 2016 will be the year Apple will refresh the machine, as potential references to an updated Mac Pro were discovered in OS X El Capitan.

Grantley Xeon E5 V3 Haswell-EP processors appropriate for a high-end Mac Pro upgrade were introduced in 2014, but Apple may be waiting on E5 V4 Broadwell-EP chips for the top-of-the-line Mac Pro that are set to launch in the first half of 2016. E3 V4 chips appropriate for lower-end machines are already available, as are Skylake E3 V5 chips.

If this is the case, a Mac Pro launch will happen after the chips become available, with the machine perhaps seeing a mid-to-late 2016 debut.

Updated AMD FirePro graphics cards were introduced in 2015, as were cards built on AMD’s Fury platform, both of which could potentially be used in a next-generation Mac Pro. Fury graphics are more likely, and an updated Mac Pro could also include faster memory, improved storage, and Thunderbolt 3 connectivity introduced through a shift to USB-C.

In the past, prior to its 2013 redesign, the Mac Pro was updated in 2006, 2008, 2009, 2010, and 2012.

Full Mac Pro roundup

Mac mini
The Mac mini was last updated in 2014, introducing Haswell processors and features like 802.11ac WiFi and Thunderbolt 2. Given that it’s now been two years since the update, Apple could introduce new Mac mini models with Skylake processors in 2016. Two years is the longest the Mac mini has gone without a refresh.

Apple’s Mac mini line uses the same U-Series chips that are found in the MacBook Air and the 13-inch Retina MacBook Pro, and Skylake chips appropriate for an updated Mac mini will be shipping in the first months of 2016. A new Mac mini may debut in early-to-mid 2016 alongside a refreshed MacBook Air and MacBook Pro.

In the past, the Mac mini saw upgrades in 2006, 2007, 2009, 2010, 2011, and 2012, then went without a refresh for two years following the late 2012 update.

Is Your Messaging App Encrypted?


OnePlus X Applies Artificial-Scarcity Marketing Strategy to Reach Economic Equilibrium



The first Apple Watch may not be for you — but someday soon, it will change your world

A column from Farhad Manjoo that examines how technology is changing

It took three days — three long, often confusing and frustrating days — for me to fall for the Apple Watch. But once I fell, I fell hard.

First there was a day to learn the device’s initially complex user interface. Then another to determine how it could best fit into my life. And still one more to figure out exactly what Apple’s first major new product in five years is trying to do — and, crucially, what it isn’t.

It was only on Day 4 that I began appreciating the ways in which the elegant $650 computer on my wrist was more than just another screen. By notifying me of digital events as soon as they happened, and letting me act on them instantly, without having to fumble for my phone, the Watch became something like a natural extension of my body — a direct link, in a way that I’ve never felt before, from the digital world to my brain. The effect was so powerful that people who’ve previously commented on my addiction to my smartphone started noticing a change in my behavior; my wife told me that I seemed to be getting lost in my phone less than in the past. She found that a blessing.




The Apple Watch is far from perfect, and, starting at $350 and going all the way up to $17,000, it isn’t cheap. Though it looks quite smart, with a selection of stylish leather and metallic bands that make for a sharp departure from most wearable devices, the Apple Watch works like a first-generation device, with all the limitations and flaws you’d expect of brand-new technology.

What’s more, unlike previous breakthrough Apple products, the Watch’s software requires a learning curve that may deter some people. There’s a good chance it will not work perfectly for most consumers right out of the box, because it is at its best only after you fiddle with various software settings to personalize it. Indeed, to a degree unusual for a new Apple device, the Watch is not suited for tech novices. It is designed for people who are inundated with notifications coming in through their phones, and for those who care to think about, and want to try to manage, the way the digital world intrudes on their lives.

Still, even if it’s not yet for everyone, Apple is on to something with the device. The Watch is just useful enough to prove that the tech industry’s fixation on computers that people can wear may soon bear fruit. In that way, using the Apple Watch over the last week reminded me of using the first iPhone. Apple’s first smartphone was revolutionary not just because it did what few other phones could do, but also because it showed off the possibilities of a connected mobile computer. As the iPhone and its copycats became more powerful and ubiquitous, the mobile computer became the basis of a wide range of powerful new tech applications, from messaging to ride-sharing to payments.

Similarly, the most exciting thing about the Apple Watch isn’t the device itself, but the new tech vistas that may be opened by the first mainstream wearable computer. On-body devices have obvious uses in health care and payments. As the tech analyst Tim Bajarin has written, Apple also seems to be pushing a vision of the Watch as a general-purpose remote control for the real world, a nearly bionic way to open your hotel room, board a plane, call up an Uber or otherwise have the physical world respond to your desires nearly automatically.




These situations suggest that the Watch may push us to new heights of collective narcissism. Yet in my week with the device, I became intrigued by the opposite possibility — that it could address some of the social angst wrought by smartphones. The Apple Watch’s most ingenious feature is its “taptic engine,” which alerts you to different digital notifications by silently tapping out one of several distinct patterns on your wrist. As you learn the taps over time, you will begin to register some of them almost subconsciously: incoming phone calls and alarms feel throbbing and insistent, a text feels like a gentle massage from a friendly bumblebee, and a coming calendar appointment is like the persistent pluck of a harp. After a few days, I began to get snippets of information from the digital world without having to look at the screen — or, if I had to look, I glanced for a few seconds rather than minutes.
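The taptic vocabulary described above is essentially a lookup from notification type to a distinct haptic pattern. As a purely illustrative sketch (the type names and pattern encodings here are invented, not Apple’s actual API), the idea could be modeled like this:

```python
# Hypothetical sketch: map notification types to haptic patterns,
# encoded as (pulse_ms, pause_ms, repetitions). Names and values
# are illustrative only, not Apple's WatchKit API.
HAPTIC_PATTERNS = {
    "phone_call": (400, 100, 6),   # throbbing and insistent
    "alarm":      (400, 100, 6),
    "text":       (80, 60, 3),     # a gentle, bumblebee-like massage
    "calendar":   (150, 500, 4),   # a persistent, harp-like pluck
}

def pattern_for(notification_type):
    """Return the haptic pattern for a notification type,
    falling back to a single short default tap for unknown types."""
    return HAPTIC_PATTERNS.get(notification_type, (100, 0, 1))
```

The point of the design is that each pattern stays distinguishable by feel alone, which is what lets the wearer register a call versus a text without ever looking at the screen.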

If such on-body messaging systems become more pervasive, wearable devices can become more than a mere flashy accessory to the phone. The Apple Watch could usher in a transformation of social norms just as profound as those we saw with its brother, the smartphone — except, amazingly, in reverse.

For now, the dreams are hampered by the harsh realities of a new device. The Watch is not an iPhone on your wrist. It has a different set of input mechanisms — there’s the digital crown, a knob used for scrolling and zooming, and a touch screen that can be pressed down harder for extra options. There is no full on-screen keyboard, so outbound messages are confined to a set of default responses, emoji and, when you’re talking to other Watch users, messages that you can draw or tap.

The Watch also relies heavily on voice dictation and the voice assistant Siri, which is more useful on your wrist than on your phone, but still just as hit-or-miss. I grew used to calling on Siri to set kitchen timers or reminders while I was cooking, or to look up the weather while I was driving. And I also grew used to her getting these requests wrong almost as often as she got them right.



An Apple Watch app allows hotel guests to open the door to their room by touching the watch face to the door.

The Watch also has a completely different software design from a smartphone. Though it has a set of apps, interactions are driven more by incoming notifications and by summary views of some apps, known as glances. But because there isn’t much room on the watch’s screen for visual cues indicating where you are — in an app, a notification or a glance — in the early days, you’ll often find yourself lost, and something that works in one place won’t work in another.

Finding nirvana with the watch involves adjusting your notification settings on your phone so that your wrist does not constantly buzz with information that doesn’t make sense on the Watch — like Facebook status updates, messages from Snapchat, or every single email about brownies in the office kitchen. Apple’s notification settings have long been unduly laborious; battling them while your hand is buzzing off the hook is an extra level of discomfort.
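The tuning described above — deciding, app by app, which notifications are allowed to reach the wrist — amounts to a simple allow-list. As a purely illustrative sketch (the app names and rule structure are invented; the real behavior lives in iOS’s notification settings, not in user code):

```python
# Hypothetical per-app forwarding rules for watch notifications.
WATCH_RULES = {
    "Messages": True,
    "Calendar": True,
    "Facebook": False,   # status updates don't make sense on the wrist
    "Snapchat": False,
    "Mail":     False,   # no office-kitchen brownie alerts, please
}

def forward_to_watch(app, rules=WATCH_RULES, default=False):
    """Return True if a notification from `app` should buzz the watch.
    Unlisted apps fall back to `default`; opt-in keeps the wrist quiet."""
    return rules.get(app, default)

def watch_queue(notifications, rules=WATCH_RULES):
    """Filter a stream of (app, message) pairs down to watch-worthy ones."""
    return [(app, msg) for app, msg in notifications
            if forward_to_watch(app, rules)]
```

Making the fallback opt-in rather than opt-out captures the review’s advice: by default, nothing new gets to buzz your wrist until you decide it deserves to.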

Other problems: Third-party apps are mostly useless right now. The Uber app didn’t load for me, the Twitter app is confusing, and the app for Starwood hotels mysteriously deleted itself and then hung on loading when I reinstalled it. In the end, though, it did let me open a room at the W Hotel in Manhattan just by touching the watch face to the door.

I also used the Watch to pay for New York cabs and groceries at Whole Foods, and to present my boarding pass to security agents at the airport. When these encounters worked, they were magical, like having a secret key to unlock the world right on my arm. What’s most thrilling about the Apple Watch, unlike other smartwatches I’ve tried, is the way it invests a user with a general sense of empowerment. If Google brought all of the world’s digital information to our computers, and the iPhone brought it to us everywhere, the Watch builds the digital world directly into your skin. It takes some getting used to, but once it clicks, this is a power you can’t live without.

The New York Times announced last week that it had created “one-sentence stories” for the Apple Watch, so let me end this review with a note that could fit on the watch’s screen: The first Apple Watch may not be for you — but someday soon, it will change your world.

WhatsApp Call: Details on the Free VoIP Telephony Service

WhatsApp Call has officially launched. Shown here, in pictures, is the smartphone messenger’s free VoIP service: the new functions and menus that the WhatsApp Android application now brings with it.

WhatsApp Call: What Using It Looks Like in Practice

Once a WhatsApp Call has been accepted, the conversation can be conducted just like a conventional call over the cellular network. Transmission quality depends on the smartphone used and, of course, on the internet connection available to the smartphone during the call.

During the call, the other party’s name and profile picture are displayed, and the virtual red button ends the conversation. In addition, the speakerphone function can be switched on and off, your own microphone can be muted and unmuted, and it is also possible to switch temporarily to the chat window to send a text message. The other party does not see that message immediately, however, but only once they open the chat themselves.

The fifth page shows what happens when you try to call a WhatsApp user who is already on a WhatsApp Call.

WhatsApp Call During an Ongoing Voice Connection

Apple Watch Event: “Uhrsache” and Effect (a pun on the German “Ursache und Wirkung”, cause and effect, and “Uhr”, watch)

Spiegel Online offers a hard-nosed analysis:

“Only thorough testing will show whether using the new watches really is as intuitive and pleasant as Cook and his team emphasized time and again at the unveiling. What is certain is that Apple, to this day, enjoys a head start in trust when it comes to introducing new devices. Steve Jobs once promised: when we touch something, we make it in such a way that customers will love it. If the Apple Watch delivers on that promise, it can once again bring a device category to its breakthrough where others have taken on the thankless pioneering role, just as happened with MP3 players, touchscreen phones, and portable touch computers.”

And it sums up the fears of everyone involved: employees, fan boys, committed lovers of innovation, and shareholders:

“But if the Apple Watch proves to be superfluous frippery, an overly clunky appendage with too little real added value for its price, then the watch can achieve the opposite: if the company demonstrates even once that not every one of its products automatically becomes an indispensable everyday object, that could be the beginning of a rapid decline.”



Spiegel Online sums up:

“The announcement with probably the most lasting impact, however, is also the least spectacular. The contactless payment service Apple Pay is once again a polished copy of offerings already on the market; think of Google Wallet. Android phones with NFC chips have existed for a long time, yet paying by phone has so far caught on nowhere. Apple, however, still holds a smartphone market share of 40 percent in the US, and Cook’s team has evidently understood how to ally itself with many large retail and restaurant chains.

If Apple manages to reach large customer numbers quickly with its new devices, and history suggests that this could succeed, then paying with the phone, or the watch, could all at once become an everyday gesture.

For retail chains, acquiring the corresponding hardware could suddenly become attractive after all, given a sufficiently large, affluent clientele, and that is exactly what Apple’s customers are. And once the scanners are standing at the checkouts, the NFC chips in all other makes of phone are suddenly back in play as well. If that happens, if our everyday digital companions also become our preferred means of payment, it will certainly be convenient, but it will also bring entirely new privacy and security problems with it.”

Der Standard adds:

“The US magazine Fortune once honored Cook as ‘the genius behind Steve.’ As the man responsible for day-to-day operations, he made sure that, once the bold visions were implemented, the books showed black ink. Now, with the computer watch, Cook must prove that his Apple has the same visionary power as in the Jobs era. That image helps the company sell millions of its expensive premium smartphones and tablets worldwide.”
