Tired of traffic jams and tailpipe exhaust? The design of electric "straddling buses" lets cars drive underneath them, which could help reduce air pollution. Also known as the land airbus, the new invention is less costly than a subway system.

Apple has big plans for Siri that will make the company’s famous assistant a lot more useful, according to a new report.
The company will soon open up Siri to developers with new software tools that will allow Siri to tap into more third-party services, according to the report in The Information. Apple is also working on a new piece of hardware, an Amazon Echo rival that will work with Apple's smart home platform.
Apple plans to put Siri in the hands of developers with a new software development kit (SDK) that will reportedly be called the Siri SDK. The Siri SDK could launch at next month's Worldwide Developers Conference, where Apple typically previews the newest version of iOS and its latest developer tools.
The SDK will require "some work" by developers to make their apps accessible to Siri, the report says.
Siri already works with a few third-party services, like Yelp and Bing, but it hasn't been widely available to developers since Apple acquired it in 2010. Prior to the acquisition, Siri worked with many third-party services. (Some of the original Siri team is now working on a new AI assistant called Viv, which will also work with third parties like Uber.)
Apple is also reportedly working on a new speaker that allows people to use voice commands to play music and control HomeKit-enabled smart home devices, like lights, locks and thermostats. It’s unclear if the speaker will also be unveiled at WWDC in June, as Apple typically reserves new hardware for other events.
Though the new smart speaker sounds a lot like Amazon's Echo and Google's recently unveiled Google Home, Apple's device predates both, according to The Information's sources.
The report is just the latest sign that Apple has big plans for Siri next month. Earlier reports have suggested Apple will bring Siri to the Mac and — in what could very well be a hint of a Siri-themed WWDC — the company used Siri to reveal the dates of this year’s developer conference.
The MacBook Pro is reportedly getting a major redesign.
Ming-Chi Kuo, a normally reliable Apple analyst at KGI Securities, has put out a new research note claiming that Apple is planning to incorporate a second screen into the high-end Apple laptop, 9to5Mac reports.
This OLED screen would be touch-sensitive and sit above the keyboard — replacing the current function buttons (F1, F2, F3, etc.).
It could provide more flexibility and room for customisation for Apple and for users, who could potentially add shortcuts and apps they frequently use to the screen.
The report has also been corroborated by Mark Gurman, a reporter at 9to5Mac who has an extremely good track record of breaking news about upcoming Apple products.
"Source confirms Apple prepping new MacBook Pro with 2nd display for functions above the keyboard, Touch ID/Apple Pay," Gurman wrote on Twitter.
Pete Markham/Flickr (CC): Here's where that second screen will apparently sit.
What else is coming to the new MacBook Pros? They are also said to be "thinner and lighter," with 9to5Mac reporting that they will take "design cues" from the 12-inch MacBook that Apple first introduced in May 2015 in an overhaul of the traditional MacBook line.
They will also reportedly have Touch ID support — suggesting that you will be able to unlock your MacBook and authorise purchases using your fingerprint, as you already can on the iPhone.
The MacBook Pro line has gone largely unchanged (apart from internal upgrades) since it gained Retina displays in 2012 and phased out optical drives.
Both the 13-inch and 15-inch MacBook Pros will apparently be updated with the new design, which is expected to launch in the fourth quarter of 2016.
Maurizio Pesce/Flickr (CC): The MacBook Pro will reportedly take "design cues" from the newly redesigned MacBook.
If it’s true, we may see more leaks and rumours in the weeks and months ahead — but don’t expect any confirmation from Apple. The company never comments on rumoured products before they are officially announced.
Imagine if, throughout your day, you could know exactly what your body chemistry was up to. More specifically, imagine if the information from your body could go instantly to your doctor, who could diagnose what your body was doing or what was wrong.
It's nearly here. Today at CES 2016, a company called Profusa demonstrated Lumee, a wearable biointegrated sensor that allows for long-term continuous monitoring of your body chemistry. The device provides actionable data on your body's key chemistry in one continuous data stream, which could change the way we monitor our health.
Lumee, a biointegrated wearable sensor.
“In between annual physicals we really don’t know what’s going on in our body,” said Ben Hwang, Ph.D., CEO, Profusa. “While fitness trackers and other wearables provide insights into our heart rate, respiration and other physical measures, they don’t provide information on the most important aspect of our health: our body’s chemistry. What if there was a better way of knowing how you’re doing — how you’re really doing?”
According to Statista, the digital health market is expected to reach $233.3 billion by 2020, and that market is being led by the mobile health market.
Since the iPhone hit it big in 2007, consumers and physicians alike (52%) have used their smartphones to search for advice, drugs, therapies and more, and 80% of physicians use smartphones and medical apps. With wearables, physicians can now collect long-term, specialized data that's much easier to obtain, and track patient health behaviors over longer periods of time. This has already changed our relationship with our health care providers and their relationships with us.
“Profusa’s Lumee is a bold attempt at one of the holy grails of personalized medicine: continuous, real-time, non-invasive glucose and oxygen monitoring. Its applications are vast,” said Ryan Bethencourt, program director and venture partner at IndieBio, a biotech accelerator in San Francisco. “From Type 1 and Type 2 diabetes monitoring through to fitness and finding optimal training patterns for your body, with data that’s currently impossible to acquire continuously any other way. I’m rarely this optimistic about a new medical device, especially one that will require implantation approval from the FDA, but in this case I think the optical biosensor technology and device design warrant the optimism.”
This is why Profusa hopes its tiny (3-5 mm) bioengineered biosensors will enable real-time detection of the body's unique chemistry and give greater insight into a person's overall health status. Dr. Hwang believes Lumee can be applied not only to consumer health and wellness but also to the management of chronic diseases like peripheral artery disease (PAD), diabetes and chronic obstructive pulmonary disease (COPD).
“You’re never too old to reinvent yourself.”
Design is no longer taking a back seat to business. The two are inextricably linked, as evidenced at almost every major technology company in existence today. This week at Semi Permanent, Australia's premier arts and design conference, some of the world's best design talent will take the stage to talk about what makes design so integral to their success. We asked seven leaders at some of the biggest companies how design impacts their business; here's what they had to say.

http://www.wired.com/2016/05/design-shaping-future-tech-giants-like-netflix
Hyperloop One
If you're like most people, you probably find air travel to be a stressful experience.
There’s the commute to the airport, the long lines to check your bag, the security check, and then once you’re finally on the plane, there’s the tight squeeze of sitting for several hours with barely any legroom.
And yet, air travel is really our only option for traveling hundreds of miles quickly. But the Hyperloop could change that.
The Hyperloop is a tubular transport system that carries passengers in capsules at speeds reaching more than 700 miles per hour. Tesla and SpaceX CEO Elon Musk first proposed the idea in a white paper published in 2013 and made his research public so others could pursue developing the concept. The LA-based startup Hyperloop One is doing just that.
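For a rough sense of what 700 mph buys you, here's a back-of-the-envelope comparison in Python. The distance and speeds are illustrative assumptions (roughly the Los Angeles-to-San Francisco corridor from Musk's white paper and a typical jet cruising speed), and acceleration, boarding and airport overhead are ignored:

```python
# Rough travel-time comparison for the ~380-mile LA-San Francisco corridor.
# Speeds are simplifying assumptions, not official figures.
DISTANCE_MILES = 380

HYPERLOOP_MPH = 700   # proposed top speed
AIRLINER_MPH = 500    # typical jet cruising speed

def travel_minutes(distance_miles: float, speed_mph: float) -> float:
    """Return point-to-point travel time in minutes at a constant speed."""
    return distance_miles / speed_mph * 60

print(f"Hyperloop: {travel_minutes(DISTANCE_MILES, HYPERLOOP_MPH):.0f} min")
print(f"Airliner:  {travel_minutes(DISTANCE_MILES, AIRLINER_MPH):.0f} min")
# ~33 min vs. ~46 min in the air alone; the bigger savings come from
# skipping the airport queues entirely, which is Hyperloop One's pitch.
```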
But Hyperloop One doesn’t just want to build a system that is as fast as a plane. The company wants to create an entirely new travel experience, one that is a lot less stressful and a lot more convenient.
"It's not about getting somewhere, it's about being somewhere. We're not trying to optimize the transportation experience, we are trying to eliminate it," Brogan BamBrogan, Hyperloop One's co-founder and chief technology officer, said at a company event earlier this month.
How exactly does it plan on doing this?
BamBrogan shared with Tech Insider four ways the Hyperloop will revolutionize all aspects of transportation.
http://www.businessinsider.com/4-ways-the-hyperloop-will-change-how-we-travel-2016-5
Google has announced three new communication apps this week: Spaces, Allo and Duo. That’s in addition to the three it already has. To understand why it’s doing this, and why it’ll do it again, we only need to look to its past.
Twelve years ago, Google began its shift from being "just" the world's most popular search engine to something much more: It released Gmail. Soon, the company was offering several options for communication. By 2009, Google users had a pretty robust set of tools at their disposal: Gmail for email, Talk for real-time text and voice chats, Voice for VoIP calling, and Android to facilitate everything else. Unfortunately, this simple delineation would quickly disappear as the company launched more and more services.
Google Wave was the first addition. Announced in mid-2009, it mashed together elements of bulletin boards, instant messaging and collaborative editing to pretty awesome effect. It grew a small but fervent community — I was a big fan — until Google halted development.
Then came Buzz. Launched in 2010, it was Google’s first attempt at a bona fide social network. It failed miserably, not least due to complaints about the way Google forced it upon users and some valid privacy concerns. Although neither Wave nor Buzz really competed with what the company was already offering, that would change when Google launched its next attempt at a social network, Google+.
In addition to standard social networking, Google+ also had two features that facilitated direct communication with individuals and groups: Hangouts and Huddle. Not to be confused with the current app, Hangouts at the time offered multiuser video chat for people in the same Circle. Huddle, on the other hand, was an instant messaging app for talking with other Google+ users.
Huddle would soon become Google+ Messenger, offering the same functionality as Google Talk, while Hangouts would expand to seriously encroach on Google Voice. Within a year, Google had added the ability to make „audio-only“ calls by inviting users to join Hangouts over a regular phone line.
Google now had two apps for everything, coupled with the problem that many users — even on its Android platform — were still using SMS to communicate on the go. It began work to rectify this and unify its disparate platforms. In 2013 we got an all-new Hangouts, available cross-platform and on the web. It merged the functionality of Hangouts and Messenger, and it also replaced Talk within Gmail if you opted to upgrade. Voice was still out in the cold and SMS wasn’t integrated, but the company was moving in the right direction.
In late 2013, Google added SMS to Hangouts, and in Android 4.4 it replaced Messaging as the OS default for texting. By Oct. 2014 Google had integrated VoIP into Hangouts as well. It finally had one app for everything.
You could assert that Hangouts was a better app because of the confusing mess that preceded it. Google tried lots of things and put the best elements from all of its offerings into a single app.
That arguably should have been the end of the story, but it’s not. For whatever reason — probably because it figured out that a lot of Android users didn’t use Hangouts — Google released another app in Nov. 2014 called Messenger. This Messenger had nothing to do with Google+ but instead was a simple app focused on SMS and MMS. Hangouts could and can still handle your texts, but Messenger is now standard on Nexus phones and can be installed on any Android phone from the Play Store. This confusing muddle means that if you have, say, a new flagship Samsung phone, you’ll have two apps capable of handling your SMS (Samsung’s app and Hangouts), with the possibility of adding a third with Messenger.
Still, SMS isn’t exactly a burning priority for most people, and Hangouts, for the most part, has been doing a fine job. I can’t say I use it that often — my conversations are mostly through Facebook Messenger and WhatsApp, because that’s where my friends are — but when I do, it’s a pleasant-enough experience. The same can be said for Google+: It’s actually a great social network now, aside from the fact that barely anyone uses it.
That’s the issue that Google faces today and the reason why these new apps exist. More people are using Facebook Messenger than Hangouts. More people are using WhatsApp than Hangouts. More people are using Snapchat than Hangouts. And everyone uses everything other than Google+.
So we now have three new apps from Google, each performing pretty different tasks. The first is Spaces. Think of it as Google+ redux redux redux. It takes the service’s fresh focus on communities and collections and puts it into an app that exists outside the social network. The end result is a mashup of Slack, Pinterest, Facebook Groups and Trello. It’s promising, but, as of writing, it’s very much a work in progress.
Next up is Allo, a reaction to Facebook Messenger and Microsoft’s efforts in the chatbot space. It uses machine learning to streamline conversations with auto replies and also offers a virtual assistant that’ll book restaurants for you, answer questions and do other chatbotty things. Just like Spaces exists outside Google+, Allo exists outside Hangouts. You don’t even need a Google account to sign up, just a phone number — much like how WhatsApp doesn’t require a Facebook account.
Finally we have Duo, which is by far the most focused of the three. It basically duplicates Hangouts' original function: video calling. According to the PR, it makes mobile video calls "fast" and "simple," and it's only going to be available on Android and iOS. Both Duo and Allo also have the distinction of offering end-to-end encryption — although Allo doesn't do so by default — the absence of which has been something privacy advocates have hated about Hangouts.
This summer, when Duo and Allo become available, Google users will be at another confusing impasse. Want to send a message to a friend? Pick from Hangouts, Allo or Messenger. Want to make a video call? Hangouts or Duo. Group chat? Hangouts, Allo or Spaces. It’s not user-friendly, and it’s not sustainable.
Sure, Facebook sustains two chat services (WhatsApp and its own Messenger) just fine, but it bought WhatsApp as a fully independent, hugely popular app and has barely changed a thing. Google doesn’t have that luxury. Instead, it’ll borrow another Facebook play: Test new features on a small audience and integrate. Over the past couple of years Facebook has released Slingshot, Rooms, Paper, Riff, Strobe, Shout, Selfied and Moments. I’m probably missing a few.
All of these apps were essentially built around a single feature: private chats, ephemeral messaging, a prettier news feed, selfies, etc. The vast majority won’t get traction on their own, but their features might prove useful enough to fold into the main Facebook and Messenger apps. And if one of them takes off, no problem, you’ve got another successful app.
This has to be Google’s strategy for Allo, Duo and Spaces. We don’t know what Google’s communication offerings will look like at the end of this year, let alone 2017. But chances are that Google will continue to float new ideas before eventually merging the best of them into a single, coherent application, as it did with Hangouts. And then it’ll start the process again. In the meantime, Google will spend money developing x number of duplicate apps, and users will have to deal with a confusing mess of applications on their home screens.
http://www.engadget.com/2016/05/19/why-google-cant-stop-making-messaging-apps/

BEFORE THE INVENTION of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.
Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.
The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace.
This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded.
Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.
In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)
But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.
Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
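To make the contrast concrete, here is a minimal sketch in Python using scikit-learn. The two-number "features" and the cat/fox labels are synthetic stand-ins for real photos (a production system would train a deep network on raw pixels), but the workflow is the point: you fit on examples rather than writing rules.

```python
# Minimal sketch of "training, not programming": no rule ever says
# "look for whiskers" -- the model infers the boundary from examples.
# Features and labels here are synthetic stand-ins for real photos.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each image has been reduced to two measurements, e.g. ear
# roundness and snout length. Cats cluster in one region, foxes in
# another, with some overlap (the misclassified cases).
cats = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(500, 2))
foxes = rng.normal(loc=[0.6, 0.7], scale=0.1, size=(500, 2))

X = np.vstack([cats, foxes])
y = np.array([0] * 500 + [1] * 500)  # 0 = cat, 1 = fox

# "Coaching" = fitting: show examples, let the network adjust itself.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

print("label for a cat-like sample:", model.predict([[0.32, 0.18]])[0])
# If it keeps misclassifying foxes as cats, you gather more fox
# examples and retrain -- you never edit an explicit rule.
```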
This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”
But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.
If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.
ANDY RUBIN IS an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”
Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.
Google is releasing a new ecosystem to put virtual reality into the hands of everyone.
Today at Google I/O, the company revealed a new virtual reality standard that may finally bring VR to the masses. It's called Daydream, and it's an open-source headset and motion controller design that's compatible with souped-up Android phones. Daydream will arrive this September at a price that hasn't been announced but likely won't be steep.
If Google pulls it off, Daydream will be both cheaper and easier to use than its fancy VR counterparts, because Google has reimagined Android software to work in VR—and every Android phone of the future could be built VR-ready at its core.

When you think about VR, what’s the metaphor you use? The Matrix? Lawnmower Man? It’s all dark imagery of headsets, body suits, black metal and plastic, like someone designed a commando knife for your face, then tethered it to a 1980s PC with more wires than your surround sound home theater system. It became the HTC Vive or Oculus Rift.
That’s one end of the spectrum. On the other, you have Google Cardboard. It’s literally the prize from a cereal box, stuck to your face. It’s cheap, fun, and inherently kind of crap. Nobody wants to unwind from a stressful day by climbing into a cardboard cube.
With Daydream, Google has landed on a happy medium. It’s a headset that’s built with soft materials like fabric. It takes only a moment to pop your phone inside, then it clasps shut like the lever on a self-corking bottle, and you’re in VR. By swinging what looks like a TV remote, you can do things like grab objects, flip pancakes, or go fishing in the virtual world.
In Daydream, you can walk the sidewalks of Paris with Streetview or watch YouTube clips on an IMAX screen, and it probably won’t be that expensive (we’d ballpark $100 or less, given the comparative price of the Samsung Gear VR).
How? Because Google created the Daydream headset and controller as a reference spec that’s open for any manufacturer to make—and potentially even compete with each other to drive down the price. And Google being as influential as it is, the company convinced Android phone manufacturers to build their phones differently. So your next Android phone may double as one of the best VR headsets in the world—one that you may actually want to use.

But why will Daydream be any better than the mobile VR offerings of Cardboard, or Samsung’s (admittedly superb) Gear VR? In short, it’s the software, and it’s the hardware.
On the software side, Google has built Android N (that’s the next version of Android coming out this September) to accommodate VR. Developers can code their software to take full advantage of the phone’s processing cores to push the sorts of specs you need for VR, like high frame rates. For users, Android will be designed to feel welcoming to people in VR. Upon putting on the headset, Android users will enter a new app called Daydream Home that looks like a virtual 3-D environment, crossed with an app manager. Android N will also support several VR-enabled core apps on the phone. That means you can buy items inside a VR version of Google Play, or use Streetview and YouTube in VR. Right now, if you use a VR headset on an Android phone, you feel sequestered to a select few apps. Android N seems to invite Daydream VR as an alternate way of using the phone itself.
On the hardware side, Google has convinced phone manufacturers to build what are being called "Daydream Ready" phones. The list of partners is long and impressive, including Samsung, HTC, Huawei, Xiaomi and LG. Google doesn't go into a lot of detail on what constitutes a Daydream Ready phone, but they appear to be certified to share basic performance specs (processors, GPUs and RAM), a few extra sensors, and similarly designed screens that can slip into a special set of lenses to transport you into high-performance VR. (Samsung's Gear VR works so well because it has extra sensors inside. Daydream moves all of that technology into the phone itself, so many VR experiences could perform just as well with a super simple, lens-only, Cardboard-style headset.)
How good will the Daydream experience be? It’s hard to know without actually trying it, and Google isn’t offering demos at I/O. Daydream still won’t have the full six-axis tracking that the highest-end headsets, the Oculus Rift and HTC Vive, do. That means in Daydream, you can still look around up, down, and in a 360-degree circle, but when you poke your head forward or lean back, your perspective doesn’t change. It’s a compromise of immersion, but from my experience, VR can still leave you in awe without all six axes involved.
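To illustrate that trade-off, here's a minimal Python sketch of orientation-only (three-degrees-of-freedom) head tracking. The function names are illustrative, not Google's actual API: the headset feeds the head's rotation into the virtual camera but never samples its position, which is why leaning doesn't move your viewpoint.

```python
# Orientation-only (3-DoF) tracking vs. full positional (6-DoF) tracking,
# sketched with a plain rotation matrix. Names are illustrative; this is
# not Daydream's actual SDK.
import numpy as np

def yaw_matrix(degrees: float) -> np.ndarray:
    """Rotation about the vertical axis (turning your head left/right)."""
    r = np.radians(degrees)
    return np.array([[ np.cos(r), 0, np.sin(r)],
                     [ 0,         1, 0        ],
                     [-np.sin(r), 0, np.cos(r)]])

def view_direction(head_yaw_deg: float) -> np.ndarray:
    forward = np.array([0.0, 0.0, -1.0])  # camera looks down -z
    return yaw_matrix(head_yaw_deg) @ forward

# A 3-DoF headset feeds rotation into the camera...
print(view_direction(45.0))   # turning your head changes where you look

# ...but head *position* is simply never sampled, so leaning forward
# moves your real head while the virtual camera stays put.
head_position = np.array([0.0, 0.0, 0.10])  # leaned 10 cm forward
camera_position = np.zeros(3)               # 3-DoF: translation ignored
# A 6-DoF rig (Rift, Vive) would set camera_position from head_position.
```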

Alongside the Daydream headset, Google also introduced what looks like a smaller version of Nintendo's Wiimote controller. The company isn't sharing much in terms of technical specs, but it appears to be a motion-sensitive remote that enables all those gestures you might remember from the Nintendo Wii. (Tennis, anyone?) It also features a touchpad on top, so you can flick or swipe.
The importance of this little remote to the future of VR can't be overstated. It's Google's attempt to bring control parity to the mobile VR industry, all via an approachable bit of industrial design that shouldn't freak people out the way more elaborate VR rigs can. And Google seems to want its headset and remote to feel comfortable and intuitive above all else.
Thus far, mobile VR has relied entirely on an aim-your-head, tap-the-button-on-your-temple control scheme. That's a literal pain in the neck after about 20 minutes. There has been limited support for gamepads and other controllers, but the problem is that, with countless hardware manufacturers developing their own weird configurations, there's no baseline for app developers to design to. So even an app that technically supports a gamepad might not play very well, because the tiniest bits of finesse with an analog stick are lost to a developer coding for 10,000 different possible controllers. Meanwhile, Google has created one remote to rule them all, as the sketch below illustrates.
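Here's an illustrative Python sketch of why a single controller baseline matters (the class and field names are hypothetical, not Google's API): with one guaranteed input surface, app logic is written once instead of once per third-party gamepad.

```python
# Illustrative sketch (not Google's SDK): with a single standard remote,
# every app can assume the same inputs exist, instead of maintaining one
# mapping per third-party gamepad.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DaydreamRemote:
    """The one baseline every app can target. Fields are hypothetical."""
    orientation: Tuple[float, float, float]   # yaw, pitch, roll in degrees
    touchpad: Optional[Tuple[float, float]]   # (x, y) in [0, 1], None if untouched
    app_button: bool
    home_button: bool

def handle_input(remote: DaydreamRemote) -> str:
    """App logic written once against the standard remote."""
    if remote.touchpad is not None and remote.touchpad[1] < 0.2:
        return "swipe up"
    if remote.app_button:
        return "select"
    return "idle"

# Contrast: supporting N arbitrary gamepads means N mappings like this,
# each with different sticks, buttons and dead zones.
print(handle_input(DaydreamRemote((10.0, 0.0, 0.0), (0.5, 0.1), False, False)))
```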
Google could be successful because the company is really thinking through the whole ecosystem of VR—hardware, apps, the UX, and even the phones that could power a mobile VR revolution. However, Google still faces one big hurdle: Google. The company's broad strategy makes a lot of sense, but that doesn't change the fact that it has an unsteady track record when stepping into the world of hardware. There's no guarantee that any piece of hardware designed by Google will be a hit. Google's own Nexus smartphones (technically made by third parties) are not the most popular Android phones, and Google abandoned projects like the Nexus Q and Google TV for lack of interest. If Daydream flops out of the gate, what then?
But I can't help but consider the brass tacks: Five million Google Cardboard headsets have been sold to date. One million people are using Samsung's Gear VR. These numbers are respectable in a world that's only trending more in the direction of mobile. Meanwhile, there are 1.5 billion active Android phones in the world. They can't, and won't, support Daydream today. The next 1.5 billion, however? If the standards are in place to make the experience both decent and affordable, why not? With Daydream, Google just gave us a VR standard that could unite the mobile VR world. And increasingly, it's looking like the mobile VR world may be the only one that matters.
All Images: courtesy Google
https://www.fastcodesign.com/3059928/daydream-googles-ambitious-new-bid-to-bring-vr-to-the-masses