Archiv der Kategorie: Artificial Intelligence

Will IBM be your AI and machine learning platform?

Here’s how IBM got its start in artificial intelligence, and what it brings to the table for your business or organization.

IBM CEO Ginni Rometty and IBM Watson Group Senior VP Mike Rhodin.

Image: IBM

Of all the tech giants throwing their weight behind artificial intelligence (AI) and machine learning, few receive the kind of attention garnered by IBM. After its seminal Jeopardy win in 2011, IBM Watson became synonymous with technologies such as cognitive computing and AI.

Upon losing to Watson, former Jeopardy champion Ken Jennings famously wrote "I, for one, welcome our new computer overlords" under one of his responses. All of a sudden, Watson was a household name, igniting conversations about what could be accomplished with AI.

While Watson is a major part of IBM’s approach to AI solutions, it’s only a piece of the puzzle. Here’s a deeper look at the bigger picture, so businesses can determine if IBM is the right AI vendor for their needs.

The history

IBM Research, the company's research division, dates back to 1945, when IBM opened the Watson Scientific Computing Laboratory at Columbia University. IBM's work in AI began in the 1950s, according to its website. Around that time, an IBM employee named Arthur Samuel wrote a self-learning program for playing checkers; he would later be recognized as a pioneer in AI and machine learning.

In the 1970s, IBM built its first robot, and it advanced its work in the field in the 1980s with the IBM RS 1. In the 1990s, IBM researcher Gerry Tesauro used reinforcement learning (RL) to create a self-learning backgammon program. Then, in 1997, IBM's Deep Blue computer famously beat world chess champion Garry Kasparov.

The company's development of actual AI products began more recently. Mike Gualtieri of Forrester Research said that IBM's journey toward AI solutions began in 2009, when it acquired two companies: ILOG and SPSS. ILOG made a business rules engine (what used to be called an expert system), Gualtieri said, while SPSS provided advanced analytics. Both of these purchases helped jumpstart IBM's work on AI solutions for businesses.

Today, IBM's AI initiatives are centered on the Watson platform. IBM offers Watson solutions for analytics and machine learning, data search and discovery, and conversation tools such as chatbots.

The vision

IBM views AI as "augmented intelligence," said Guru Banavar, vice president and chief science officer for cognitive computing at IBM Research. To take that concept a little further, 451 Research analyst Nick Patience explained augmented intelligence as "AI — and machine learning in particular — acting as a force multiplier for humans."

Currently, Banavar said, there are "thousands" of engineers working on the Watson platform. At a high level, the team is split into two distinct camps. On one end of the spectrum is a group working on "very concrete, commercial development and deployment," which usually happens on a weekly, monthly, or quarterly basis.

"Then, at the other end of the spectrum, we have teams of people that are working on advanced new technologies — some of which are being invented — all the way up to mathematicians who are developing the underlying techniques for them," Banavar said.

After Watson won Jeopardy, Banavar said the team at IBM was focused on building custom systems for specific clients or industry niches. However, they recently had to make a conscious strategic decision to move away from that model to focus on APIs.

Banavar said that IBM realized it wouldn't be possible to build out all of the applications it wanted to with its existing strategy. So, it turned some of its capabilities into a platform with open APIs, "in order to attract and nurture a larger ecosystem of developers that can build many applications that IBM cannot build by ourselves," Banavar said.
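To give a rough sense of what building against such open APIs looks like for a developer, here is a minimal sketch of calling a hosted language-analysis endpoint over REST. The URL, credentials, request fields, and response shape are hypothetical placeholders for illustration, not IBM's actual Watson API.

```python
# Minimal sketch of calling a hosted language-analysis service over REST.
# The endpoint URL, API key, and request/response fields are hypothetical
# placeholders, not IBM's actual Watson API.
import requests

API_URL = "https://api.example.com/v1/analyze"   # hypothetical endpoint
API_KEY = "your-api-key"                          # issued by the platform

def analyze_text(text):
    """Send text to the service and return its JSON analysis."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text, "features": ["entities", "sentiment"]},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_text("IBM opened parts of Watson as a set of APIs.")
    print(result)
```

The point of the platform model is exactly this: a third-party developer only needs an HTTP client and a key, not access to IBM's internal systems.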

Those APIs are being put to use in areas like retail, finance, law, and even fantasy football. But healthcare is one of the primary focuses for Watson solutions.

"I could see a vision where every hospital, every clinical group, had this Watson service. It becomes as essential as an X-ray, as essential as an MRI," Gualtieri said. "So, I think that's their vision. They're putting a lot against that."

Strengths

To understand whether or not IBM would be a good fit for your organization, you must weigh your company’s needs against IBM’s strengths. On the technical side of things, Banavar said, these strengths start with Watson’s language capabilities.

"Watson has image processing capabilities, speech processing capabilities, regular numerical data analytics capabilities across the board — we have the entire spectrum," Banavar said. "But, if you ask me what is a really unique, and probably the most advanced, capability in Watson, it is language processing."

When it comes to business, IBM’s AI strength comes from three key elements: IBM Research, its acquisition prowess, and its consultants.

IBM Research may not always produce a breakthrough, but it does give the company a distinctive edge, said Gualtieri. "The advantage of that is that the largest companies in the world — who IBM wants to sell to — want that edge," he added.

The ability to acquire the right companies to broaden its portfolio of offerings is a key differentiator for IBM. Banavar noted that the company has also been leveraging open-source libraries and toolkits to make use of new techniques in neural networks, word embeddings, and more.

In addition to its research prowess and acquisition budget, IBM has a large network of consultants. According to Patience, that is key, "because a lot of the early machine learning opportunities involve taking enabling technologies such as machine learning algorithms and turning them into enhanced business processes and applications; something IBM understands well."

Challenges

One of the biggest challenges facing IBM is managing the expectations that come with terms like cognitive computing and AI. This is compounded by the public-facing nature of Watson, especially in the wake of its Jeopardy win, as well as broader confusion about what AI can actually do.

"Everyone thinks that we're on the verge of Star Trek, like next week," Gualtieri said. So, IBM must present a grand, transformative vision of the future of AI, but it also has to keep expectations in check so customers don't regret moving forward, Gualtieri noted.

On the question of safety and ethics, many would share Gualtieri's view that the technology "is not even close to getting to the point where ethical issues are really a serious concern." However, Banavar said that ethical challenges are still something the IBM team must consider.

The first crucial issue that must be addressed, Banavar said, is the idea of explainability. If a doctor or financial advisor uses Watson to make a decision, for example, they must be able to understand why Watson chose a particular solution or set of options.

The other ethical consideration is bias. With machine learning systems, the models are built with training data — but the data has to faithfully represent what you’re trying to model, or it could be biased, Banavar said. Because of that, selecting the proper training data set is an ethical decision of the utmost importance. This is made even more important by how broad a potential impact Banavar sees for AI technologies.
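As a concrete illustration of that point, here is a small sketch of one basic check a team might run before training: comparing how often each group appears in the training data against the share it should have in the population being modeled. The group labels, reference shares, and tolerance are invented for the example, not an IBM procedure.

```python
# Sketch: flag groups that are under- or over-represented in a training set
# relative to the population it is supposed to model. Group labels and the
# reference shares are invented for illustration.
from collections import Counter

def representation_report(samples, reference_shares, tolerance=0.05):
    """Compare each group's share in `samples` to its reference share."""
    counts = Counter(sample["group"] for sample in samples)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

training_samples = [{"group": "A"}] * 800 + [{"group": "B"}] * 200
print(representation_report(training_samples, {"A": 0.6, "B": 0.4}))
# Group A is over-represented and group B under-represented, so both are flagged.
```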

"At the end of the day, I do think that cognitive computing is necessary for us to solve the world's big problems," Banavar said.

http://www.zdnet.com/article/should-ibm-be-your-ai-and-machine-learning-platform

Samsung Plans To Give Galaxy S8 An AI Digital Assistant

All the cool companies have them: digital assistants. Apple has Siri, Microsoft has Cortana, and Google has the cleverly named Google Assistant. Now, Samsung plans to bring its own iteration of a virtual assistant to the Galaxy S8 next spring, according to a new report from Reuters.

The assistant will be based on work by Viv Labs, a San Jose-based AI company that Samsung acquired this October (the move immediately fueled speculation that Samsung was moving into the AI space). The founders of Viv Labs already have a strong track record in the field as the creators of Siri, which Apple bought in 2010.

Samsung appears to be tapping into Viv’s existing strengths rather than aiming to revamp the platform. One of Viv’s hallmarks is that it is designed to be a one-stop-shop that works seamlessly with third-party services. “Developers can attach and upload services to our agent,” Samsung Executive Vice President Rhee In-jong said during a briefing, according to Reuters. “Even if Samsung doesn’t do anything on its own, the more services that get attached the smarter this agent will get, learn more new services and provide them to end-users with ease.”

If the digital assistant is a hit, it could help Samsung make up for its financial losses over the Galaxy Note 7 recall, which is projected to cost the company at least $5.4 billion. It could also rebuild consumer confidence after the Note 7 debacle and, more recently, a recall of a Samsung top-loading washing machine due to “impact injuries.”

But the company is entering a crowded market. Apple paved the way with Siri, though its early lead is shrinking after the launch of Google's Assistant, which can tap into Google's well-established knowledge graph and search capabilities. And there's always Amazon's Alexa, which already has a home in the Echo, Echo Dot, and Tap devices.


http://www.forbes.com/sites/shelbycarpenter/2016/11/06/samsung-plan-galaxy-s8-ai-digital-assistant

MIT’s Moral Machine

It might save your life: MIT’s Moral Machine asks you to answer moral dilemmas

 

We humans err, and err often. If it is not a small mistake like leaving the keys in the fridge, then it is a deadly one like leaving the oven on all day. We tend to be reckless, forgetful, overconfident, easily distracted — dangerous traits when steering a two-ton metal machine across lanes at 70 mph. Four of the top five causes of car crashes are the result of human error.

Computers, on the other hand, have purely pragmatic minds. They sense data and react in programmed, calculated ways. Self-driving cars already seem to be safer than humans behind the wheel. The rate of progress in artificial intelligence over the past few years has some experts claiming that human-driven cars could be made illegal by 2030.

“Machine intelligence may have to deal with situations where someone has to die so someone else can live.”

But, as most drivers know, driving can require split-second decisions with no obvious right answer. A squirrel darts into the road — do you swerve and risk hitting other cars, or drive straight and hope the squirrel survives? How would you react if a dog ran into the road? Or a criminal? Or a child? Which lives are worth risking? These questions are being asked by teams of researchers around the world. Now they are looking to you for answers.

“Self-driving cars are now practically inevitable,” Massachusetts Institute of Technology graduate student and research assistant Sohan Dsouza told Digital Trends. “That is a good thing, generally, because they would help save countless lives now being lost daily due to human driver error and can offer independent mobility to countless others who cannot drive.”

To that end, Dsouza, Edmond Awad, and their team at the MIT Media Lab developed Moral Machine, a platform that presents users with moral dilemmas and asks them how a self-driving car should respond. A handful of factors play into each scenario, including the age and gender of the victims, their social status, and whether they are breaking the law. Participants are asked to make decisions in 13 dilemmas. The results are then pooled as crowdsourced data and may one day be used to guide the development of ethical machines. After judging the dilemmas, users can compare their outcomes to others' and even design their own dilemmas for others to answer.
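To make the mechanics concrete, here is a rough sketch of how such a multi-factor dilemma and its crowdsourced answers might be represented and tallied. The field names and factor values are assumptions chosen for illustration, not the Moral Machine's actual data model.

```python
# Sketch of a multi-factor dilemma and a simple tally of crowdsourced answers.
# Field names and factor values are illustrative assumptions, not the actual
# Moral Machine data model.
from collections import Counter

dilemma = {
    "id": 1,
    "options": {
        "swerve":   {"casualties": [{"age": "adult", "lawful": False}]},
        "straight": {"casualties": [{"age": "child", "lawful": True},
                                    {"age": "adult", "lawful": True}]},
    },
}

# Each response records which option a participant judged acceptable.
responses = [
    {"dilemma_id": 1, "choice": "swerve"},
    {"dilemma_id": 1, "choice": "swerve"},
    {"dilemma_id": 1, "choice": "straight"},
]

tally = Counter(r["choice"] for r in responses if r["dilemma_id"] == dilemma["id"])
total = sum(tally.values())
for choice, votes in tally.items():
    print(f"{choice}: {votes / total:.0%} of respondents")
```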

“One of our primary goals is provoking debate among the public,” Dsouza said, “and especially dialogue among users, manufacturers, insurers, and transport authorities.

“Not all crashes can be avoided, and the possibility remains that machine intelligence piloting vehicles may have to deal with situations where someone has to die so someone else can live — rather like the classic philosophical thought experiment known as the trolley problem.”

The trolley problem has been pondered for nearly 50 years. In it, a runaway trolley is headed toward five people on the track. You stand at a switch that can divert the trolley onto another set of tracks, where it will hit only one person. Would you intervene or do nothing?

“There are very few experiment-based studies regarding this possibility,” Dsouza said. “Hence, we needed to create a platform that would be able to generate large numbers of multi-factor scenarios and present them to users in an easy-to-understand, easy-to-use, and engaging way, so as to build a model of how people perceive the morality of machine-made decisions.”

“One of our primary goals is provoking debate among the public.”

Moral Machine has gathered answers on more than 11 million scenarios so far. Although the team has yet to perform a deep analysis, they are noticing regional trends that hint at the rocky road ahead. “On average, respondents from western countries place a relatively higher value on minimizing the number of overall casualties — that is, they approve more utilitarian choices — compared to respondents from eastern countries,” Dsouza said.
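One hedged sketch of how a regional tendency like that could be quantified from the pooled answers: score each response as utilitarian when it picks the option with fewer total casualties, then average by region. The region labels and records below are invented for illustration; they are not the study's data or method.

```python
# Sketch: average "utilitarian" choice rate per region, where a choice counts
# as utilitarian when it minimizes total casualties. Records are invented.
from collections import defaultdict

responses = [
    # each record: respondent region, casualties if chosen, casualties if not chosen
    {"region": "west", "chosen_casualties": 1, "other_casualties": 3},
    {"region": "west", "chosen_casualties": 2, "other_casualties": 1},
    {"region": "east", "chosen_casualties": 3, "other_casualties": 1},
    {"region": "east", "chosen_casualties": 1, "other_casualties": 2},
]

scores = defaultdict(list)
for r in responses:
    utilitarian = r["chosen_casualties"] < r["other_casualties"]
    scores[r["region"]].append(1 if utilitarian else 0)

for region, values in scores.items():
    print(f"{region}: {sum(values) / len(values):.0%} utilitarian choices")
```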

Revealing these cultural discrepancies fosters debate and dialogue, which is essential to making progress. “We believe we have already made an impact,” Dsouza said. “This dialogue will eventually help the stakeholders in this scene reach an equilibrium of legislation, liability assessment, moral comfort, and public safety.”

http://www.digitaltrends.com/cars/mit-moral-machine/

Google Hits a Samsung Roadblock With New AI Assistant – Viv & Adam Cheyer

Google just debuted a digital assistant, which it hopes to place inside smartphones, watches, cars and every other imaginable internet-connected device. It’s already hit a snag.

The Alphabet division launched new smartphones last week with the artificially intelligent assistant deeply embedded. It also rolled out a speaker with the feature at its core and announced plans to let other companies tie their apps and services to the assistant.

A day later, Samsung, which just announced it was ending production of its problematic Galaxy Note 7 smartphones, said it was acquiring Viv Labs, a startup building its own AI voice-based assistant.

At first, the deal looked like a counter-punch to Samsung rival Apple — Viv is run by the creators of Apple’s Siri assistant. But buying Viv may be more of a problem for Google, because Samsung is the biggest maker of phones running Google’s Android mobile operating system.

Google's strategy is now centered on the assistant, rather than its search engine, because it's a more natural way for people to interact with smartphones and other connected devices. Getting all Android phone makers to put the Google assistant on their devices would get the technology into millions of hands quickly. But Samsung's Viv deal suggests assistants are too important for phone makers to let other companies supply this feature.

Last week, despite the Note 7 crisis, Samsung executive Injong Rhee said the company plans to put Viv’s technology in its smartphones next year and then embed it into other electronics and home appliances. A Samsung representative and a Google spokeswoman declined to comment.

That’s a necessity for Samsung, according to some analysts and industry insiders.

"As AI is becoming more sophisticated and valuable to the consumer, there's no question it will be important for hardware companies," said Kirt McMaster, executive chairman of Cyanogen, a startup that makes Android software. Mr. McMaster, a frequent Google critic, said other Android handset makers will likely follow Samsung's move.

"If you don't have an AI asset, you're not going to have a brain," he added.

Google may already have known that some Android phone makers — known as original equipment manufacturers, or OEMs — were reluctant to embrace its assistant.

"Other OEMs may want to differentiate," Google's Android chief Hiroshi Lockheimer told Bloomberg before the company released its own smartphones. "They may want to do their own thing — their own assistant, for example."

Samsung and Google have sparred in the past over distribution. Google requires Android handset makers to pre-install 11 apps, yet Samsung often puts its own services on its phones. And the South Korean company has released devices that run on its own operating system, called Tizen, not Android.

Viv was frequently on the short list of startups that could help larger tech companies build assistant technology. Founded four years ago by Dag Kittlaus, Adam Cheyer and Chris Brigham, the startup was working on voice technology to handle more complex queries than existing offerings.

While it drummed up considerable attention and investment, Viv has not yet released its product to the public. And some analysts are skeptical of Samsung’s ability to convert the technology into a credible service, given its mixed record with software applications.

"It will be very hard to compete with Google's strength in data and their AI acquisitions," said Jitendra Waral, senior analyst with Bloomberg Intelligence. "Samsung would need to prove that its AI solutions are superior to that of Google's. They are handicapped in this race."

Samsung is also focused on handling the fallout from its exploding Galaxy Note 7 phones, potentially taking management time away from its Viv integration.

But it’s a race Samsung has to join. In recent years, Samsung acquired mobile-payments and connected-device startups to keep up with Apple, Google and Amazon. Digital voice-based assistants may be more important, if they become the main way people interact with devices.

Silicon Valley titans are rushing into the space because of this potential. Amazon is trying to sign up developers for its Alexa voice technology. Apple has recently touted more Siri capabilities and opened the technology to other developers. And now Google, considered the leader in artificial intelligence, is making its own push.

"I don't ever remember a time when every single major consumer tech company — and even enterprise companies — have been singularly focused on an identical strategy," said Tim Tuttle, chief executive officer of MindMeld Inc., a startup working on voice interaction software. "They're all following the exact same playbook."

 

http://adage.com/article/digital/google-hits-a-roadblock-ai-assistant/306244/

Google hired writers from Pixar and The Onion to make Assistant more personable

Google wants its Assistant to be more than just an order-taking robot — so it hired some clever writers from outside the company to help make it happen.

A new story from the Wall Street Journal's Christopher Mims details the advances in artificial intelligence devices like the Amazon Echo and Google's rival product, Home, and how comforting they have become for those who live alone thanks to how personable the AIs are.

For Google, that friendly personality is thanks to a team of writers from Pixar and The Onion who helped make the Assistant — which powers Google’s Home device — sound more like a human and less like a robot, according to the Journal. Google’s eventual goal is to help users build an emotional connection with the Assistant, the Journal reports.

Google unveiled its Assistant-enabled Home device last week, a direct competitor to other AI-powered hardware devices like Amazon’s Echo. The Assistant itself is similar to Alexa, which powers the Echo: It has voice-recognition software, natural language recognition, and it gets smarter over time.

You can ask the Assistant to tell you a joke, give you the weather, or set a timer, but you can also ask it to do things like remember your favorite sports team or the city you live in. Much like other AIs, such as Alexa or Apple's Siri, the Assistant can be equal parts sweet and sassy, which is what helps it seem more relatable and more human. The Assistant lives inside Google Home, but it's also built into Google's new messaging app, Allo, and its new Pixel smartphone.

 

http://www.businessinsider.de/google-assistant-pixar-the-onion-2016-10

Dieter Zetsche Invokes the Transformation of the Auto Industry

Deliberately casual: Daimler CEO Dieter Zetsche broke with the auto industry's traditions in Paris. He modeled leisurewear and proclaimed the new guiding vision of an agile organization.

(Photo: Daimler)

Yet Daimler has so far dragged its feet on alternative drivetrains, and even its latest electric-car concept turned out timid. Can the transformation into a tech company succeed anyway?

Analysis by Joachim Becker

The shoes are part of the staging: when the 63-year-old head of a global corporation walks around in jeans and sneakers, he is usually on vacation or in a midlife crisis. Dieter Zetsche evidently does not want to be written off. But his problem is less private than corporate: the Daimler boss wants to rebuild the flagship of the German auto industry into a tech company.

Before a founders' era can dawn in the "Neckar Valley," he has to clear away some legacy burdens. One of them is the success of the proven business model: despite record sales figures, Zetsche is demanding a radical rethink from his employees. Instead of celebrating gushing revenues, they are supposed to join a revolution. Outcome open.

Until now, Mercedes has had its foot on the brake

What Zetsche announced at the Paris Motor Show is a revolution from the top: "We want not only to drive the transformation of our products, but also to significantly accelerate the transformation of our organization." Until now, the Stuttgart company has kept its foot on the brake, and not only on alternative drivetrains. Shortly after a world tour with hydrogen vehicles, the announced series production was cancelled in 2013. In the same year, it let BMW i take the lead on all-new electric cars. The first electric Smarts with high-temperature cells, in 2007, were little more than research vehicles. Later, Tesla helped out with batteries for the Mercedes B-Class e-cell. Despite this, or precisely because of it, the electrical engineer Zetsche hesitated to bet billions on an uncertain electric future.


Zetsche is no young firebrand like Elon Musk, who presents himself as the new-age guru of an emission-free future. Tesla's success, and above all the speed at which the start-up keeps evolving, nevertheless raises eyebrows on the Daimler board. The electric pioneer scores points with software updates that bring new functions into the car. Despite serious setbacks, such as with Autopilot, Tesla wants to be the first to bring an autonomous car to market.

The production version of the Generation EQ arrives in 2018

At this breakneck pace of technological change there are plenty of unknowns: "Our target corridor for electric sales in 2025 is between 15 and 25 percent. We simply cannot forecast it more precisely than that," admits Mercedes sales chief Ola Källenius. Nevertheless, Daimler is now flipping the switch for its new electric sub-brand EQ. The production version of the Paris show car "Generation EQ" will go on sale from 2018 at the price of a "reasonably equipped Mercedes GLC" (around 60,000 euros). At least nine more pure electric vehicles, from compact cars to supersports models, are to follow by 2025.

Mercedes wants to displace Tesla as the market leader in premium electric vehicles by 2025. But powerful electric cars will already be standard by the end of this decade, and by then they will no longer set a brand apart. That is why the sheet-metal benders are plunging into further adventures: "Many carmakers today want to become mobility providers. That is all well and good. But the transformation of the industry is far more fundamental," the Daimler boss warns.

Daimler's success story is unfolding too slowly

For 130 years the auto industry defined itself through hardware. Daimler is the best example of how difficult the rethink now is: the Stuttgart company did invent flexible one-way car sharing with fully networked Smarts back in 2007, but it took ten years to bring Car2go to two million users. What Daimler sells as a success story ultimately happens too slowly to compete with new rivals. Tailored, automated mobility services could increasingly compete with the sale of private cars in the future. But nobody knows exactly how and when that shift will play out.

So far, carmakers' development departments have been organized around new products. Innovative business models, however, will be just as important. Daimler wants to rethink the technology shift from the customer experience outward: "We expect the car to evolve from a product into the ultimate platform. That is a fundamental change of perspective," says Dieter Zetsche. This platform rests on four pillars: connectivity (Connected), autonomous driving, sharing, and electric mobility. Together, the initial letters spell the word CASE. "We have just founded a new corporate unit with this name to bring these fields together," Zetsche said.

It is still unclear what customers want

Putting the customer at the center is a fine idea. The only problem: hardly any Mercedes buyer has asked for electric cars so far, let alone shown interest in an internet platform for sharing their luxury car with others. Yet that is exactly what Mercedes plans to test in Germany with a sharing platform starting this November.

In Paris, however, Zetsche made clear that no single technology or service will make the difference, but rather a new kind of overall mobility experience: "Each of the CASE pillars has the potential to turn the entire automotive industry on its head. But the real revolution is combining these aspects into one comprehensive, seamless package."

Zetsche's constant balancing act

Old and new, analog and digital, a blazer over faded jeans: as a visionary, Zetsche is constantly balancing between opposites. Long stretches of his keynote at the Paris Motor Show sound like an appeal to his own workforce. The Swiss clockwork as a symbol of reliability and precision in the mechanical age: "that will remain important in the future!" he reassures his employees. A moment later, though, he preaches the credo of the digital age. He likes the idea of an "agile swarm organization," declares the manager with the gray walrus mustache: "As a legally separate organization, CASE will be a perfect starting point for this idea."

The message: dare more Silicon Valley without giving up the strengths of the past. Yet nobody knows what this Driving 2.0 will really look like, let alone how to make money with it. In Paris, at any rate, electric concepts such as the Mercedes Generation EQ and the VW I.D. show the exact opposite of a design revolution. With their timid, monolithic basic shapes they hover somewhere between minivan and crossover. Whatever you do, don't stand out.

The curse of long range

To make these great hopes stand out from the mainstream at least a little, their roofs have been pressed down low. But the trick only works in cleverly photographed images. Anyone who has tried to sit in the rear seats of the VW I.D. or squeeze under the roof rail of the Mercedes EQ sees through the illusion: in production, the reasonably sleek show cars will become leaden high-roof vehicles. That is the curse of long range.

Even in the digital age, physics cannot be outwitted: electric cars with a 500-kilometer range need huge underfloor battery packs on which the passengers sit enthroned. Tesla manages to disguise this jacked-up carriage design fairly well with a flat battery format. Because no other manufacturer uses Panasonic's slim cylindrical cells, the designers of would-be Tesla fighters will have to work hard.

Matthias Müller, too, invokes a "new era"

The irony of this technology shift is that most customers do not need 500 kilometers of battery range at all. So far, hardly anyone has used an electric car as the family's primary vehicle, let alone planned a vacation trip with one. The race for the biggest battery range is therefore aimed not at environmentally conscious pioneers but at comfort-oriented average drivers: filling up once a week is a learned habit. Whatever you do, don't make them change.

In Paris, Matthias Müller also invokes a "new era": "Electric mobility and digital connectivity will become game changers." But even the head of the Volkswagen Group cannot say for certain which rules will apply for a new generation of customers. Perhaps digitally animated interior worlds will create a new brand experience. At the Mercedes stand, Müller spent a long time having the EQ concept's control philosophy explained to him. The high-resolution 3-D landscapes on its giant screen were presumably meant to distract him from the dreariness of the VW I.D.

Source: http://www.sueddeutsche.de/auto/die-zukunft-von-daimler-dieter-zetsche-beschwoert-den-wandel-der-autobranche-1.3193256

Mercedes' Tesla killer is coming in 2019


Mercedes-Benz just made a huge move to take on Tesla.

The German automaker unveiled its all-electric SUV concept at the Paris Motor Show Thursday, and with a competitive price tag and solid range potential, it’s poised to become a big competitor in the EV space.

Called Generation EQ, the SUV concept is expected to have a range of up to 310 miles.


The production version of the SUV is expected to hit the roads in 2019, Dieter Zetsche, the head of the Mercedes-Benz car division, said at the Paris Motor Show press event. Mercedes is calling the SUV unveiled today a "close-to-production concept vehicle."


The car will fall in the same price range as the GLC crossover, which currently starts at $39,150. That's a very competitive price for an electric SUV, considering the Chevy Bolt, an all-electric crossover, will start at $37,500 when it hits showrooms at the end of 2016.


The interior comes with a massive, 24-inch display that shows speed, range, driving data, and navigation information. The display will alert the driver if the car is running low on battery and of nearby charging stations. The steering wheel also comes with touch controls.


The SUV comes with some autonomous features, but not many. Mercedes says the car can automatically adjust the speed and driving dynamics when approaching curves.


The car is part of Mercedes' efforts to ramp up its electric-car offerings. Daimler's chief development officer, Thomas Weber, said in May that Mercedes-Benz was aiming to add four new electric cars to its model range by 2020.


 

http://www.businessinsider.de/mercedes-electric-suv-production-in-2019-photos-2016-9?op=1

A High-Stakes Bet: Turning Google Assistant Into a ‘Star Trek’ Computer

Google's new assistant will be incorporated in new products like Google Home, an Amazon Echo-like talking computer. Credit: Justin Sullivan/Getty Images

Google is one of the most valuable companies in the world, but its future, like that of all tech giants, is clouded by a looming threat. The search company makes virtually all of its money from ads placed on the World Wide Web. But what happens to the cash machine if web search eventually becomes outmoded?

That worry isn’t far-fetched. More of the world’s computing time keeps shifting to smartphones, where apps have supplanted the web. And internet-connected devices that may dominate the next era in tech — smartwatches, home-assistant devices like Amazon’s Echo, or virtual reality machines like Oculus Rift — are likely to be free of the web, and may even lack screens.

But if Google is worried, it isn’t showing it. The company has long been working on a not-so-secret weapon to avert its potential irrelevance. Google has shoveled vast financial and engineering resources into a collection of data mining and artificial intelligence systems, from speech recognition to machine translation to computer vision.

Now Google is melding these advances into a new product, a technology whose ultimate aim is something like the talking computer on "Star Trek." It is a high-stakes bet: If this new tech fails, it could signal the beginning of the end of Google's reign over our lives. But if it succeeds, Google could achieve a centrality in human experience unrivaled by any tech product so far.

The company calls its version of this all-powerful machine the Google Assistant. Today, it resembles other digital helpers you’ve likely used — things like Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana. It currently lives in Google’s new messaging app, Allo, and will also be featured in a few new gadgets the company plans to unveil next week, including a new smartphone and an Amazon Echo-like talking computer called Google Home.

But Google has much grander aims for the Assistant. People at the company say that Sundar Pichai, who took over as Google’s chief executive last year after Google was split into a conglomerate called Alphabet, has bet the company on the new tech. Mr. Pichai declined an interview request for this column, but at Google’s developer conference in May, he called the development of the Assistant “a seminal moment” for the company.

If the Assistant or something like it does not take off, Google’s status as the chief navigator of our digital lives could be superseded by a half-dozen other assistants. You might interact with Alexa in your house, with Siri on your phone, and with Facebook Messenger’s chatbot when you’re out and about. Google’s search engine (not to mention its Android operating system, Chrome, Gmail, Maps and other properties) would remain popular and lucrative, but possibly far less so than they are today.

That’s the threat. But the Assistant also presents Google with a delicious opportunity. The “Star Trek” computer is no metaphor. The company believes that machine learning has advanced to the point that it is now possible to build a predictive, all-knowing, superhelpful and conversational assistant of the sort that Captain Kirk relied on to navigate the stars.

Credit: Stuart Goldenberg

The Assistant, in Google’s most far-out vision, would always be around, wherever you are, on whatever device you use, to handle just about any informational task.

Consider this common situation: Today, to book a trip, you usually have to load up several travel sites, consult your calendar and coordinate with your family and your colleagues. If the Assistant works as well as Google hopes, all you might have to do is say, “O.K., Google, I need to go to Hong Kong next week. Take care of it.”

Based on your interactions with it over the years, Google would know your habits, your preferences and your budget. It would know your friends, family and your colleagues. With access to so much data, and with the computational power to interpret all of it, the Assistant most likely could handle the entire task; if it couldn’t, it would simply ask you to fill in the gaps, the way a human assistant might.
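The "fill in the gaps" behavior described here is essentially what conversational systems call slot filling: the assistant keeps a list of required pieces of information and asks only for the ones it cannot infer. Here is a minimal sketch of that pattern, with invented slot names and a trivial known-facts store standing in for everything Google would infer from your history; it is an illustration of the general technique, not Google's implementation.

```python
# Minimal slot-filling sketch: ask only for the details that cannot be
# inferred from what the assistant already knows. Slot names and the
# known-facts store are invented for illustration.
REQUIRED_SLOTS = ["destination", "departure_date", "budget", "traveler_count"]

known_facts = {            # stands in for history, calendar, and preferences
    "destination": "Hong Kong",
    "departure_date": "next week",
    "budget": "business-trip default",
}

def fill_slots(required, known):
    """Return a complete slot dict, prompting only for missing values."""
    filled = {}
    for slot in required:
        if slot in known:
            filled[slot] = known[slot]
        else:
            filled[slot] = input(f"I still need your {slot.replace('_', ' ')}: ")
    return filled

if __name__ == "__main__":
    booking = fill_slots(REQUIRED_SLOTS, known_facts)
    print("Booking request:", booking)   # only traveler_count had to be asked for
```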

Computers have made a lot of everyday tasks far easier to accomplish, yet they still require a sometimes annoying level of human involvement to get the most out of them. The Assistant’s long-term aim is to eliminate all this busywork.

If it succeeds, it would be the ultimate expression of what Larry Page, Google’s co-founder, once described as the perfect search engine: a machine that “understands exactly what you mean and gives you back exactly what you want.”

At this point, a few readers may be recoiling at the potential invasion of autonomy and privacy that such a machine would necessitate.

The Assistant would involve giving ourselves over to machines more fully. We would trust them not just with our information but increasingly with our decisions. Many people are already freaked out by what Google, Facebook and other tech companies know about us. Would we be willing to hand over even more power to computers?

Those are important questions, but they are also well down the road. For now, the more pressing question for the Assistant is: Will it even work?

Sundar Pichai, Google's chief executive, calls the development of the assistant "a seminal moment" for the company. Credit: Justin Sullivan/Getty Images

Google has technological advantages that suggest it could build a more capable digital assistant than others have accomplished. Many of the innovations that it has built into its search engine — including its knowledge graph database of more than a billion people, places and things, and the 17 years it has spent trying to understand the meaning of web queries — will form the Assistant’s brain.

Google has also been one of the leaders in machine learning, the process that allows computers to discover facts about the world without being explicitly programmed. Machine learning is at the heart of a number of recent advances, including Google Photos’ uncanny capacity to search through your images for arbitrary terms (photos of people hugging, for instance).

“We are in the process of transforming into a machine-learning company,” Jeff Dean, who is in charge of Google Brain, the company’s artificial intelligence project, told me this year. For each problem Google solves this way, it gets better at solving other problems. “It’s a boulder going downhill gathering more momentum as it goes,” Mr. Dean said.

If you use the Assistant today, you’ll see some of these advances. As my colleague Brian X. Chen explained last week, if your friend sends you a picture of his dog on Allo, Google Assistant will not only recognize that it’s a dog, but it will also tell you the breed.

That’s an amazing technological feat. But as Brian pointed out, it’s also pretty useless. Why does your friend care if you know his dog’s a Shih Tzu?

This gets to a deeper difficulty. The search company might have the technical capacity to create the smartest assistant around, but it's not at all clear that it has the prowess to create the friendliest, most charming, or most useful assistant. Google needs to nail not just the Assistant's smarts, but also its personality — a new skill for Google, and one its past forays into social software (Google Plus, anyone?) don't inspire much confidence in.

Then there is the mismatch between Google’s ambitions and Assistant’s current reality. Danny Sullivan, the founding editor of Search Engine Land, told me that so far, he hadn’t noticed the Assistant helping him in any major way.

“When I was trying to book a movie, it didn’t really narrow things down for me,” he said. “And there were some times it was wrong. I asked it to show me my upcoming trip, and it didn’t get that.”

Of course, it’s still early. Mr. Sullivan has high hopes for the Assistant. It would be premature to look at the technology today and get discouraged about its future, especially since Google sees this as a multiyear, perhaps even decade-long project. And especially if Google’s future depends on getting this right.

 

Here’s the electric car Audi is building to take on Tesla

The Audi e-tron quattro concept. Image: Audi

Tesla’s Model S and Model X are soon going to have some serious competition.

Last September, Audi revealed its all-electric e-tron quattro concept at the Frankfurt Motor Show. The SUV, which is slated to go into production by 2018, will have three electric motors, a range of 310 miles on a single charge, and quick charging capabilities.

Here’s a look at some of the features in the e-tron quattro that we hope to see in the production version.

Like the e-tron concept, the upcoming all-electric SUV from Audi will most likely include piloted driving technology.


The e-tron quattro concept has piloted driving technology, which uses radar sensors, a video camera, ultrasonic sensors, and a laser scanner to collect data about the car's environment and create a model of the vehicle's surroundings in real time.

Audi currently has a lot of this tech in its newer vehicles, so it’s likely we will see a more advanced piloted system in the production version of the e-tron quattro.

 

Cameras could replace side view mirrors.


The e-tron quattro has curved displays built into the front section of the doors that let the driver see what is around the car. There's no guarantee we'll see this in the production version, but automakers are beginning to experiment with new kinds of mirror designs.

For example, GM has a digital rearview mirror in the Chevy Bolt and the Cadillac CT6 that uses cameras to stream whatever is behind you.

It will likely be covered in screens.


The e-tron quattro concept features two touch displays in the cockpit: one to the driver's left to control the lights and the piloted driving systems, and one to the right where media and navigation are controlled.

The center console has two more OLED displays for climate control and infotainment.

With its 95 kWh battery, the e-tron quattro has an impressive range of 310 miles on a single charge.


To put that into perspective, Tesla's Model X SUV with all-wheel drive and a 100 kWh battery has a range of 289 miles on a single charge. Audi has already said its range will beat this.
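For context, the quoted figures work out to a noticeable efficiency gap on paper. A quick back-of-the-envelope calculation, using only the ranges and pack sizes cited above and ignoring real-world variables like speed, temperature, and test cycle:

```python
# Back-of-the-envelope efficiency from the figures quoted above:
# rated range divided by battery capacity. Real-world efficiency varies
# with speed, temperature, and test cycle.
cars = {
    "Audi e-tron quattro concept": (310, 95),   # (miles, kWh)
    "Tesla Model X (100 kWh AWD)": (289, 100),
}
for name, (miles, kwh) in cars.items():
    print(f"{name}: {miles / kwh:.2f} miles per kWh")
# Roughly 3.26 vs. 2.89 miles per kWh on paper.
```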

It may be able to fully charge in just 50 minutes.


We know the production version will have quick charging capabilities, but we don’t know exactly how fast it will work. However, we’re hoping it’s in line with the e-tron quattro concept’s charge time.

The concept car has a Combined Charging System (CCS), meaning it can be charged with a DC or AC electrical current. It can fully charge with a DC current outputting 150 kW in just about 50 minutes.
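As a sanity check on that 50-minute figure, here is the idealized arithmetic, assuming the concept's quoted 95 kWh pack charges at a constant 150 kW. Real charging tapers as the battery fills, which is why the quoted time is longer than the raw division suggests.

```python
# Idealized DC fast-charge time for the quoted pack size and charger power.
# Real sessions taper near full, so ~50 minutes is plausible even though
# the constant-power estimate comes out lower.
pack_kwh = 95          # quoted battery capacity
charger_kw = 150       # quoted DC charging power

ideal_minutes = pack_kwh / charger_kw * 60
print(f"Constant-power estimate: {ideal_minutes:.0f} minutes")   # ~38 minutes
```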

 

The e-tron quattro concept is equipped with induction charging technology, so it can be charged wirelessly over a charging plate.


We can’t say if this is a definite feature the production version will have, but our fingers are crossed.

It will have super fast connectivity.


Audi announced at CES this year that it is the first automaker to support the latest standard for mobile communications: LTE Advanced.

LTE Advanced is the latest enhancement to LTE, meaning that it can deliver larger and faster wireless data payloads than 4G LTE. We can almost certainly expect to see the technology integrated into the upcoming production car.

http://www.businessinsider.de/audis-electric-vs-tesla-2016-9?op=1

Self-driving cars are here, but that doesn't mean you can call them 'driverless'

What I imagine I could've been doing on my way to college instead of holding a steering wheel for nine hours. (Not actually me.) Image: Volvo

I went to college nine hours away from home — easily doable in a day’s drive, but tedious nonetheless.

On one trip through the cornfields of Indiana, I remember turning to my friend and wondering why we hadn't figured out cruise control for steering wheels. I had already been cruising at a steady 70 mph for hours with my feet on the floor. Why did I have to touch the steering wheel to keep the car between the lines, too?

Less than six years later, the answer is that I don’t have to touch the steering wheel anymore. Self-driving cars are here, and they’re arriving faster than many predicted.

The pace at which the self-driving car went from myth to reality has caused all sorts of problems, from a talent shortage in the field to a sudden arms race to build the best self-driving car on the market. Uber CEO Travis Kalanick called it "existential" for the company to develop its own driverless car technology.

Yet, there's still a large distinction — and years of development — between the self-driving cars hitting the streets today and the driverless cars that we dream of in the future. Most "driverless" cars today still have a driver in the front seat. Teaching a car how to drive itself (even with a driver on hand) is just the important first step.

Dreams of driverless

It’s hard not to be seduced by the images of driverless cars.

Mercedes-Benz's concept car shows four seats all turned to face each other. Bentley's driverless dream comes with a holographic butler — a future staple for the high-end autonomous car. Rolls-Royce's concept has a two-person couch with a giant TV where the driver normally sits.


Even Larry Page is rumored to be working on a flying car, so we can all finally get one step closer to the "Jetsons" future we've envisioned.

However, what's not acknowledged is just how hard it is to get cars to that point. When I asked Uber's Kalanick what's holding truly driverless cars back, he laughed, because there's just so much — and a lot of it is simply technology that hasn't been developed yet. A self-driving car shouldn't freak out at a four-way intersection or turn off every time it goes over a bridge.

Getting into a self-driving car today feels like having cruise control for the whole car: the autopilot keeps the car's speed steady, stays evenly between the lines, and maintains the proper following distance. The only way to experience a self-driving car right now is to either own a Tesla or live in Pittsburgh and happen to hail a self-driving Uber.

After taking a ride in Otto's self-driving truck, I explained the experience to my 92-year-old grandmother as being like a plane flight: you have a licensed driver who handles takeoff and landing, or in this case getting onto the interstate, but once the way is clear, you just set it to autopilot.

While having "self-driving cars" in the hands of the public is a huge milestone, it's just the beginning of the path to full autonomy.

Truly driverless cars remain years away — but still closer than you think. Ford, for example, plans to roll out its first fully autonomous cars for ride-sharing by 2021. Google is aiming for 2020, and Tesla is planning to make its vehicles part of a car-sharing network once its cars are fully autonomous.

The impacts of that will be widely felt. Merrill Lynch predicted in a 2015 report that driverless taxis like Ubers will make up 43% of new car sales by 2040. The Boston Consulting Group also wrote in a 2015 report that driverless taxi sales are bound to climb: BCG predicts that 23% of global new car sales will come from driverless taxis by 2040, which will result in a decline in vehicle ownership in cities.

Before we get to driverless, though, we need to perfect self-driving. Doing that means putting real self-driving cars on the roads and testing them. That's why they are here and happening now. Driverless will come next.

http://www.businessinsider.com/self-driving-vs-driverless-car-difference-explained-2016-9?IR=T