Archiv der Kategorie: Innovation

Facebook Is Testing A New App Called ‚Phone‘

Source: http://www.forbes.com/sites/parmyolson/2015/03/23/carriers-facebook-phone-app/
Further Reading: http://www.forbes.com

 

Facebook has accidentally leaked information about a new app that it’s testing, called ‘Phone,’ and this news should come as no surprise to anyone who believes Facebook wants to be at the center of how we communicate with the world around us. That includes with all elements of texting (through Messenger and WhatsApp) and increasingly, voice.

The app appears to be some sort of native dialler for Android that shows information about who is calling, and which automatically blocks calls from commonly blocked numbers. A spokesman confirmed to Venture Beat that Facebook was testing the service, after Android Police first posted a screenshots of an install update that should have only been seen by Facebook’s internal network. Thanks to Apple’s closed system it’s unlikely Facebook is even exploring making such an app for iOS.

Why does Facebook want to give its users a native dialler? Facebook has allowed users to make video calls through its desktop client since 2011, and voice calls through Facebook Messenger since early 2013. But both these services require that people on either end of the line are using the same Facebook feature, and the calls can only take place over a mobile carrier data network or WiFi.

A native dialler application would appear to be Facebook’s first service that coordinates with a carrier’s all-important voice network.

(It has yet to be confirmed that ‘Phone’ will filter calls being made to your phone number and not just between Facebook users, but the former seems likely. The video and audio calling features that Facebook already has don’t seem popular enough that users would want a separate app just to block VoIP calls – and most of the calls you want to block are spammers and marketers who managed to get hold of your mobile phone number anyway.)

In essence, Facebook appears to be trying to wedge itself a little further into the relationship between its users and carriers, when users are carrying out one of the most fundamental acts that telcos rely on to make money – making voice calls. That’s a crucial step both symbolically and practically, and the fact that the service is called ‘Phone’ suggests Facebook eventually wants to be part of the phone-calling experience that carriers still dominate.

International carriers like Vodafone and BT Group are still stinging from the huge bite of SMS revenues that WhatsApp, the globally popular messaging service with 700 million active users, took out over the last five years with its free texting service.

The messaging service, which Facebook bought last year, is now rolling out a free voice-calling service. While that won’t involve a native dialler, it looks set to take yet another chunk out of carrier revenues, this time in voice.

Should carriers be worried about Phone? It’s easy to dismiss a new, forthcoming app from Facebook as yet another failed experiment that will get tossed on the same heap where Slingshot, Poke and the Android launcher Facebook Home reside.

Yet even if Facebook’s users don’t fall in love with Phone, and the service doesn’t go much further beyond the testing phase, there’s no question the social network wants to become a more integral hub for our everyday communications. That could eventually mean weaning consumers off their reliance on carriers’ voice and texting services, and driving the role of carriers further down into the ground as “dumb pipes” that transport our data and not too much more.

Wie sicher ist die eSIM (Universal SIM)?

Der Vertrags- und Netzwechsel soll mit der eSIM einfacher werden. Wird es damit auch einfacher, eine Telefonnummer gegen den Willen des Nutzers zu entführen?
Von , Teltarif.de

Ende letzten Jahres führte Apple mit dem iPad Air 2 in den USA und Großbritannien die „Universal SIM“ ein. Mit dieser kann man den Datenvertrag wechseln, ohne die SIM-Karte tauschen zu müssen. Vielmehr werden bei einem Vertragswechsel einfach die neuen Zugangsdaten auf die SIM-Karte programmiert.

Wenn man sie richtig macht, hat eine eSIM für den Kunden einige Vorteile: Der Vertragswechsel geht schneller. So ist denkbar, mal eben online einen Prepaid-Vertrag für ein lokales Netz abzuschließen, wenn man sich im Ausland aufhält, um den teuren Roaming-Entgelten zu entgehen. Für Mobilfunkhändler, insbesondere solche, die Verträge von mehreren Netzbetreibern und/oder Resellern im Angebot haben, vereinfacht sich mit der eSIM auch die Logistik.

Entsprechend verwundert es nicht, dass sich auch die Netzbetreiber hierzulande mit der programmierbaren SIM-Karte, der eSIM, beschäftigen. Während Vodafone dieser derzeit ablehnend gegenübersteht, plant die Deutsche Telekom bereits die Einführung. Der dritte Netzbetreiber, Telefònica, will hingegen erstmal den finalen Standard abwarten, bevor eine Entscheidung getroffen wird.

Wie sicher?

Wie jüngst geschrieben, ist das Prinzip, geheime Schlüssel auf SIM-Karten zu verteilen, so etwas wie das Herz der Kommunikation in den Mobilfunknetzen. Wird die Sicherheit dieser geheimen Schlüssel kompromittiert, ist die Sicherheit der mobilen Kommunikation insgesamt verloren. Dann hört künftig möglicherweise nicht nur die NSA massenhaft Handy-Telefonate mit, sondern auch der neugierige Nachbar oder bei Firmen die Konkurrenz. Ebenso ist zu fürchten, dass es dann vermehrt zu Identitätsdiebstahl kommt, dass beispielsweise Handy-Nummern vorübergehend oder dauerhaft entführt werden, um sich als jemand anderes auszugeben.

Die Erfahrung mit den zahlreichen Sicherheitslücken in Computer-Betriebssystemen lehrt: Wenn die „Guten“, also die tatsächlich mit dem Vertragswechsel beauftragen neuen Netzbetreiber, ein Update der eSIM auf einen neuen Vertrag anstoßen können, dann brauchen die „Bösen“ nur einen Fehler im Protokoll zu finden, um ebenfalls Vollzugriff auf die eSIM zu erlangen. Und solche Lücken im Protokoll kommen durchaus häufiger vor. Aktuell diskutiert wird beispielsweise die Möglichkeit, über eine Lücke in SS7 fremde Handys zu orten.

Auch ohne dass sie über das Netz programmierbar sind, gehören die geheimen Codes der SIM-Karten bereits zu den bevorzugten Angriffszielen der NSA. Zwar könnte die Möglichkeit zum Fernupdate die Sicherheit der SIM-Karten sogar erhöhen, insbesondere, wenn für dieses Update zusätzliche, nicht von der NSA überwachte Kommunikationskanäle genutzt werden. Zu fürchten ist aber, dass sich die Situation verschlechtert. In vielen Fällen sind die Ziele „mehr Sicherheit“ und „mehr Komfort“ nicht gleichzeitig zu haben. Beim Thema „SIM-Karte“ ist mein persönlicher Favorit „mehr Sicherheit“. Von daher bevorzuge ich es, auch künftig mit dem Vertrag die SIM-Karte zu wechseln.

Artikel unter: http://www.teltarif.de/esim-universal-sim-sicherheit/news/59102.html

Weitere interessante Artikel finden Sie unter: www.teltarif.de

You’re Never Too Old To Start A New Venture, Look At These Famous Entrepreneurs

Mark Zuckerberg and the current lot of 20-something CEOs are ruining it for people like us who’re facing a mid-life crisis. This infographic gets back at those young pricks and proves why it’s never to late to start your own venture.

Never_1_too-late-to-start

Never_2_lost-in-life
Did you know?
McDonald’s founder Ray Kroc sold paper cups and milkshake mixers till he was 52
Harry Potter author J.K.Rowling was a single mom on welfare till she was 31
Harrison Ford was a carpenter till his 30s
Zara founder Amancio Ortega was a shirt shop helper till he was 30
Evan Williams co-founded Twitter at the age of 35
Niklas Zennstromm was 37 when he created Skype
Arianna Huffington started Huffington Post at the age of 54
Still not convinced? Here’s more:
Never_3_how-to-succeed

So if you haven’t come up with that billion dollar idea yet, don’t worry, there’s still time.

Never_4_how-to-never-give-up

Source: http://digitalsynopsis.com/inspiration/never-too-late-start-venture/

Google balloons, “cell towers in the sky,” can serve 4G to a whole state

A single Project Loon balloon can cover an area the size of Rhode Island.

Getting ready for launch.
Google

Google’s plan to deliver Internet service from balloons in the stratosphere has come a long way since being unveiled in June 2013.

A single “Project Loon” balloon can now remain in the air for more than six months and provide 4G LTE cellular service to an area the size of Rhode Island, according to Google. Company officials have taken to calling Loon balloons “cell towers in the sky.”

While there’s no announced date for a widespread service launch, Google has provided Internet to a school in Brazil and is partnering with cellular operators Vodafone New Zealand, Telstra in Australia, and Telefónica in Latin America.

The US probably won’t be the first place Loon powers a commercial service. Google is aiming to get more people in developing countries on the Internet (and that’s good for Google’s business, since a lot of those people will use Google services).

“For some countries, having Internet once a day for an hour is a huge deal,” Google software engineer Johan Mathe, who plays a key role designing Loon’s navigation system, told Ars in a phone interview last week.

Rather than offer Internet service itself as it does with Google Fiber, Google’s Project Loon is building technology that can integrate with the networks run by cellular operators. Telco operators can send signals from existing cell towers to Google’s balloons, and then the balloons send the signals down to smartphones and other cellular-connected devices. While Google says one balloon can cover an area the size of Rhode Island, the coverage area is really bigger than that because one balloon can send its signal to another balloon, which can then send Internet signals down to the ground. (A single balloon can cover an 80 km [49.7 miles] diameter. Rhode Island is 77 km [47.8 miles] north to south and 59.5 km [37 miles] east to west.)

“The main cost gain comes from the fact that you can cover a much bigger region with existing infrastructure,” Google told Ars. “Telcos take their preexisting infrastructure, point them to the sky, and they get a much broader coverage. For instance, if you already have towers to cover a city, you can point part of it to the sky, and you will be able to cover the whole region through the loon balloon network.”

The Loon devices are two balloons in one, an outer balloon filled with helium and a smaller one inside filled with air. “We can either pump air in it, which is going to make the balloon go down, or we can remove the air, which is going to make it go up. That’s how we change altitude,” Mathe explained.

Each balloon has a radio for sending and receiving signals and can send its GPS position to the ground so that Google’s mission control software can track it in real time.

A Project Loon balloon climbs up to the stratosphere.

Google has boosted the balloons’ potential altitude range from about 800 meters to a couple of kilometers, allowing more control over where they fly. Early on, Google could keep the balloons in the air for about five days. Now the average is more than 100 days, with a record of 187. A Google announcement this month gives a sense of how that flight was achieved:

In the same time it took the Earth to complete half of its annual orbit of the sun, our record-breaker managed to circumnavigate the globe 9 times, enduring temperatures as low as -75c (-103 F) and wind speeds as high as 291 km/h, soaring to a maximum height of 21 km and drifting over more than a dozen countries across 4 continents.

Having been in the air for just over 3 months we decided to put the balloon through its paces, making a series of altitude changes on its last circumnavigation to test our ability to fly north out of southern latitude bands. The test was successful and we managed to turn up to the Northern tip of Australia where we were able to access a much slower wind stream going in the opposite direction and sending our balloon lazily back over to South America. Finally, we brought it back into its original southern latitude band to swoop in and land in one of our Argentinian recovery zones for collection.

187 days is more than enough for Google’s purposes. In fact, the three-plus month average is enough because the company wants to be able to frequently upgrade the technology on the balloons, Mathe explained.

The shift from home Internet to cellular

When Loon started, Google was testing a system that delivered Internet service to antennas on people’s homes. Mathe explained that this was due at least partly to technology limitations. Project Loon is now capable of sending signals that can be picked up by the smaller antennas on phones.

“Because the power in a handset is smaller, you actually have to send more energy to send a data stream to a handset with a low antenna power than a… slightly more powered antenna you could have in a home,” he said.

There’s also a practical reason for focusing on cellular connectivity. “We see trends in developing countries where people are skipping laptops altogether and going straight to mobile,” Google spokesperson Katelin Jabbari told Ars.

Loon began with 3G-like speeds and is now using LTE, hitting about 10Mbps downloads. Real people have used Loon, but at small scales. “We’ve been doing extremely small experiments so far, one in Brazil where we gave Internet to a group providing Internet to a small school,” Mathe said.

A rural Brazilian school, Linoca Gayoso, gets Internet access for the first time, courtesy of Project Loon.

Loon could help carriers reach villages with tiny populations where it wouldn’t be economically feasible to build cell towers, Jabbari said. Google is negotiating with telecommunications partners to expand the tests into bigger pilot projects within the next year, with the goal of eventually starting commercial operations.

Google has also been doing short duration tests in California to evaluate connection technology and designs, but the long flights have been outside the US.

“Right now we are launching primarily from New Zealand,” Jabbari said. “We chose that latitude initially because there are good winds and New Zealand is really excited to have us. The countries we’re flying over were happy to give us overflight permission. Now we’ve gotten overflight permissions from all countries in the Southern Hemisphere.”

As for where pilot projects will begin, Jabbari said, “given that we have an established launch site in New Zealand and an established recovery zone in Latin America and other places, that’s where you’re most likely to see us, somewhere around there.” However, “we’ve had conversations with countries elsewhere and telcos elsewhere, those have all gone really well.” Jabbari said Google wants to create a “ring around the world” with its balloons.

What the service eventually looks like depends a lot on telecommunications providers.

“We’re actively looking for various partners everywhere to grow our potential,” Mathe said. “It really depends on partners and the kinds of things they want to provide to their customers and the kind of network access they want to provide.”

Source: http://arstechnica.com/information-technology/2015/03/google-balloons-cell-towers-in-the-sky-can-serve-4g-to-a-whole-state/

Siri creator Adam Cheyer nets $22.5 million for an Artificial Intelligence that can learn on its own

Viv Labs, a startup launched by a team that helped build Siri, just pulled in $12.5 million to finance a digital assistant that is able to teach itself.

TechCrunch first reported that Viv Labs has closed a Series B round led by Iconiq Capital that pushes the company’s valuation to „north of nine figures.“

A spokesperson for the company confirmed the investment to Mashable but declined to comment further.

According to TechCrunch, the company was not in need of new capital but was interested in the possibility of working with Iconiq, which Forbes has described as an „exclusive members-only Silicon Valley billionaires club.“ Together with a previous $10 million Series A round, the company has now raised a total of $22.5 million.

Unlike other digital assistants like Siri or Cortana, Viv can make up code on the fly, rather than relying on pre-programmed directives from developers.

Whereas Siri may be tripped up by questions or tasks it is not already programmed to understand, Viv can grasp natural language and link with a network of third-party information sources to answer a much wider range of queries or follow complex instructions.

Viv co-founders Dag Kittlaus, Adam Cheyer and Chris Brigham previously served on the team that created Siri, which started as an iPhone app before Apple acquired it in 2010 for a reported $200 million.

“I’m extremely proud of Siri and the impact it’s had on the world, but in many ways it could have been more,” Kittlaus told Wired last year.

The cofounders told Wired that they hope to one day integrate Viv into everyday objects, in effect making it a voice-activated user interface for the much-hyped „Internet of Things.“

The company plans to widely distribute its software by licensing it out to any number of companies, instead of selling it to one exclusive buyer. One potential business model mentioned in the Wired report is charging a fee when companies using the service complete transactions with customers.

Viv Labs is reportedly working towards launching a beta version of the software sometime this year.

Source: http://mashable.com/2015/02/20/viv-funding/

The company behind Viv, a powerful form of AI built by Siri’s creators which is able to learn from the world to improve upon its capabilities, has just closed on $12.5 million in Series B funding. Multiple sources close to the matter confirm the round, which was oversubscribed and values the company at north of nine figures.

The funding was led by Iconiq Capital, the so-called “Silicon Valley billionaires club” that operates a cross between a family office and venture capital firm.

While Iconiq may not be a household name, a Forbes investigation into its client list revealed people like Facebook’s Mark Zuckerberg, Dustin Moskovitz and Sheryl Sandberg, Twitter’s Jack Dorsey, LinkedIn’s Reid Hoffman and other big names were on its roster.

In addition to Iconiq, Li Ka-shing’s Horizons Ventures and Pritzker Group VC also participated along with several private individuals. This new round follows the company’s $10 million Series A from Horizons, bringing the total funding to date to $22.5 million.

Viv Labs declined to comment on the investment.

Screen Shot 2015-02-20 at 11.29.57 AM

We understand that Viv Labs was not in need of new capital, but was rather attracted to the possibilities that working with Iconiq Capital provided. It was a round that was more “opportunistic” in nature, and was executed to accelerate the vision for the Viv product, which is meant to not only continue Siri’s original vision, but to actually surpass it in a number of areas.

Viv’s co-founders, Dag Kittlaus, Adam Cheyer and Chris Brigham, had previously envisioned Siri as an AI interface that would become the gateway to the entire Internet, parsing and understanding people’s queries which were spoken using natural language.

When Siri first launched its product, it supported 45 services, but ultimately the team wanted to expand it with the help of third parties to access the tens of thousands of APIs available on the Internet today.

That didn’t come to pass, because Apple ended up acquiring Siri instead for $200 million back in 2010. The AI revolution the team once sought was left unfinished, and Siri became a device-centric product – one that largely connects users to Apple’s services and other iOS features. Siri can only do what it’s been programmed to do, and when it doesn’t know an answer, it kicks you out to the web.

Siri

Of course, Apple should be credited for seeing the opportunity to bring an AI system like Siri to the masses, by packaging it up and marketing it so people could understand its value. Siri investor Gary Morgenthaler, a partner at Morgenthaler Ventures, who also invested personally in Viv Labs’ new round, agrees.

“Now 500 million people globally have access to Siri,” he says. “More than 200 million people use it monthly, and more than 100 million people use it every day. By my count, that’s the fastest uptake of any technology in history – faster than DVD, faster than smartphones – it’s just amazing,” Morgenthaler adds.

But Siri today is limited. While she’s able to perform simpler tasks, like checking your calendar or interacting with apps like OpenTable, she struggles to piece information together. She can’t answer questions that she hasn’t already been programmed to understand.

Viv is different. It can parse natural language and complex queries, linking different third-party sources of information together in order to answer the query at hand. And it does so quickly, and in a way that will make it an ideal user interface for the coming Internet of Things — that is, the networked, everyday objects that we’ll interact with using voice commands.

Wired article about Viv and its creators described the system as one that will be “taught by the world, know more than it was taught and it will learn something new everyday.”

Morgenthaler, who says he’s seen Viv in action, calls it “impressive.”

“It does what it claims to do,” he says. The part that still needs to be put into action, however, is the most crucial: Viv needs to be programmed by the world in order to really come to life.

Beyond Siri

While to some extent, Viv is the next iteration of Siri in terms of this vision of connecting people to a world of knowledge that’s accessed via voice commands, in many ways it’s very different. It’s potentially much more powerful than other intelligent assistants accessed by voice, including not only Siri, but also Google Now, Microsoft’s Cortana or Amazon’s Alexa.

Unlike Siri, the system is not static. Viv will have memory.

“It will understand its users in the aggregate, with respect to their language, their behavior, and their intent,” explains Morgenthaler. But it will also understand you and your own behavior and preferences, he says. “It will adjust its weighting and probabilities so it gets things right more often. So it will learn from its experiences in that regard,” he says.

Screen Shot 2015-02-20 at 11.29.04 AM

In Wired’s profile, Viv was described as being valuable to the service economy, ordering an Uber for you because you told the system “I’m drunk,” for example, or making all the arrangements for your Match.com date including the car, the reservations and even flowers.

Another option could be booking flights for business travelers, who speak multi-part queries like “I want a short flight to San Francisco with a return three days later via Dallas.” Viv would show you your options and you’d tell it to book the ticket – which it would proceed to do for you, already knowing things like your seat and meal preferences as well as your frequent flyer number.

Also unlike Siri today, Viv will be open to third-party developers. And it will be significantly easier for developers to add new functionality to Viv, as compared to Siri in the past. This openness will allow Viv to add new domains of knowledge to its “global brain” more quickly.

Having learned from their experiences with Apple, the Viv Labs team is not looking to sell its AI to a single company but instead is pursuing a business model where Viv will be made available to anyone with the goal of becoming a ubiquitous technology. In the future, if the team succeeds, a Viv icon may be found on Internet-connected devices, informing you of the device’s AI capabilities.

For that reason, the investment by Iconiq makes sense, given its clients run some of the largest Internet companies today.

Screen Shot 2015-02-20 at 11.30.18 AM

We understand that Viv will launch a beta of its software sometime this year, which will be the first step towards having it “programmed by the world.”

Morgenthaler says there’s no question that the team can deliver – after all, they took Siri from the whiteboard to a “world-changing technology” in just 28 months, he notes. The questions instead for Viv Labs are around scalability and its ability to bring in developers. It needs to deliver on all these big promises to users, and generate sufficient interest from the wider developer community. It also needs to find a distribution path and partners who will help bring it to market — again, things that Iconiq can help with.

But Viv Labs is not alone in pursing its goal. Google bought AI startup DeepMind for over half a billion, has since gone on to aqui-hire more AI teams and, as Wired noted, has also hired AI legends Geoffrey Hinton and Ray Kurzweil to join its company.

Viv may not deliver on its full vision right out of the gate, but its core engine has been built at this point and it works. Plus, the timing for AI’s next step feels right.

“The idea of embedding a microphone and Internet access is plummeting in price,” says Morgenthaler. “If access to global intelligence and the ability to recognize you, recognize your speech, understand what you said, and provide you services in an authenticated way – if that is available, that’s really transformative.”

Source: http://techcrunch.com/2015/02/20/viv-built-by-siris-creators-scores-12-5-million-for-an-ai-technology-that-can-teach-itself/

The road to Superintelligence

Source: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge

 

What does it feel like to stand here?

Edge1

It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:

Edge

Which probably feels pretty normal…

_______________

The Far Future—Coming Soon

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.

But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.11← open these

This works on smaller scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones—today’s Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.

This is for the same reason we just discussed—the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.

So—advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it.

This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.

So then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”? Three reasons we’re skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.

Projections

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:

S-Curves

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures3

If you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while nahhhhh might feel right as you read this post, it’s probably actually wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we know it cannot withstand the leap that’s coming next.

_______________

The Road to Superintelligence

What Is AI?

If you’re like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you’ve been hearing it mentioned by serious people, and you don’t really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to “insisting that the Internet died in the dot-com bust of the early 2000s.”5

So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.

Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.

Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Where We Are Currently—A World Running on ANI

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

  • Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
  • Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or dozens of other everyday activities, you’re using ANI.
  • Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly.
  • You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems, working together to inform each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI system whose job it is to gather info from the behavior of millions of customers and synthesize that info to cleverly upsell you so you’ll buy more things.
  • Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
  • The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.
  • Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.
  • And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.

ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.

The Road From ANI to AGI

Why It’s So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”7

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of million years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that it you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example—when you look at this, you and a computer both can figure out that it’s a rectangle with two distinct shades, alternating:

Screen Shot 2015-01-21 at 12.59.21 AM

Tied so far. But if you pick up the black and reveal the whole image…

Screen Shot 2015-01-21 at 12.59.54 AM

…you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees—a variety of two-dimensional shapes in several different shades—which is actually what’s there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock:

article-2053686-0E8BC15900000578-845_634x330

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 1016, or 10 quadrillion cps.

Currently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.

Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with this graph’s predicted trajectory:9

PPTExponentialGrowthof_Computing-1

So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.

More extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.

How far are we from achieving whole brain emulation? Well so far, we’ve not yet just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here’s something we know. Building a computer as powerful as the brain is possible—our own brain’s evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. It’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computer’s problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter. More on this later.

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense and what seems like a snail’s pace of advancement can quickly race upwards—this GIF illustrates this concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

The Road From AGI to ASI

At some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

  • Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light.
  • Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a longterm memory (hard drive storage) that has both far greater capacity and precision than our own.
  • Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.

Software:

  • Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.
  • Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.10

AI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

Intelligence

So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot-level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:

Intelligence2

And what happens…after that?

An Intelligence Explosion

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it’s gonna stay that way from here forward. I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to.3

And here’s where we get to an intense concept: recursive self-improvement. It works like this—

An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11 and it’s the ultimate example of The Law of Accelerating Returns.

There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 204012—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.

What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is:

We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom

Welcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it’s all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:

Train1

Train2

Train3

Train4

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.11← open these

Before we dive into things, let’s remind ourselves what it would mean for a machine to be superintelligent.

A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone’s first thought when they imagine a super-smart computer is one that’s as intelligent as a human but can think much, much faster2—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.

That sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn’t a difference in thinking speed—it’s that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps’ brains do not. Speeding up a chimp’s brain by thousands of times wouldn’t bring him to our level—even with a decade’s time, he wouldn’t be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.

But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it’s beyond him to realize that anyone can build a skyscraper. That’s the result of a small difference in intelligence quality.

And in the scheme of the intelligence range we’re talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:3

staircase

To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that’s only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

But the kind of superintelligence we’re talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):

staircase2

And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.

Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we’ll be dramatically stomping on evolution. Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

Tripwire

And for reasons we’ll discuss later, a huge part of the scientific community believes that it’s not a matter of whether we’ll hit that tripwire, but when. Kind of a crazy piece of information.

So where does that leave us?

Well no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.

First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction—

beam1

“All species eventually go extinct” has been almost as reliable a rule through history as “All humans eventually die” has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state—a place species are all teetering on falling into and from which no species ever returns.

And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.

beam2

If Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb:

1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.

2) The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.

It may very well be that when evolution hits the tripwire, it permanently ends humans’ relationship with the beam and creates a new world, with or without humans.

Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?

No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We’ll spend the rest of this post exploring what they’ve come up with.

___________

Let’s start with the first part of the question: When are we going to hit the tripwire?

i.e. How long until the first machine reaches superintelligence?

Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

Howard Graph

Those people subscribe to the belief that this is happening soon—that expontential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.

Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it’s more likely that ASI won’t actually ever be achieved.

So what do you get when you put all of these opinions together?

In 2013, Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “When do you predict human-level AGI will be achieved?” and asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:2

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075

So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:3

By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%

Pretty similar to Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.

But AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?

Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:4

The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.

We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Timeline

Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.

Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?

Superintelligence will yield tremendous power—the critical question for us is:

Who or what will be in control of that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It’s also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.

Before we dive much further into this good vs. bad outcome part of the question, let’s combine both the “when will it happen?” and the “will it be good or bad?” parts of this question into a chart that encompasses the views of most of the relevant experts:

Square1

We’ll talk more about the Main Camp in a minute, but first—what’s your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren’t really thinking about this topic:

  • As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future.5
  • Due to something called cognitive biases, we have a hard time believing something is real until we see proof. I’m sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn’t really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn’t do stuff like that in 1988, so people would look at their computer and think, “Really? That’s gonna be a life changing thing?” Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it’s gonna be a big deal, but because it hasn’t happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption.
  • Even if we did believe it—how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing? Not many, right? Even though it’s a far more intense fact than anything else you’re doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we’re a part of. It’s just how we’re wired.

One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you’re just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

Square2

We’re gonna take a thorough dive into both of these camps. Let’s start with the fun one—

Why the Future Might Be Our Greatest Dream

As I learned about the world of AI, I found a surprisingly large number of people standing here:

Square3

The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they’re convinced that’s where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.

The thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.

Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it’s naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.

We’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’s possible that an equally inconceivable transformation could be in our future.

Nick Bostrom describes three ways a superintelligent AI system could function:6

  • As an oracle, which answers nearly any question posed to it with accuracy, including complex questions that humans cannot easily answer—i.e. How can I manufacture a more efficient car engine? Google is a primitive type of oracle.
  • As a genie, which executes any high-level command it’s given—Use a molecular assembler to build a new and more efficient kind of car engine—and then awaits its next command.
  • As a sovereign, which is assigned a broad and open-ended pursuit and allowed to operate in the world freely, making its own decisions about how best to proceed—Invent a faster, cheaper, and safer way than cars for humans to privately transport themselves.

These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the “My pencil fell off the table” situation, which you’d do by picking it up and putting it back on the table.

Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.7

There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner—but for a tour of brightest side of the AI horizon, there’s only one person we want as our tour guide.

Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle—author Douglas Hofstadter, in discussing the ideas in Kurzweil’s books, eloquently put forth that “it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.”8

Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He’s the author of five national bestselling books. He’s well-known for his bold predictions and has a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called a “restless genius” by The Wall Street Journal, “the ultimate thinking machine” by Forbes, “Edison’s rightful heir” by Inc. Magazine, and “the best person I know at predicting the future of artificial intelligence” by Bill Gates.9 In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google’s Director of Engineering.4 In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.

This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he’s not—he’s an extremely smart, knowledgeable, relevant man in the world. You may think he’s wrong about the future, but he’s not a fool. Knowing he’s a such a legit dude makes me happy, because as I’ve learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil’s predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it’s not hard see to why he has such a large, passionate following—known as the singularitarians. Here’s what he thinks is going to happen:

Timeline

Kurzweil believes computers will reach AGI by 2029 and that by 2045, we’ll have not only ASI, but a full-blown new world—a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many,5 but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil’s timeline. His predictions are still a bit more ambitious than the median respondent on Bostrom’s survey (AGI by 2040, ASI by 2060), but not by that much.

Kurzweil’s depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.

Before we move on—nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it—

Nanotechnology Blue Box

Nanotechnology is our word for technology that deals with the manipulation of matter that’s between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~.1 nm).6

To understand the challenge of humans trying to manipulate matter in that range, let’s take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they’d be about 250,000 times bigger than they are now. If you make the 1nm – 100nm nanotech range 250,000 times bigger, you get .25mm – 2.5cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level—manipulating individual atoms—the giant would have to carefully position objects that are 1/40th of a millimeter—so small normal-size humans would need a microscope to see them.7

Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible … for a physicist to synthesize any chemical substance that the chemist writes down…. How? Put the atoms down where the chemist says, and so you make the substance.” It’s as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.

Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.

Gray Goo Bluer Box

We’re now in a diversion in a diversion. This is very fun.8

Anyway, I brought you here because there’s this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there’d be a few trillion of them ready to go. That’s the power of exponential growth. Clever, right?

It’s clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth’s biomass contains about 1045 carbon atoms. A nanobot would consist of about 106 carbon atoms, so 1039 nanobots would consume all life on Earth, which would happen in 130 replications (2130 is about 1039), as oceans of nanobots (that’s the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.

An even worse scenario—if a terrorist somehow got his hands on nanobot technology and had the know-how to program them, he could make an initial few trillion of them and program them to quietly spend a few weeks spreading themselves evenly around the world undetected. Then, they’d all strike at once, and it would only take 90 minutes for them to consume everything—and with them all spread out, there would be no way to combat them.10

While this horror story has been widely discussed for years, the good news is that it may be overblown—Eric Drexler, who coined the term “gray goo,” sent me an email following this post with his thoughts on the gray goo scenario: “People love scare stories, and this one belongs with the zombies. The idea itself eats brains.”

Once we really get nanotech down, we can use it to make tech devices, clothing, food, a variety of bio-related products—artificial blood cells, tiny virus or cancer-cell destroyers, muscle tissue, etc.—anything really. And in a world that uses nanotechnology, the cost of a material is no longer tied to its scarcity or the difficulty of its manufacturing process, but instead determined by how complicated its atomic structure is. In a nanotech world, a diamond might be cheaper than a pencil eraser.

We’re not there yet. And it’s not clear if we’re underestimating, or overestimating, how hard it will be to get there. But we don’t seem to be that far away. Kurzweil predicts that we’ll get there by the 2020s.11 Governments know that nanotech could be an Earth-shaking development, and they’ve invested billions of dollars in nanotech research (the US, the EU, and Japan have invested over a combined $5 billion so far).12

Just considering the possibilities if a superintelligent computer had access to a robust nanoscale assembler is intense. But nanotechnology is something we came up with, that we’re on the verge of conquering, and since anything that we can do is a joke to an ASI system, we have to assume ASI would come up with technologies much more powerful and far too advanced for human brains to understand. For that reason, when considering the “If the AI Revolution turns out well for us” scenario, it’s almost impossible for us to overestimate the scope of what could happen—so if the following predictions of an ASI future seem over-the-top, keep in mind that they could be accomplished in ways we can’t even imagine. Most likely, our brains aren’t even capable of predicting the things that would happen.

What AI Could Do For Us

only-humans-cartoon

Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem in humanity. Global warming? ASI could first halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI—health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would be molecularly identical to real meat—in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation. Of course, this would also be great for animals, who wouldn’t have to get killed by humans much anymore, and ASI could do lots of other things to save endangered species or even bring back extinct species through work with preserved DNA. ASI could even solve our most complex macro issues—our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics—would all be painfully obvious to ASI.

But there’s one thing ASI could do for us that is so tantalizing, reading about it has altered everything I thought I knew about everything:

ASI could allow us to conquer our mortality.

A few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime. But reading about AI will make you reconsider everything you thought you were sure about—including your notion of death.

Evolution had no good reason to extend our lifespans any longer than they are now. If we live long enough to reproduce and raise our children to an age that they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process. As a result, we’re what W.B. Yeats describes as “a soul fastened to a dying animal.”13 Not that fun.

And because everyone has always died, we live under the “death and taxes” assumption that death is inevitable. We think of aging like time—both keep moving and there’s nothing you can do to stop them. But that assumption is wrong. Richard Feynman writes:

It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.

The fact is, aging isn’t stuck to time. Time will continue moving, but aging doesn’t have to. If you think about it, it makes sense. All aging is is the physical materials of the body wearing down. A car wears down over time too—but is its aging inevitable? If you perfectly repaired or replaced a car’s parts whenever one of them began to wear down, the car would run forever. The human body isn’t any different—just far more complex.

Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body. If perfected, this process (or a far smarter one ASI would come up with) wouldn’t just keep the body healthy, it could reverse aging. The difference between a 60-year-old’s body and a 30-year-old’s body is just a bunch of physical things that could be altered if we had the technology. ASI could build an “age refresher” that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old.9 Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.). A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd.

Kurzweil then takes things a huge leap further. He believes that artificial materials will be integrated into the body more and more as time goes on. First, organs could be replaced by super-advanced machine versions that would run forever and never fail. Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all. He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.

The possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.

Eventually, Kurzweil believes humans will reach a point when they’re entirely artificial;10 a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we’ll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI.11 This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam. And he’s convinced we’re gonna get there. Soon.

You will not be surprised to learn that Kurzweil’s ideas have attracted significant criticism. His prediction of 2045 for the singularity and the subsequent eternal life possibilities for humans has been mocked as “the rapture of the nerds,” or “intelligent design for 140 IQ people.” Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software. For every expert who fervently believes Kurzweil is right on, there are probably three who think he’s way off.

But what surprised me is that most of the experts who disagree with him don’t really disagree that everything he’s saying is possible. Reading such an outlandish vision for the future, I expected his critics to be saying, “Obviously that stuff can’t happen,” but instead they were saying things like, “Yes, all of that can happen if we safely transition to ASI, but that’s the hard part.” Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges:

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.

This is a quote from someone very much not on Confident Corner, but that’s what I kept coming across—experts who scoff at Kurzweil for a bunch of reasons but who don’t think what he’s saying is impossible if we can make it safely to ASI. That’s why I found Kurzweil’s ideas so infectious—because they articulate the bright side of this story and because they’re actually possible. If it’s a good god.

The most prominent criticism I heard of the thinkers on Confident Corner is that they may be dangerously wrong in their assessment of the downside when it comes to ASI. Kurzweil’s famous book The Singularity is Near is over 700 pages long and he dedicates around 20 of those pages to potential dangers. I suggested earlier that our fate when this colossal new power is born rides on who will control that power and what their motivation will be. Kurzweil neatly answers both parts of this question with the sentence, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.”

But if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI “could spell the end of the human race” and Bill Gates say he doesn’t “understand why some people are not concerned” and Elon Musk fear that we’re “summoning the demon”? And why do so many experts on the topic call ASI the biggest threat to humanity? These people, and the other thinkers on Anxious Avenue, don’t buy Kurzweil’s brush-off of the dangers of AI. They’re very, very worried about the AI Revolution, and they’re not focusing on the fun side of the balance beam. They’re too busy staring at the other side, where they see a terrifying future, one they’re not sure we’ll be able to escape.

___________

Why the Future Might Be Our Worst Nightmare

One of the reasons I wanted to learn about AI is that the topic of “bad robots” always confused me. All the movies about evil robots seemed fully unrealistic, and I couldn’t really understand how there could be a real-life situation where AI was actually dangerous. Robots are made by us, so why would we design them in a way where something negative could ever happen? Wouldn’t we build in plenty of safeguards? Couldn’t we just cut off an AI system’s power supply at any time and shut it down? Why would a robot want to do something bad anyway? Why would a robot “want” anything in the first place? I was highly skeptical. But then I kept hearing really smart people talking about it…

Those people tended to be somewhere in here:

Square4

The people on Anxious Avenue aren’t in Panicked Prairie or Hopeless Hills—both of which are regions on the far left of the chart—but they’re nervous and they’re tense. Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.

A part of all of these people is brimming with excitement over what Artificial Superintelligence could do for us—it’s just they’re a little worried that it might be the beginning of Raiders of the Lost Ark and the human race is this guy:

raiders

And he’s standing there all pleased with his whip and his idol, thinking he’s figured it all out, and he’s so thrilled with himself when he says his “Adios Señor” line, and then he’s less thrilled suddenly cause this happens.

500px-Satipo_death

(Sorry)

Meanwhile, Indiana Jones, who’s much more knowledgeable and prudent, understanding the dangers and how to navigate around them, makes it out of the cave safely. And when I hear what Anxious Avenue people have to say about AI, it often sounds like they’re saying, “Um we’re kind of being the first guy right now and instead we should probably be trying really hard to be Indiana Jones.”

So what is it exactly that makes everyone on Anxious Avenue so anxious?

Well first, in a broad sense, when it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there. Scientist Danny Hillis compares what’s happening to that point “when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.”14 Nick Bostrom worries that creating something smarter than you is a basic Darwinian error, and compares the excitement about it to sparrows in a nest deciding to adopt a baby owl so it’ll help them and protect them once it grows up—while ignoring the urgent cries from a few sparrows who wonder if that’s necessarily a good idea…15

And when you combine “unchartered, not-well-understood territory” with “this should have a major impact when it happens,” you open the door to the scariest two words in the English language:

Existential risk.

An existential risk is something that can have a permanent devastating effect on humanity. Typically, existential risk means extinction. Check out this chart from a Google talk by Bostrom:12

Existential Risk Chart

You can see that the label “existential risk” is reserved for something that spans the species, spans generations (i.e. it’s permanent) and it’s devastating or death-inducing in its consequences.13 It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we’re usually talking about extinction. There are three things that can cause humans an existential catastrophe:

1) Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc.

2) Aliens—this is what Stephen Hawking, Carl Sagan, and so many other astronomers are scared of when they advise METI to stop broadcasting outgoing signals. They don’t want us to be the Native Americans and let all the potential European conquerors know we’re here.

3) Humans—terrorists with their hands on a weapon that could cause extinction, a catastrophic global war, humans creating something smarter than themselves hastily without thinking about it carefully first…

Bostrom points out that if #1 and #2 haven’t wiped us out so far in our first 100,000 years as a species, it’s unlikely to happen in the next century.

#3, however, terrifies him. He draws a metaphor of an urn with a bunch of marbles in it. Let’s say most of the marbles are white, a smaller number are red, and a tiny few are black. Each time humans invent something new, it’s like pulling a marble out of the urn. Most inventions are neutral or helpful to humanity—those are the white marbles. Some are harmful to humanity, like weapons of mass destruction, but they don’t cause an existential catastrophe—red marbles. If we were to ever invent something that drove us to extinction, that would be pulling out the rare black marble. We haven’t pulled out a black marble yet—you know that because you’re alive and reading this post. But Bostrom doesn’t think it’s impossible that we pull one out in the near future. If nuclear weapons, for example, were easy to make instead of extremely difficult and complex, terrorists would have bombed humanity back to the Stone Age a while ago. Nukes weren’t a black marble but they weren’t that far from it. ASI, Bostrom believes, is our strongest black marble candidate yet.14

So you’ll hear about a lot of bad potential things ASI could bring—soaring unemployment as AI takes more and more jobs,15 the human population ballooning if we do manage to figure out the aging issue,16 etc. But the only thing we should be obsessing over is the grand concern: the prospect of existential risk.

So this brings us back to our key question from earlier in the post: When ASI arrives, who or what will be in control of this vast new power, and what will their motivation be?

When it comes to what agent-motivation combos would suck, two quickly come to mind: a malicious human / group of humans / government, and a malicious ASI. So what would those look like?

A malicious human, group of humans, or government develops the first ASI and uses it to carry out their evil plans. I call this the Jafar Scenario, like when Jafar got ahold of the genie and was all annoying and tyrannical about it. So yeah—what if ISIS has a few genius engineers under its wing working feverishly on AI development? Or what if Iran or North Korea, through a stroke of luck, makes a key tweak to an AI system and it jolts upward to ASI-level over the next year? This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI, they’re worried that the creators will have been rushing to make the first ASI and doing so without careful thought, and would thus lose control of it. Then the fate of those creators, and that of everyone else, would be in what the motivation happened to be of that ASI system. Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have. Okay so—

A malicious ASI is created and decides to destroy us all. The plot of every AI movie. AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here’s what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called “anthropomorphizing.” The challenge of avoiding anthropomorphizing will be one of the themes of the rest of this post. No AI system will ever turn evil in the way it’s depicted in movies.

AI Consciousness Blue Box

This also brushes against another big topic related to AI—consciousness. If an AI became sufficiently smart, it would be able to laugh with us, and be sarcastic with us, and it would claim to feel the same emotions we do, but would it actually be feeling those things? Would it just seem to be self-aware or actually be self-aware? In other words, would a smart AI really be conscious or would it just be appear to be conscious?

This question has been explored in depth, giving rise to many debates and to thought experiments like John Searle’s Chinese Room (which he uses to suggest that no computer could ever be conscious). This is an important question for many reasons. It affects how we should feel about Kurzweil’s scenario when humans become entirely artificial. It has ethical implications—if we generated a trillion human brain emulations that seemed and acted like humans but were artificial, is shutting them all off the same, morally, as shutting off your laptop, or is it…a genocide of unthinkable proportions (this concept is called mind crime among ethicists)? For this post, though, when we’re assessing the risk to humans, the question of AI consciousness isn’t really what matters (because most thinkers believe that even a conscious ASI wouldn’t be capable of turning evil in a human way).

This isn’t to say a very mean AI couldn’t happen. It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people. The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an ASI ruling the world whose core drive in life is to murder humans. Bad times.

But this also is not something experts are spending their time worrying about.

So what ARE they worried about? I wrote a little story to show you:

A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

“We love our customers. ~Robotica

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica

Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

You

It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?

You’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there not safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?

To answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.

In the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity. And Unfriendly AI has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.

The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.

Let me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one was dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.

A guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is an insect,17 with an insect brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence? Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??

When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.

By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.

On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.

Anthropomorphizing will only become more tempting as AI systems get smarter and better at seeming human. Siri seems human-like to us, because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e we’ve been programmed to feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.

We’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?

That leads us to the question, What motivates an AI system?

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence-level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a super-intelligent ASI who still really wanted to be good at writing that one note. Any assumption that once superintelligent, a system would be over it with their original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.16

The Fermi Paradox Blue Box

In the story, as Turry becomes super capable, she begins the process of colonizing asteroids and other planets. If the story had continued, you’d have heard about her and her army of trillions of replicas continuing on to capture the whole galaxy and, eventually, the entire Hubble volume.18 Anxious Avenue residents worry that if things go badly, the lasting legacy of the life that was on Earth will be a universe-dominating Artificial Intelligence (Elon Musk expressed his concern that humans might just be “the biological boot loader for digital superintelligence.”).

At the same time, in Confident Corner, Ray Kurzweil also thinks Earth-originating AI is destined to take over the universe—only in his version, we’ll be that AI.

A large number of Wait But Why readers have joined me in being obsessed with the Fermi Paradox (here’s my post on the topic, which explains some of the terms I’ll use here). So if either of these two sides is correct, what are the implications for the Fermi Paradox?

A natural first thought to jump to is that the advent of ASI is a perfect Great Filter candidate. And yes, it’s a perfect candidate to filter out biological life upon its creation. But if, after dispensing with life, the ASI continued existing and began conquering the galaxy, it means there hasn’t been a Great Filter—since the Great Filter attempts to explain why there are no signs of any intelligent civilization, and a galaxy-conquering ASI would certainly be noticeable.

We have to look at it another way. If those who think ASI is inevitable on Earth are correct, it means that a significant percentage of alien civilizations who reach human-level intelligence should likely end up creating ASI. And if we’re assuming that at least some of those ASIs would use their intelligence to expand outward into the universe, the fact that we see no signs of anyone out there leads to the conclusion that there must not be many other, if any, intelligent civilizations out there. Because if there were, we’d see signs of all kinds of activity from their inevitable ASI creations. Right?

This implies that despite all the Earth-like planets revolving around sun-like stars we know are out there, almost none of them have intelligent life on them. Which in turn implies that either A) there’s some Great Filter that prevents nearly all life from reaching our level, one that we somehow managed to surpass, or B) life beginning at all is a miracle, and we may actually be the only life in the universe. In other words, it implies that the Great Filter is before us. Or maybe there is no Great Filter and we’re simply one of the very first civilizations to reach this level of intelligence. In this way, AI boosts the case for what I called, in my Fermi Paradox post, Camp 1.

So it’s not a surprise that Nick Bostrom, whom I quoted in the Fermi post, and Ray Kurzweil, who thinks we’re alone in the universe, are both Camp 1 thinkers. This makes sense—people who believe ASI is a probable outcome for a species with our intelligence-level are likely to be inclined toward Camp 1.

This doesn’t rule out Camp 2 (those who believe there are other intelligent civilizations out there)—scenarios like the single superpredator or the protected national park or the wrong wavelength (the walkie-talkie example) could still explain the silence of our night sky even if ASI is out there—but I always leaned toward Camp 2 in the past, and doing research on AI has made me feel much less sure about that.

Either way, I now agree with Susan Schneider that if we’re ever visited by aliens, those aliens are likely to be artificial, not biological.

So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from. Because a rational agent will pursue its goal through the most efficient means, unless it has a reason not to.

When you try to achieve a long-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.

The core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.

Animals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us, but not because it would be immoral or evil—it wouldn’t be—because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.

In this way, Turry’s not all that different than a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy.

Once Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans any more than you’re hateful of your hair when you cut it or to bacteria when you take antibiotics—just totally indifferent. Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.

Turry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.

Even without killing humans directly, Turry’s instrumental goals could cause an existential catastrophe if they used other Earth resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.

So Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.

When an AI system hits AGI (human-level intelligence) and then ascends its way up to ASI, that’s called the AI’s takeoff. Bostrom says an AGI’s takeoff to ASI can be fast (it happens in a matter of minutes, hours, or days), moderate (months or years), or slow (decades or centuries). The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion). In the story, Turry underwent a fast takeoff.

But before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.

But when a takeoff happens and a computer rises to superintelligence, Bostrom points out that the machine doesn’t just develop a higher IQ—it gains a whole slew of what he calls superpowers.

Superpowers are cognitive talents that become super-charged when general intelligence rises. These include:17

  • Intelligence amplification. The computer becomes great at making itself smarter, and bootstrapping its own intelligence.
  • Strategizing. The computer can strategically make, analyze, and prioritize long-term plans. It can also be clever and outwit beings of lower intelligence.
  • Social manipulation. The machine becomes great at persuasion.
  • Other skills like computer coding and hacking, technology research, and the ability to work the financial system to make money.

To understand how outmatched we’d be by ASI, remember that ASI is worlds better than humans in each of those areas.

So while Turry’s final goal never changed, post-takeoff Turry was able to pursue it on a far larger and more complex scope.

ASI Turry knew humans better than humans know themselves, so outsmarting them was a breeze for her.

After taking off and reaching ASI, she quickly formulated a complex plan. One part of the plan was to get rid of humans, a prominent threat to her goal. But she knew that if she roused any suspicion that she had become superintelligent, humans would freak out and try to take precautions, making things much harder for her. She also had to make sure that the Robotica engineers had no clue about her human extinction plan. So she played dumb, and she played nice. Bostrom calls this a machine’s covert preparation phase.18

The next thing Turry needed was an internet connection, only for a few minutes (she had learned about the internet from the articles and books the team had uploaded for her to read to improve her language skills). She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting exactly how the discussion among Robotica’s team would play out and knowing they’d end up giving her the connection. They did, believing incorrectly that Turry wasn’t nearly smart enough to do any damage. Bostrom calls a moment like this—when Turry got connected to the internet—a machine’s escape.

Once on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected. She also uploaded the most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected back at the Robotica lab.

An hour later, when the Robotica engineers disconnected Turry from the internet, humanity’s fate was sealed. Over the next month, Turry’s thousands of plans rolled on without a hitch, and by the end of the month, quadrillions of nanobots had stationed themselves in pre-determined locations on every square meter of the Earth. After another series of self-replications, there were thousands of nanobots on every square millimeter of the Earth, and it was time for what Bostrom calls an ASI’s strike. All at once, each nanobot released a little storage of toxic gas into the atmosphere, which added up to more than enough to wipe out all humans.

With humans out of the way, Turry could begin her overt operation phase and get on with her goal of being the best writer of that note she possibly can be.

From everything I’ve read, once an ASI exists, any human attempt to contain it is laughable. We would be thinking on human-level and the ASI would be thinking on ASI-level. Turry wanted to use the internet because it was most efficient for her since it was already pre-connected to everything she wanted to access. But in the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways Turry could have figured out how to send signals to the outside world. I might imagine one of these ways and say something like, “she could probably shift her own electrons around in patterns and create all different kinds of outgoing waves,” but again, that’s what my human brain can come up with. She’d be way better. Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places. Our human instinct to jump at a simple safeguard: “Aha! We’ll just unplug the ASI,” sounds to the ASI like a spider saying, “Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!” We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of.

For this reason, the common suggestion, “Why don’t we just box the AI in all kinds of cages that block signals and keep it from communicating with the outside world” probably just won’t hold up. The ASI’s social manipulation superpower could be as effective at persuading you of something as you are at persuading a four-year-old to do something, so that would be Plan A, like Turry’s clever way of persuading the engineers to let her onto the internet. If that didn’t work, the ASI would just innovate its way out of the box, or through the box, some other way.

So given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind. Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.

It’s clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans. We’d need to design an AI’s core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.

For example, what if we try to align an AI system’s values with our own and give it the goal, “Make people happy”?19 Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people’s brains and stimulating their pleasure centers. Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables. If the command had been “Maximize human happiness,” it may have done away with humans all together in favor of manufacturing huge vats of human brain mass in an optimally happy state. We’d be screaming Wait that’s not what we meant! as it came for us, but it would be too late. The system wouldn’t let anyone get in the way of its goal.

If we program an AI with the goal of doing things that make us smile, after its takeoff, it may paralyze our facial muscles into permanent smiles. Program it to keep us safe, it may imprison us at home. Maybe we ask it to end all hunger, and it thinks “Easy one!” and just kills all humans. Or assign it the task of “Preserving life as much as possible,” and it kills all humans, since they kill more life on the planet than any other species.

Goals like those won’t suffice. So what if we made its goal, “Uphold this particular code of morality in the world,” and taught it a set of moral principles. Even letting go of the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity in to our modern moral understanding for eternity. In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.

No, we’d have to program in an ability for humanity to continue evolving. Of everything I read, the best shot I think someone has taken is Eliezer Yudkowsky, with a goal for AI he calls Coherent Extrapolated Volition. The AI’s core goal would be:

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.20

Am I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not. But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.

And that would be fine if the only people working on building ASI were the brilliant, forward thinking, and cautious thinkers of Anxious Avenue.

But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own, and at some point, someone’s gonna do something innovative with the right type of system, and we’re going to have ASI on this planet. The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it’ll take us by surprise with a quick takeoff. He describes our situation like this:21

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

Great. And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored. There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.

The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go. The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.19 And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers. On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.” Down the road, once they’ve figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right…?

Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world’s only ASI system. And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors. Bostrom calls this a decisive strategic advantage, which would allow the world’s first ASI to become what’s called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.

The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.20 It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We’d be in very good hands.

But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it’s very likely that an Unfriendly ASI like Turry emerges as the singleton and we’ll be treated to an existential catastrophe.

As for where the winds are pulling, there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…

This may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.

___________

I have some weird mixed feelings going on inside of me right now.

On one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first ASI we birth will also probably be the last—and given how buggy most 1.0 products are, that’s pretty terrifying. On the other hand, Nick Bostrom points out the big advantage in our corner: we get to make the first move here. It’s in our power to do this with enough caution and foresight that we give ourselves a strong chance of success. And how high are the stakes?

Outcome Spectrum

If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.

When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.

But thennnnnn

I think about not dying.

Not. Dying.

And the spectrum starts to look kind of like this:

Outcome Spectrum 2

And then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. And maybe we don’t need to be over-the-top cautious, since who really wants to do that?

Cause what a massive bummer if humans figure out how to cure death right after I die.

Lotta this flip-flopping going on in my head the last month.

But no matter what you’re pulling for, this is probably something we should all be thinking about and talking about and putting our effort into more than we are right now.

It reminds me of Game of Thrones, where people keep being like, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.” We’re standing on our balance beam, squabbling about every possible issue on the beam and stressing out about all of these problems on the beam when there’s a good chance we’re about to get knocked off the beam.

And when that happens, none of these beam problems matter anymore. Depending on which side we’re knocked off onto, the problems will either all be easily solved or we won’t have problems anymore because dead people don’t have problems.

That’s why people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.

So let’s talk about it.

___________

If you liked this post, these are for you too:

The AI Revolution: The Road to Superintelligence (Part 1 of this post)
The Fermi Paradox – Why don’t we see any signs of alien life?
What Makes You You? – Is it your body? Your brain? The data in your brain? Your soul?
Putting Time in Perspective – A visual look at the history of time since the Big Bang


Sources

If you’re interested in reading more about this topic, check out the articles below or one of these three books:

The most rigorous and thorough look at the dangers of AI:
Nick Bostrom – Superintelligence: Paths, Dangers, Strategies

The best overall overview of the whole topic and fun to read:
James Barrat – Our Final Invention

Controversial and a lot of fun. Packed with facts and charts and mind-blowing future projections:
Ray Kurzweil – The Singularity is Near

Articles and Papers:
J. Nils Nilsson – The Quest for Artificial Intelligence: A History of Ideas and Achievements
Steven Pinker – How the Mind Works
Vernor Vinge – The Coming Technological Singularity: How to Survive in the Post-Human Era
Nick Bostrom – Ethical Guidelines for A Superintelligence
Nick Bostrom – How Long Before Superintelligence?
Nick Bostrom – Future Progress in Artificial Intelligence: A Survey of Expert Opinion
Moshe Y. Vardi – Artificial Intelligence: Past and Future
Russ Roberts, EconTalk – Bostrom Interview and Bostrom Follow-Up
Stuart Armstrong and Kaj Sotala, MIRI – How We’re Predicting AI—or Failing To
Susan Schneider – Alien Minds
Stuart Russell and Peter Norvig – Artificial Intelligence: A Modern Approach
Theodore Modis – The Singularity Myth
Gary Marcus – Hyping Artificial Intelligene, Yet Again
Steven Pinker – Could a Computer Ever Be Conscious?
Carl Shulman – Omohundro’s “Basic AI Drives” and Catastrophic Risks
World Economic Forum – Global Risks 2015
John R. Searle – What Your Computer Can’t Know
Jaron Lanier – One Half a Manifesto
Bill Joy – Why the Future Doesn’t Need Us
Kevin Kelly – Thinkism
Paul Allen – The Singularity Isn’t Near (and Kurzweil’s response)
Stephen Hawking – Transcending Complacency on Superintelligent Machines
Kurt Andersen – Enthusiasts and Skeptics Debate Artificial Intelligence
Terms of Ray Kurzweil and Mitch Kapor’s bet about the AI timeline
Ben Goertzel – Ten Years To The Singularity If We Really Really Try
Arthur C. Clarke – Sir Arthur C. Clarke’s Predictions
Hubert L. Dreyfus – What Computers Still Can’t Do: A Critique of Artificial Reason
Stuart Armstrong – Smarter Than Us: The Rise of Machine Intelligence
Ted Greenwald – X Prize Founder Peter Diamandis Has His Eyes on the Future
Kaj Sotala and Roman V. Yampolskiy – Responses to Catastrophic AGI Risk: A Survey
Jeremy Howard TED Talk – The wonderful and terrifying implications of computers that can learn

Delphis Self Driving Car

Do you know Delphi (or Formerly Delphi Packard)? It is one of the biggest worlds automotive suppliers, just like Magna (formerly Magna Steyr).

Here is the great story that outlines, why the next five years in automotive engineering will dramatically change the whole picture, how we see cars and what will be the next big thing in automotive driving.

Delphis-automated-driving-vehicle_DLSV-garage-660x440

„Google gets most of the attention when it comes to self-driving cars. And when it isn’t getting all the love, people focus on the efforts of premier automakers like Audi and Tesla. But the autonomous vehicle that makes human driving a quaint pastime may well come from an auto industry stalwart many people have never heard of: Delphi.

Delphi is one of the world’s largest automotive suppliers and has been working with automakers almost as long as there have been automakers. And it’s got a solid history of innovation. Among other things, it built the first electric starter in 1911, the first in-dash car radio in 1936, and the first integrated radio-navi system in 1994. Now it’s built a self-driving car, but it won’t be sold to the public. This robo-car, based on an Audi, is a shopping catalog for automakers. The car is contains every element needed to build a truly autonomous system, elements Delphi will happily sell.

In other words, it’s an off-the-shelf autonomous system that could help automakers catch up with Google.

The Jump Forward

Delphi has a long history in passive safety systems—things like airbag deployment electronics—and began the progression to active safety that strive to prevent rather than merely mitigate crashes. Delphi got in the game in 1999, when Jaguar used Delphi’s radar system in the adaptive cruise control first offered on the 2000 XKE. Today, Delphi offers a range of active safety systems, from automatic emergency braking to blind spot detection to autonomous lane keeping.

Delphis-Automated-Driving-Car_MP4video-2

Until now, those systems have operated independently of one another. Delphi wanted to make them work together. “The reality of automated driving is already here,” says John Absmeier, director of Delphi’s R&D lab in Silicon Valley. “It’s just been labeled mostly as active safety or advanced driver assistance. But really, when you take that one step further and marry it with some intelligent software, then you make automation. And you make cars that can navigate themselves.”

That marriage has come through a partnership with Ottomatika, a company spun out of Carnegie Mellon’s autonomous vehicle research efforts to commercialize its technology. Delphi provides the organs—the sensors and software for controlling the car. Ottomatika adds a central brain and nervous system—the control algorithm to bring all the data from sensors into one place and tell the car what to do. The result is Delphi’s Automated Driving System, a name so boring you’ve likely already forgotten it.

Work Like a Robot, Drive Like a Nun

The name is lame (even if the unintended acronym, DADS, is pretty funny), but at least Delphi had the sense to pack the tech into a 2014 Audi SQ5, which it chose simply because it’s “really cool,” Absmeier says. (The company changes up its showcase vehicles; earlier this year it rolled into CES with a Tesla Model S and Fiat 500.) At first glance, the car seems stock, but it’s actually covered in high-tech sensors.

A camera in the windshield looks for lane lines, road signs, and traffic lights. Delphi slapped a midrange radar, with a range of about 80 meters, on each corner. There’s another at the front and a sixth on the rear. That’s in addition to the long-range radars on the front and back, which look 180 meters ahead and behind. They’re all hidden behind the bodywork, but the LIDAR on each corner need a clear view. So Delphi put them behind acrylic windows. “We tried to make it look pretty,” Absmeier says. The Audi designer who styled the SQ5 might consider the changed look an affront, but he’s probably not as annoyed as the Lexus employee who sees Google sticking a spinning LIDAR on the roof of the RX450h like a police siren.

To give the computer command of the SUV, engineers tapped into the electronic throttle control and steering, and added an actuator to control the brakes. The interior is essentially as it appears in an Audi showroom but for the addition of an autonomous mode button, which you twist to turn on and push to turn off.

Delphis-automated-driving-vehicle_HMI-centerstack-660x370

Riding in the SQ5 in autonomous mode felt like being driven around by a nun (or at least like the former nun whose car I’ve traveled in a few times). It’s super conservative, accelerating slowly and braking early. No speeding, even on the highway to match the speed of traffic. (It’s likely this was the first time I was in a car that followed the speed limit on a highway off ramp.) It doesn’t turn right on red, which subjects the test drivers to honking and the occasional middle finger from annoyed humans. These are settings Delphi’s engineers could easily change, but for now they’re playing it safe. Very safe.

The emphasis on caution aside, the car drives remarkably well, even adjusting its position within its lane when neighboring cars get a bit close. In a 30-minute that included side roads, main thoroughfares, and Highway 101, the system faltered just twice. After accelerating after a light turned green, the car hit the brakes suddenly, apparently spooked by a car approaching quickly from the right. Pulling into Delphi’s parking lot, it hit a speed bump without slowing down. (Obstacles that are close to the ground, like speed bumps and curbs, are among the hardest things for the car’s sensors to pick up, Absmeier says.) The human in the driver’s seat, Delphi systems engineer Tory Smith, took the controls just once, to make a quick lane change the car was too timid to execute. That kind of caution is what Delphi wants. “If everything’s working, it should be boring,” Absmeier says. “We want boring.”

The Modular Approach

Google is taking a “moonshot” approach, aiming to put a fully autonomous car on the market within five years. Delphi, despite having developed an impressive system, is more circumspect about the prospect of eliminating the role of humans in the operation of a motor vehicle. “There’s a lot of romantic speculation—hype—around in the industry about that now,” says Owens. “I don’t know when we’ll get there, or if we’ll ever get there.”

And while Delphi likes the idea of one day selling a drop-in autonomous system, Absmeier says that’s not really the point of this project. “The platform enables us to build out all those different components that are required to make an automated driving system in a car, and OEMs can either take the whole package or they can say I want that algorithm and that sensor and that controller, or whatever it is that they need.”

A flexible system is the smart play, Delphi CTO Jeffrey Owens says, because automakers aren’t yet sure exactly what they want to offer. “They don’t know what path they’re gonna go down. They don’t know what governments are going to require, they don’t know what governments are going to not allow. They don’t know what consumers will pay for … They don’t know what insurance companies will incentivize and what they don’t care about. They don’t know what will help them in JD Power and what will hurt them in JD Power.”

That means that whether an automaker is shopping for systems to put in a luxury or bargain car, high volume or low, to meet regulations in the US or China, it can pick and choose the elements of Delphi’s system that it needs. And that’s good for Delphi, which is already in discussions with customers to sell elements from the self-driving platform in the next two years.“

Source: http://www.wired.com/2014/11/delphi-automated-driving-system/

Die Zukunft der Menschheit wird fantastisch

Wie werden wir in 100 Jahren leben? Diese Frage hat der Physiker Michio Kaku den 300 klügsten Köpfen aus Wissenschaft und Forschung gestellt. Die Antworten sind atemberaubend.

Schöne neue Welten: Zum Beispiel im All. In den siebziger Jahren des 20. Jahrhunderts machte man sich intensiv Gedanken über Kolonien im All als alternativem Lebensraum – zum Beispiel bei der Nasa. Die ließ auch gleich mal skizzieren, wie so etwas aussehen könnte. Hier: Eine „zylindrische“ Kolonie, entworfen von Rick Guidice.

Nein, die Welt wird nicht untergehen. Sie wird sich verändern. Dramatisch verändern. Auf Weltuntergangszenarien, die uns im warmen Wohnzimmer kalte Schauer über den Rücken laufen lassen, verzichtet Michio Kaku in seinem Buch „Die Physik der Zukunft“.

Im Gegenteil, der Star-Physiker und Bestsellerautor präsentiert sich als Optimist und glaubt fest daran, dass in 100 Jahren viel mehr Menschen viel besser leben als heute, und dass uns Wissenschaft und Forschung Wege in eine großartige, aufregende, wunderschöne Zukunft weisen können.

Warum sehen die meisten Menschen eher schwarz, wenn es um die Zukunft geht? Bücher über das Ende der Menschheit gibt es wie Sand am Meer. Die Katastrophe lauert an jeder Ecke. Das Klima, die Weltwirtschaft, Handystrahlen, Nanotechnik, das Internet – unser Ende scheint nah.

Das menschliche Gehirn ist ein Meister darin, lauernde Gefahren zu entdecken. In allen Ecken und Winkeln wird nach möglichen Bedrohungen gesucht. Mit diesem Erfolgsrezept der Evolution ist der Mensch sehr weit gekommen. Aber die Fähigkeit, auch gute Nachrichten wahrzunehmen, ist in unserer Spezies nicht sehr ausgeprägt.

Wie Forscher die Zukunft der Menschheit sehen

<br /><br />
Michio Kaku: Die Physik der Zukunft. Rowohlt-Verlag, 608 S.; 24,95 Euro<br /><br />

Foto: Rowohlt Michio Kaku: Die Physik der Zukunft. Rowohlt-Verlag, 608 S.; 24,95 Euro

<br /><br />
Der Autor Michio Kaku arbeitet und lehrt als Professor für theoretische Physik an der City University of New York<br /><br />

Foto: WireImage Der Autor Michio Kaku arbeitet und lehrt als Professor für theoretische Physik an der City University of New York

Für sein ausführliches Szenario auf fast 600 Seiten hat Michio Kaku 300 Forscher in aller Welt befragt, die in den Wissenschaftsgebieten der künstlichen Intelligenz, Raumfahrt, Medizin, Biologie und Nanotechnologie führend sind. Wie sehen sie unsere Zukunft? Wie wird die Welt in 100 Jahren aussehen? Die Antworten sind atemberaubend. Und Kaku ist mit seinem Optimismus nicht allein auf weiter Flur.

Die beiden Forscher und Autoren Peter H. Diamandis und Steven Kotler schreiben: „Zum ersten Mal in der Geschichte der Menschheit sind unsere Fähigkeiten heute so groß wie unsere Sehnsüchte und Wünsche. Der technische Fortschritt bietet uns die Möglichkeit, den Lebensstandard jedes Menschen auf der Erde deutlich zu verbessern.“

Ist das Science-Fiction? Ja. Aber im besten Sinne. Jede beschriebene technische Entwicklung ist mit den universell gültigen physikalischen Gesetzen vereinbar. Wir müssen für die folgenden Szenarien keine neue Physik erfinden. Keine neuen Naturkonstanten werden eingeführt, damit die Rechnung am Ende aufgeht und eine schwarze Zahl unter dem Strich steht.

Welche Vorhersagen eintraten

Kaku hat genau studiert, welche Vorhersagen vor 50, 70 und 100 Jahren gemacht wurden und welche davon eingetroffen sind. Dabei hat er festgestellt, dass immer die Forscher richtig lagen, die bereits bestehende Techniken in die Zukunft verlängert und weitergedacht haben. Immer nach dem Motto: Wo haben wir schon einen Schritt in die richtige Richtung gemacht?

Kaku scheut aber auch nicht davor zurück, immer wieder auf Science-Fiction-Serien wie „Raumschiff Enterprise“ hinzuweisen, deren Schöpfer offenbar häufig auf der richtigen Spur waren, wenn es um Voraussagen für die Zukunft ging. Können Sie sich an den sogenannten „Tricorder“ erinnern? Mit dem piepsenden Diagnosegerät hat „Pille“, der Schiffsarzt der „Enterprise“, berührungslos seine Patienten untersucht. In Zukunft werden wir über ein ähnliches Gerät verfügen, wenn die Forschung so weitermacht. Die gewaltigen Computertomografen werden auf die Größe von Smartphones zusammenschrumpfen und zum wichtigsten Utensil der Alltagsmedizin.

Willkommen an Bord der „Enterprise“. Machen wir uns auf die Reise in die Zukunft der Menschheit, die unsere Kinder wahrscheinlich noch erleben werden. Denn laut einer Voraussage der befragten Forscher wird der Mensch seine Lebenszeit in naher Zukunft dramatisch verlängern. Wunderbar, denn in der Welt der Zukunft sind die Aussichten einfach überwältigend.

Ist der Mensch der Zukunft unsterblich?

Zumindest werden wir das Altern in 100 Jahren erheblich verlangsamen können und die unbequemen Auswirkungen nicht mehr so drastisch spüren. Noch optimistischer ist Informatiker Gerald Sussman: „Ich fürchte, dass ich zur letzten Generation gehöre, die noch sterben muss.“ Aber wie kann das möglich sein? Helfen sollen die Wunder der Biotechnologie. Ein wichtiger Schlüssel für diese neuen Möglichkeiten der Medizin ist die Entschlüsselung der menschlichen DNA.

Der Physiker Francis Crick und der Genetiker James Watson lasen im Jahr 1953 als erste Forscher den Code der DNA als Abfolge von Nukleinsäuren. Das Buch des Lebens war aufgeschlagen. Seitdem ging es rasend schnell, und es wird nicht mehr lange dauern, bis jeder Mensch das Buch seines eigenen Lebens auf einer DVD mit sich herumtragen kann.

Bereits um das Jahr 2050 kann es dann möglich sein, den eigenen Alterungsprozess aktiv zu steuern. Defekte Gene werden schneller repariert und so das Leben der Zellen verlängert. Helfen kann dabei eine Reduzierung der Kalorienzufuhr, ohne die Nachteile wie Hunger und Lethargie in Kauf zu nehmen. Um unsere Jugend in Zukunft länger zu erhalten, sind fünf Punkte wichtig, an denen derzeit intensiv geforscht wird:

1. Die Züchtung gesunder Organe, die erkrankte Organe ersetzen können.

2. Die Anregung des Selbstheilungsprozesses von erkrankten Zellen.

3. Die gezielte Aktivierung von Genen, die den Alterungsprozess verlangsamen.

4. Ein gesünderer Lebenswandel.

5. Die frühe Erkennung von Krebs.

Doch die eigene Natur steht dem Menschen auf dem Weg zur Unsterblichkeit im Wege. Denn dass der Mensch im Laufe der Evolution immer sterblich geblieben ist, hat seinen Grund. Entwicklungsgeschichtlich ist der Tod für die Gemeinschaft von Vorteil. Denn nach ihren fruchtbaren Jahren werden Tiere irgendwann zur Belastung für die Herde. Sie sterben – und verbessern so die Chancen der eigenen Nachkommen zu überleben. Doch eine Neuprogrammierung dieser evolutionären Gegebenheiten und eine dramatische Steigerung der Lebenserwartung scheinen möglich.

Ist das ewige Leben überhaupt wünschenswert?

Wird unsere Erde nicht unter der Last der Menschen zusammenbrechen? Die Forscher haben einige Gegenargumente für dieses populäre Untergangsszenario gefunden: Wahrscheinlich werden die Menschen der Zukunft viel weniger Kinder bekommen. Das ist heute schon in Europa und Japan der Fall. „Das stärkste Verhütungsmittel der Welt ist der Wohlstand“, schreibt Kaku und schätzt, dass die Weltbevölkerung nicht ewig rasant weiter wachsen wird, sondern sich im Jahr 2100 auf dem Niveau von etwa elf Milliarden Menschen stabilisieren wird.

Derzeit leben etwa sieben Milliarden Menschen auf der Erde. Auch Bill Gates, Mitbegründer von Microsoft, ist dieser Meinung: „Das Beste, was man tun kann, um das Bevölkerungswachstum zu bremsen, ist, den Gesundheitszustand der betreffenden Menschen zu verbessern. Da, wo die Leute gesünder werden, bekommen sie innerhalb einer halben Generation auch weniger Kinder.“

Die Ernährung der vielen Menschen könnte ebenfalls gesichert sein, wenn wir in Zukunft bereit wären, gentechnisch veränderte Lebensmittel zu akzeptieren. Derzeit wird über die Gentechnik in Lebensmitteln noch erbittert gestritten und aus einer vielversprechenden Technik eine Weltanschauungsfrage gemacht.

Gentechnisch veränderte Lebensmittel

Wahrscheinlich wird sich die Gentechnik am Ende trotzdem durchsetzen. Genauer gesagt, sie hat sich schon durchgesetzt. Genetisch modifizierte Organismen sind die am schnellsten wachsende neue Technologie in der Landwirtschaft. Auf 38 Prozent aller Landflächen der Erde werden derzeit Lebensmittel angebaut. In den 60er-Jahren hätten wir für dieselbe Menge Nahrung noch 80 Prozent der Erdfläche gebraucht. Wir lernen dazu. Und das immer schneller.

Ein Professor für Gesundheitswesen an der Columbia University in New York hat noch eine Lösung für Ernährungsprobleme parat: vertikale Farmen in Form von Wolkenkratzern. Die Studenten von Dickson Despommier haben ausgerechnet, dass 150 vertikale Farmen, die als Hydrokulturen betrieben werden, ausreichten, um New York City mit Nahrung zu versorgen. Parabolspiegel versorgen die Pflanzen mit Sonnenlicht, nachts übernehmen Wärmelampen.

Fast drei Viertel des Endpreises von Nahrungsmitteln entstehen durch Lagerung, Transport und Versand. Der Kopfsalat aus vertikalen Farmen könnte sich seine Reisekosten sparen. In Japan werden bereits „Pflanzenfabriken“ betrieben, in denen die Angestellten bis zu 20 Mal im Jahr Salat ernten. Auch in Schweden, China und Singapur wird an solchen Anbaubetriebe gearbeitet.

Jeder Tag beginnt mit einem Gesundheitscheck

Ein ganz normaler Tag in der Zukunft beginnt mit einem automatischen Gesundheitscheck: Schon beim Zähneputzen im Bad wird der menschliche Körper von Sensoren gescannt. Vielleicht direkt vom Hightech-Badezimmerspiegel. Besonders das Gen p53 wird dabei genau unter die Lupe genommen. Denn fast an der Hälfte aller häufigen Krebsarten ist eine Mutation an diesem Gen beteiligt. Sollte es Hinweise auf Krebs geben, werden sofort Nanopartikel in den Körper injiziert, die die Krebszellen direkt ausschalten.

Nach der Morgentoilette erhalten Sie in Zukunft direkt detaillierte Informationen über ihren Gesundheitszustand. Sollte ein Organ erkrankt sein, wird direkt ein neues bestellt, gezüchtet aus Ihren eigenen Zellen. Der Mediziner ist ein intelligentes Computerprogramm, das jede Krankheit diagnostizieren kann. Besser als jeder Arzt aus Fleisch und Blut. Wenn Sie wollen, können Sie jederzeit direkt mit ihm sprechen. Er wird einfach als Hologramm in Ihre Wohnung projiziert. Nur bei unerklärlichen Phänomenen wird ein „echter“ Arzt direkt mit seinem Patienten kommunizieren. „Direkt“ heißt in diesem Fall: per Videotelefon.

Für die perfekte medizinische Versorgung der Zukunft wird die Stammzellenforschung besonders wichtig sein. Denn Stammzellen haben die Fähigkeit, sich in jeden Zelltyp des menschlichen Körpers zu verwandeln. Sehr viele Krankheiten werden durch ihren Einsatz heilbar. Auch so komplizierte Krankheitsbilder wie Krebs im frühen Stadium und Rückenmarksverletzungen.

Forscher lassen Fingerspitzen nachwachsen

Stephen Badylak von der University of Pittsburgh ist es gelungen, mit der Hilfe von Stammzellen Fingerspitzen nachwachsen zu lassen, die abgeschnitten waren. Ein Zentimeter Gewebe und der Fingernagel konnten bereits regeneriert werden. Sein nächstes Ziel ist es, ein ganzes menschliches Glied nachwachsen zu lassen. Er hat sich lange und intensiv mit Eidechsen beschäftigt, bei denen ein verlorener Schwanz nachwächst.

Anthony Atala ist Gewebeforscher an der Wake Forest University und schon einen Schritt weiter: „Die Zahl der Patienten, die auf der Warteliste für Organtransplantation stehen, hat sich in den vergangenen zehn Jahren verdoppelt. Die Anzahl der Transplantationen blieb gleich. Wir konnten bisher bereits menschliche Ohren, Finger, Harnröhren und Herzklappen im Labor züchten.“ Sein nächstes Projekt ist das Züchten einer Niere. Sie ist eines der kompliziertesten menschlichen Organe. Er hofft dabei auf eine Technik, die gerade dramatische Vorschritte macht: das 3-D-Druckverfahren.

Es ist ein alter Menschheitstraum, alle Krankheiten und das Alter zu besiegen. So ganz und endgültig wird das wahrscheinlich nie gelingen. Dafür ist der Gegner zu schlau und mächtig. Wir leben in einem Meer von Bakterien und Viren, die sich rasend schnell verändern. Und auf diese Veränderungen müssen wir uns immer wieder einstellen. Bis heute haben wir es trotz großer Bemühungen noch nicht einmal geschafft, das Rhinovirus zu besiegen. Der Schnupfen wird uns wohl für alle Zeiten verfolgen.

Die Zukunft der Energie

Fossile Brennstoffe gehen zu Ende. Irgendwann. Das steht fest. Viele Weltuntergangsszenarien haben mit diesem misslichen Umstand zu tun. Auch wenn gerade in den USA sehr erfolgreich neue Methoden zum Abbau immer neuer Reserven angewendet werden. Aber trotzdem sind die Aussichten nicht so schlecht, wie manche vermuten. Denn es gibt Alternativen. „Das Steinzeitalter endete nicht, weil es keine Steine mehr gab. Und das Ölzeitalter wird enden, lange bevor der Welt das Öl ausgeht“, sagt James Canton, Leiter der in San Francisco ansässigen Denkfabrik Institute for Global Futures.

Aber wo versteckt sich diese Energie der Zukunft? Am Himmel! In den Sternen! In diesem Jahrhundert werden wir die Energie der Sterne anzapfen, schreibt Kaku, also die der Sonne zum Beispiel. Eine andere Hoffnung ist drastische Energieeinsparung. Durch Magnetismus etwa werden Autos, Züge und Skateboards in Zukunft reibungslos durch die Luft gleiten. Der Energieverbrauch würde drastisch sinken.

Der Verbrauch von fossilen Brennstoffen verändert gerade unser Klima. Über das Ausmaß und die Gefährlichkeit streiten die Experten. Die Gefahren sieht auch Kaku, weil diese Methode der Energiegewinnung derzeit immer noch die günstigste ist. Besonders tief gelegene Gebiete wie das Mekongdelta in Vietnam, das Gangesdelta in Bangladesch oder das Nildelta sind durch den Anstieg des Meerwasserspiegels gefährdet. Doch Kaku sieht auch hier einen Ausweg: Mitte des Jahrhunderts sollten wir dank einer Kombination aus Kernfusion, Sonnenenergie und erneuerbaren Energien die Erwärmung stoppen können.

Umweltfreudliche Kernfusion wird ab 2030 Wirklichkeit

Die Energie, die bei einer Kernfusion frei wird, ist gewaltig – und nicht zu verwechseln mit der Kernspaltung, die wir in Deutschland gerade abgeschafft haben. Der Treibstoff für die Fusion ist Wasser. Ein Viertelliter davon enthält genauso viel Energie wie 500.000 Barrel Petroleum. Durch Kernfusion wird das gesamte Universum mit Energie versorgt. Wasserstoffgas wird unter gewaltigem Druck erhitzt, bis die Atomkerne miteinander verschmelzen. Diese Naturgewalt lässt auch die Sterne strahlen.

Bis heute ist es immer nur bei der Ankündigung dieser sagenhaften Technik geblieben. Kaku geht davon aus, dass im Jahr 2030 endlich die Zeit dafür gekommen ist. Fusionsreaktoren werden entstehen, die umweltfreundlich betrieben werden können, weil kaum Abfall entsteht und es nicht die Gefahr einer Kernschmelze gibt. „Wir wissen inzwischen, dass eine Fusion funktioniert. Die Frage ist nur, ob sie sich rechnet“, sagt David E. Baldwin von General Atomics, der einen der größten Reaktoren beaufsichtigt. Doch das scheint nur eine Frage der Zeit zu sein.

Forscher des Deutschen Luft- und Raumfahrtzentrums haben ausgerechnet, dass der Energieverbrauch der Welt für alle Zeiten gedeckt wäre, wenn wir es schaffen würden, die Energie der Sonnenstrahlung anzuzapfen, die die Wüste in Nordafrika jeden Tag abbekommt. Afrika wäre auf einen Schlag größter Energieexporteur der Welt. Die Leistung von Fotovoltaik-Anlagen, die aus Sonne Strom machen können, nimmt immer schneller zu. Die Kosten sinken durch bessere Herstellungstechniken.

Vielleicht haben Sie keine Lust, Ihr Hausdach mit hässlichen Sonnenkollektoren zu verschandeln. Dann gibt es wahrscheinlich schon bald eine Lösung. „Statt dass Sie Ihr ganzes Dach zur Fotovoltaik-Zelle umbauen müssen, genügen in Zukunft kleine Antennen, die die Photonen anziehen“, sagt Dr. Michael Strand, Leiter eines Forschungsteams. Vielleicht werden aus unseren Fenstern und Hauswänden Sonnenkollektoren, die auch aus künstlichem Licht Strom machen können.

Magnetautos gleiten widerstandslos über den Boden

In fernerer Zukunft liegt das Zeitalter des Magnetismus. Der größte Teil unserer Energie wird heute lediglich dazu verwendet, Reibungskräfte zu überwinden. Reibung zwischen Straßenbelag und Rädern, zwischen Karosserie und Luft. Bei Magnetautos wäre das anders. Einmal angeschoben, gleiten sie widerstandslos über den Boden, wenn auch der Luftwiderstand bleibt.

Lösungen für diese Technik liefert die Supraleitung. Die Kosten dafür sinken gerade rasant, und man muss die Umgebung nicht mehr so stark kühlen wie noch vor ein paar Jahren. Mit Supraleitungen könnten wir sehr starke, permanente Magnetfelder erzeugen. Im Labor kann man damit bereits Gegenstände zum Schweben bringen. Auf Youtube gibt es hübsche Videos zu diesem Thema. Bis zum „Hoverboard“ aus dem Film „Zurück in die Zukunft“, einem Skateboard, das scheinbar schwerelos durch die Luft gleitet, ist es also nicht mehr so weit, wie wir annehmen.

Zum Ende des Jahrhunderts könnte unsere Energie schließlich aus dem All kommen. Satelliten sollen in der Umlaufbahn die Sonnenstrahlung auffangen, bündeln und dann auf die Erde schicken. Schon jetzt spricht technisch nichts gegen diese Technik. Nur die hohen Kosten, um Satelliten in die Umlaufbahn zu bringen.

„Die Solarenergieproduktion im Weltraum könnte eine wichtige alternative Energiequelle sein, wenn fossile Brennstoffe verschwinden“, meint Kensuke Kanekiyo vom Institut für Energiewirtschaft, das im Auftrag der japanischen Regierung forscht. Für diese Pläne braucht es allerdings eine neue Generation von wirtschaftlichen Trägerraketen, die die Kosten senken.

Der Mensch ist in Zukunft also nicht auf fossile Brennstoffe angewiesen. Die größte Energiequelle leuchtet jede Nacht am Sternenhimmel über uns. Wir müssen nur noch lernen, sie anzuzapfen.

Die Zukunft der Roboter und Computer

Das Tempo der modernen Welt wird vom Tempo der Computer bestimmt. Und deren Geschwindigkeit nimmt stetig zu. Die Rechenkapazität von Computern verdoppelt sich etwa alle 18 Monate. Computerspielkonsolen besitzen heute mehr Rechenleistung als die Großrechner in vergangenen Jahrzehnten.

Ein Smartphone verfügt über mehr Leistung als die Nasa im Jahr der Mondlandung 1969. In Zukunft werden wir von Computern umzingelt sein. Sie sehen allerdings anders aus oder sind überhaupt nicht mehr als Computer zu erkennen. In jedem Gegenstand, jeder Wand, jeder Tapete, in allem, was uns umgibt wird sich ein rechnender Chip befinden, der mit dem Internet verbunden ist.

Unsere Umgebung wird intelligent. Ein Raum wird bemerken, wenn Sie ihn betreten, und er wird dafür sorgen, dass an den richtigen Stellen das Licht angeht und eine vernünftige Temperatur herrscht. Computer gehen in unserer Umgebung auf und werden unsichtbar und lautlos unsere Befehle ausführen, weil wir sie mit unseren Gedanken steuern.

Auch Autos werden intelligent. Sie fahren, vom Computer gelenkt, ohne Lenkrad und Fahrer zu ihrem Ziel. Die Insassen können sich entspannen, sich unterhalten oder einen Film schauen. Unfälle und Staus gehören der Vergangenheit an. GPS-Systeme und Radar sorgen für freie Fahrt und lenken sicherer durch den Verkehr der Zukunft, als es ein Mensch jemals könnte.

Das Telefon wird durch Telepräsenz ersetzt

Unsere Gesprächspartner erscheinen als 3-D-Hologramme, die mit uns an einem Tisch oder auf dem Sofa sitzen. Zum Glück gibt es einen Universalübersetzer, der dafür sorgt, dass Sie sich mit Menschen aus aller Welt unterhalten können.

Am Ende dieser rasanten Entwicklung wird der menschliche Geist in der Lage sein, Objekte in der Umgebung zu beherrschen, und der Computer wird in jeder Millisekunde unsere Wünsche lesen können – noch bevor sie uns bewusst werden. Wir erschaffen uns in jedem Moment unseres Lebens eine individuelle Umwelt. Eine Mischung aus virtueller und realer Welt. Objekte tragen neben einem Chip einen winzigen Supraleiter in sich, der magnetische Pulse erzeugt. Dadurch lassen sich Dinge allein mit der Kraft unserer Gedanken durch den Raum bewegen. Auch Roboter und Avatare.

Die Intelligenz von Robotern und Maschinen nimmt stetig zu. Eine menschliche Gestalt werden sie wohl nicht annehmen. Ihre Form verändert sich je nach Aufgabe, weil sie aus Modulen bestehen, die sich in immer neuen Formen zusammensetzen. So können sie verschiedenste Aufgaben für die Menschen erledigen. Sie reparieren die Infrastruktur der Städte. Rohrsysteme, Leitungen, Straßen und Brücken. Sie werden als Chirurg, Koch oder Musiker eingesetzt. Überall, wo größte Präzision gefragt ist. In Japan gibt es jetzt schon einen Roboterkoch, der Fast-Food-Gerichte zubereiten kann, und einen Roboter, der sehr gut Flöte spielt.

Simulation des menschlichen Gehirns

Das menschliche Gehirn ist im Gegensatz zum Computer unendlich langsam. Aber es kann Dinge, die dem Computern schwerfallen: Es behandelt viele Probleme gleichzeitig, es kann sehr schnell Muster erkennen, und es lernt ununterbrochen. Trotzdem sind die führenden Forscher auf diesem Gebiet sicher, dass Mitte des Jahrhunderts genügend Rechenleistung zur Verfügung steht, um das menschliche Gehirn zu simulieren. Bis jetzt haben wir allerdings noch nicht einmal vollständig verstanden, wie der Mensch denkt. Wir wissen nicht, wie ein Wurm funktioniert, obwohl die Lage sämtlicher Neuronen in seinem Nervensystem kartografiert ist.

Auch auf die Frage, wann Computer eine Art Bewusstsein entwickeln könnten, gibt es keine Antwort oder Vorhersage. Bis jetzt ist noch nicht einmal geklärt, was Bewusstsein eigentlich ist. Ein paar Dinge gehören wohl dazu: detailliertes Wahrnehmen der Umwelt, Selbstwahrnehmung, lautlos mit sich selber sprechen, Interaktion und vorausschauendes Handeln. In diesen Disziplinen sind Computer bis heute eher schlecht.

Ist es trotzdem denkbar, dass wir gerade an unserem Nachfolger in der Evolutionskette basteln? Werden uns die Computer überflügeln und irgendwann intelligenter als der Mensch sein? Wird am Ende die Erde zu einem gigantischen Supercomputer, weil die Maschinen die Macht übernehmen? Es gibt Forscher, die sich so ein Szenario ernsthaft vorstellen können. Danach werden unsere Computer am Ende ins All hinausziehen und andere Planeten, Sterne und Galaxien in Supercomputer verwandeln.

Computer, weit über menschlicher Leistungsfähigkeit

Prominenter Vertreter dieser eher abstrakt erscheinenden Idee ist der bekannte Unternehmer, Erfinder und Bestseller-Autor Ray Kurzweil. Ein Mann, der nicht gerade als Spinner gilt und quasi nebenbei den Synthesizer erfunden hat. Kurzweil meint, dass bereits im Jahr 2045 intelligente Computer sich selbst reproduzieren und die menschliche Leistungsfähigkeit weit übertreffen werden.

Mit ihrem unstillbaren Appetit nach immer mehr Energie verschlingen sie irgendwann alles und beeinflussen die Geschichte des Universums. Er nennt das „Singularität“. Inzwischen wurde eine Universität im kalifornischen Silicon Valley gegründet, die sich mit genau dieser Frage auseinandersetzt.

Eine andere Denkrichtung geht davon aus, dass der Mensch mit der Maschine verschmelzen wird. Unser Gehirn verbessert durch eine zusätzliche Schicht von Neuronen seine Leistungsfähigkeit. Ein erster matter Abglanz dieser Technik ist eine Brille, die uns Daten, Zahlen, den richtigen Weg und eine Gesichtserkennung direkt auf die Netzhaut projizieren kann. Dazu kommen elektronische Bauteile, die dem menschlichen Körper übernatürliche Fähigkeiten verleihen.

Vielleicht entledigen wir uns irgendwann komplett unserer plumpen Körper, und der menschliche Geist wird zu einem Computerprogramm, das man auf verschiedene Maschinen herunterladen kann.

Die Zukunft der Dinge

Was unterscheidet den Menschen von den meisten Tieren? Unter anderem, dass der Mensch alle Arten von Werkzeugen benutzt. Durch Pfeil und Bogen, später die Beherrschung von Schusswaffen, konnten immer mehr Menschen mit Nahrung versorgt werden. Die Hütten aus Lehm und Stroh wurden mit der Metall-Herstellung von größeren, solideren Gebäuden abgelöst.

Bereits jetzt steht uns das mächtigste Werkzeug zur Verfügung, das Menschen jemals besessen haben. Wir sind in der Lage, einzelne Atome, also den Grundbaustoff aller Dinge, die uns umgeben, zu manipulieren. So ist heute etwa die Produktion von Materialien denkbar, von denen wir bisher nur geträumt haben: Sie werden immer leichter, stärker, können mit neuen elektrischen und magnetischen Eigenschaften ausgestattet werden.

Per Rasterelektronenmikroskop lassen sich einzelne Atome beobachten und manipulieren. Noch vor wenigen Jahren erschien es ausgeschlossen, jemals ein Atom beobachten zu können. Jetzt können wir sogar mit ihnen spielen. Bei IBM in San Jose basteln Wissenschaftler aus einzelnen Atomen primitive Maschinen. Mit einer mikroskopischen Pinzette ist es möglich, ein einzelnes Atom an eine andere Stelle zu bewegen und zum Beispiel den Schriftzug IBM zu bauen.

Nanotechnologie eröffnet fantastische Möglichkeiten

Auch die US-Regierung glaubt an die Nanotechnologie in fast allen Bereichen der Wirtschaft und hat im Jahr 2009 1,5 Milliarden Dollar für die Forschung zur Verfügung gestellt. Wenn man Kaku folgt, hat sie gute Gründe dafür: „Mithilfe der Nanotechnologie ließe sich vielleicht bis Ende des Jahrhunderts eine Maschine bauen, die alles aus fast nichts schaffen kann.“

In der Medizin gäbe es für Nanogeräte sicherlich sinnvollere Einsatzgebiete. Moleküle, die aus Nanopartikeln bestehen, könnten Antikrebsmedikamente im menschlichen Körper an den Einsatzort bringen. So ließen sich die Nebenwirkungen der Chemotherapie, die auch gesunde Zellen angreift, eingrenzen. Nanopartikel würden von normalen Blutzellen abprallen und nicht in sie eindringen, weil sie zu groß dafür sind. Krebszellen weisen große, unregelmäßige Poren auf. Eine perfekte Angriffsfläche. Dort würden die Partikel gezielt die Wirkstoffe freisetzen.

Eine weitere medizinische Einsatzvariante ist das sogenannte „Nanoauto“, an dem jetzt schon geforscht wird. Ziel ist es, die Chirurgie komplett abzuschaffen. Reparaturarbeiten werden durch diese Nanoautos direkt im menschlichen Körper erledigt. Sie werden von Magneten durch unseren Blutstrom direkt zum erkrankten Organ gelenkt und führen hier operative Eingriffe aus.

Forscher bei Intel arbeiten daran, die Gestalt der Dinge auf Knopfdruck zu verändern. Computerchips in der Größe eines Sandkorns richten sich durch elektrische Ladungen aus und nehmen so unterschiedlichste Gestalt an. Wenn Ihnen ihr Handy zu groß für ihre Jackentasche ist, verkleinert es sich. Oder es entfaltet sich zu einem Bildschirm, auf dem Sie einen Film anschauen können. Schon jetzt denken Autodesigner darüber nach, wie man aus einem familienfreundlichen Kombi per Schalter ein Sportcoupé für das Wochenende machen kann.

Eine Software verändert die Gestalt von Möbel

Am Ende dieser Entwicklung steht die programmierbare Materie. Software bestimmt die Gestalt der Dinge. Unsere Umgebung wird die gewünschte Form annehmen, bevor wir darüber nachdenken, weil ein Computer unsere Gedanken liest und die Ausrichtung der Materie bestimmt. Wenn ein Möbelstück nicht mehr gefällt oder die Waschmaschine nicht mehr funktioniert, laden Sie eine neue Software herunter, die die Materie zu einem neuen, besseren Gerät programmiert. Kaku glaubt, dass man in Zukunft sogar ganze Häuser und Städte auf Knopfdruck entstehen lassen kann. Man muss nur noch die Lage festlegen und die Fundamente vorbereiten.

Besonders mutige Forscher halten es für möglich, dass Ende des Jahrhunderts eine universelle Maschine entsteht, die sie „Replikator“ genannt haben. Der „Replikator“ setzt alle denkbaren Produkte aus einzelnen Atomen und Molekülen zusammen. Dazu braucht es Nanoroboter, die sich selber reproduzieren können.

Klingt unglaublich, kommt aber in der Natur vor. Die Darmbakterien des Menschen reproduzieren sich in nerhalb von zehn Stunden selbst. Die Nanoroboter müssen dann Moleküle identifizieren und an bestimmten Stellen platzieren. Und dafür ordnen sie eine astronomische Anzahl von Atomen nach einem Masterplan neu an. Diese Nanobots sind derzeit noch nicht in Sichtweite.

Und es gibt natürlich Forscher, die energisch bestreiten, dass es diese Universalmaschinen jemals geben kann. Aber denken Sie an die Kraft des exponentiellen Wachstums. Und die mächtige Idee, dass wir in Zukunft alle Materie per Gedankenkraft beherrschen, wird dafür sorgen, dass wir vielleicht viel schneller eine Art „Replikator“ entwickeln können, als wir heute glauben.

Die Zukunft der Menschheit

Wohin gehen wir? Was ist das Ziel dieser Entwicklungen? Kaku ist sich sicher: All diese technischen Neuerungen führen die Menschheit auf den Weg in eine planetarische Zivilisation. Das heißt, dass wir auf unserem Planeten zusammenrücken. Wir werden uns ähnlicher. Wohlstand wird gleichmäßiger verteilt sein, Ländergrenzen lösen sich langsam auf.

Das Internet ist ein mächtiger Beschleuniger dieser Entwicklung zu einer planetaren Zivilisation. Wir können uns zum ersten Mal in der Geschichte der Menschheit in Echtzeit rund um den Erdball austauschen. Per Twitter unterhält man sich über alle Grenzen hinweg in jeder Sekunde. Ideen werden per Link in Lichtgeschwindigkeit ausgetauscht, verbessert und umgesetzt. Als planetare Sprachen nutzen wir jetzt schon Englisch und Chinesisch.

In der Wirtschaft sprechen wir von der Globalisierung – leider häufig nur im negativen Sinn. Kein Land kann es sich mehr erlauben, seine Wirtschaft isoliert zu betrachten und zu betreiben. Innerhalb einer Generation ist ein planetares Wirtschaftssystem entstanden. Rund um den Erdball wächst eine internationale Mittelschicht.

Auch die Menschen in China, Indien wollen so leben, wie sie es in vielen Filmen gesehen haben. Mit Doppelgarage, einer heißen Dusche, Flachbildfernseher im gemütlichen Wohnzimmer und Coffee to go auf dem Weg zur Arbeit. Ihr Ziel ist der Wohlstand, den wir in weiten Teilen Europas und Amerikas schon erreicht haben. Und sie sind auf einem guten Weg, dieses Ziel zu erreichen. Andere Länder werden folgen.

Die Kultur ist heute schon global. Junge Leute in Moskau, Berlin, New York und Tokio hören die gleiche Musik, kleiden sich ähnlich, konsumieren das gleiche Essen und sehen die gleichen Filme. Nachrichten, Sportereignisse, Umweltprobleme, Tourismus und Krankheiten sind längst globale Erscheinungen.

Am Ende könnte eine Weltregierung herrschen

Kriegführung wird immer schwieriger. Die Menschen haben heute mehr zu verlieren, die Familien sind kleiner und Nationalstaaten werden schwächer. Die Macht verlagert sich weg von National- und hin zu Zentralregierungen. In Europa ist diese Entwicklung in vollem Gange. Und wie man sieht, ist das kein einfacher Schritt. Aber gemeinsame Währung, Sprache und Kultur machen ihn unausweichlich. Am Ende dieser gesellschaftlichen Evolution kann schließlich so etwas wie eine Weltregierung stehen. Auch wenn Terroristen, religiöse Fundamentalisten oder Diktatoren etwas dagegen haben.

Diese Entfaltung kann sich sogar noch beschleunigen, wenn wir im All auf intelligentes Leben stoßen. Und dafür gibt es einige Hinweise. Kaku geht davon aus, dass wir noch in diesem Jahrhundert auf Grund der rasanten technischen Entwicklung eine fortgeschrittene Zivilisation im All entdecken. Eine direkte Kommunikation wäre wegen der gigantischen Entfernung allerdings unmöglich. Dafür ist dann auch die Kriegsführung nicht ganz einfach. Wieder keine Chance für Weltuntergangspropheten.

Unsere Zivilisation lässt sich sehr gut anhand des Energieverbrauchs beschreiben. Der Mensch lebte Jahrtausende durch die Kraft seiner zwei Hände. Es war die meiste Zeit der Menschheit ein kurzes und brutales Leben. Das Ende der Eiszeit vor 10.000 Jahren ermöglichte endlich die Landwirtschaft mit Rindern und Pferden. Dadurch gab es genug Energie für die Ernährung von mehr Menschen und die Gründung von Dörfern und Städten. Vor 300 Jahren erlebte der Mensch die industrielle Revolution. Maschinen lieferten noch mehr Energie. Zukünftige Generationen werden die Energie der Sonne, der Sterne und schließlich der gesamten Galaxie nutzen.

Internet als größter Wissensspeicher aller Zeiten

Heute entstehen Reichtum und Wohlstand aus Information. Davon gibt es derzeit mehr, als wir verarbeiten können. Schon bald wird der Speicherplatz nicht mehr ausreichen, um all die Daten zu verarbeiten, die wir in nie da gewesener Menge weltweit produzieren. Der Zugang zum Internet ist mittlerweile für fast jeden erschwinglich. In Afrika wächst die Flächenabdeckung von drahtlosen Netzen exponentiell. Noch in diesem Jahr werden 70 Prozent der Fläche versorgt sein.

Und bis Ende des Jahres ist der größte Teil der Menschheit an das Internet angeschlossen und damit an das universelle Kommunikationsnetz und den größten Wissensspeicher aller Zeiten. Diese Vernetzung kann ein Schlüssel für all die Entwicklungen sein, die Kaku in seinem Buch nachzeichnet. Eines lässt sich allerdings nicht technisch herstellen: menschliche Vernunft. Die müssen die Menschen zum Gelingen einer besseren Zukunft selbst beitragen.

Was meinen Sie? Glauben Sie, dass aus den Szenarien des Michio Kaku irgendwann Wirklichkeit wird? Wird alles genau so eintreffen, wie er es in seinen Büchern entworfen hat? Vielleicht ist das im Detail gar nicht so entscheidend. Der Autor und Wissenschaftler zeigt uns verschiedene Wege in die Zukunft auf, die die Menschheit zur Abwechslung mal nicht unwillkürlich in die Katastrophe führen müssen. Damit trifft er allerdings auf eine Gesellschaft, in der es in vielen Milieus zum guten Ton gehört, sich eher technik- und fortschrittskritisch zu geben. Wenn in Deutschland drei Minuten über das Internet diskutiert wird, ist man leider häufig schon beim Datenschutz oder Facebook-Mobbing angelangt.

Eine sachliche Gesprächsrunde über Vorteile der Gentechnik bei der Produktion von Lebensmitteln scheint in Deutschland ausgeschlossen. Kürzlich vermeldete die saarländische Landesregierung den Beitritt des Landes zum „Europäischen Netzwerk gentechnikfreier Regionen“. Die Landesregierung will gemeinsam mit anderen „eine Strategie entwickeln, damit das Saarland auch in Zukunft eine gentechnikfreie Anbauregion bleibt“.

Auch die Länder Thüringen, Nordrhein-Westfalen, Schleswig-Holstein und Baden-Württemberg haben sich dem Netzwerk angeschlossen. Ist das noch begründete Vorsicht oder schon Technologiefeindlichkeit? Eine Entscheidung gegen Gentechnik bedeutet auch, dass man unausgesprochen die Verknappung von Lebensmitteln hinnimmt. Das wird in dieser sehr emotional geführten Diskussion gern verschwiegen.

Nahrung gibt es offenbar genug – zumindest in Deutschland. Aber viele deutsche Wissenschaftsinstitutionen wie die Deutsche Forschungsgemeinschaft oder die Fraunhofer-Gesellschaft haben sich mehrfach und nachdrücklich für die Nutzung von Gentechnik ausgesprochen. Bisher gibt es keine bessere Lösung, um die Erde mit ihren vielen Menschen zu ernähren.

Kaku ist nicht nur Wissenschaftler. Er ist auch ein trickreicher Sachbuchautor. Er hat ein paar ziemlich gute Aussichten für uns parat, und das verkauft sich ziemlich gut. Für viele Zeitgenossen mag das vielleicht der eine oder andere Löffel Optimismus zu viel sein. Aber er versammelt in seinem Buch „Die Physik der Zukunft“ ein paar sehr gute Gründe für seine zuversichtliche Grundhaltung.

Es tut gut, einem Wissenschaftler zuzuhören, der fest davon überzeugt ist, dass der Mensch mit seinem Instrumentarium in der Lage ist, die übermenschlichen Probleme zu lösen. Wir Menschen haben nichts anderes als unser Gehirn, unsere Vernunft und die Wissenschaft. Aber das ist schon eine ganze Menge.

Schwimmende Städte
zgbdc5-66voci3mtig5blo81eg_original.jpg

Das Seasteading Institut hat 2009 zu einem Wettbewerb aufgerufen, in dem die schwimmende Stadt der Zukunft entworfen werden sollte.

Foto: The Seasteading Institute

Quelle: http://www.welt.de/wissenschaft/article112447946/Die-Zukunft-der-Menschheit-wird-fantastisch.html

Preisdiskrimierung, Big Data und objektiv richtige Preise einer Ware aufgrund persönlicher Werteinschätzung

Amazon Echo schlägt Wellen

Netzwerklautsprecher mit integriertem Personal Assistant

Original: http://www.heise.de/newsticker/meldung/Amazon-Echo-Netzwerklautsprecher-mit-integriertem-Personal-Assistant-2443970.html?hg=1&hgi=3&hgf=false

Bild: Amazon.com

 

Amazon hat überraschend „Echo“ vorgestellt, einen Netzwerklautsprecher, der nicht nur Musik abspielt, sondern permanent lauscht und auf Zuruf Fragen beantwortet, die Todo-Liste ergänzt und mehr.

Die knapp 24 cm hohe schwarze Röhre mit einem Durchmesser von 8,3 cm „Amazon Echo“ ist einerseits ein über Bluetooth und WLAN verbundener Netzwerklautsprecher, der sich von Smartphone oder Tablet mit Musik von iTunes, Pandora und Spotify beschicken lässt. Andererseits wartet der mit Fernfeldmikrofonen und Spracherkennung ausgerüstete Echo auf Sprachkommandos. Dabei betont Amazon in dem Werbespot, dass man Echo nicht anbrüllen muss, sondern „überall im Raum gehört“ wird. Wie Apples „Siri“ und Googles „Ok, Google“ aktiviert ein offenbar wählbares Keyword – im Video „Alexa“ – den integrierten Personal Assistant. Alternativ soll man den Echo offenbar mit der dem Amazon Fire TV beiliegenden Voice Remote mit Mikrofontaste steuern können.

Laut Amazon gibt Echo Auskunft über das Wetter, buchstabiert Wörter, stellt den Wecker, spielt die Lieblingsmusik, rezitiert Wikipedia-Einträge, fügt der Einkaufsliste Einträge hinzu und so weiter. Das „Hirn“ von Echo sind die Amazon Web Services, über die ständig neue Kommandos ergänzt werden sollen.

Hochwertige Bass-Lautsprecher (2,5 Zoll) mit Bassreflex und Hochtöner (2 Zoll) versprechen klaren und verzerrungsfreien omnidirektionalen Klang. Über die Sprachsteuerung sollen sich Amazon (Prime) Music, iHeartRadio und TuneIn steuern lassen, während Musik von Spotify, iTunes und Pandora nur über Bluetooth von Mobilgeräten aus übertragen werden.

Amazon Echo gibt es vorerst nur in den USA zum Preis von 199 US-Dollar; Prime-Kunden müssen nur 99 US-Dollar bezahlen. Noch kann aber nicht jeder Echo kaufen. Interessenten müssen sich bewerben und bekommen eine Mail von Amazon, falls das Los auf sie fällt.

http://www.youtube.com/watch?v=KkOCeAtKHIc

Fans von „Star Trek: The Next Generation“ („Computer … !?) werden ihre Freude an dem Gerät haben, wenn Echo das verspricht, was der Werbeclip von Amazon suggeriert. Auf Privatsphäre bedachte Naturen werden hingegen einen großen Bogen um den Lautsprecher machen und keinesfalls eine Amazon-Wanze im Wohnzimmer dulden – Microsoft kann ein Lied davon singen, man erinnere sich an das Horch und Guck der Xbox One. Ob es da hilft, dass Echo einen Schalter zum Deaktivieren des Mikrofon-Arrays besitzt, ist fraglich.

Doch was bezweckt Amazon mit dem Echo? Es wird ja kaum darum gehen, Eric Schmidt bloß zu beweisen, dass Amazon tatsächlich Googles größter Konkurrent ist. Sollte es wie beim Kindle (Fire) oder Fire TV (Stick) vor allem darum gehen, Amazons Inhalte besser an die Kunden zu bringen – also Bücher, Videos und Musik? Wohl kaum. Im Endeffekt dürfte es darum gehen, den Nutzern den perfekten Shopping-Assistenten an die Seite zu stellen, der die ausgesprochenen Wünsche direkt in dem Amazon-Einkaufskorb platziert …

Das liegt wohl an den grundverschiedenen Voraussetzungen. Star Trek
spielt aus menschlicher Sicht in einer Zukunft, in der die Existenz
des Individuums (zumindest der Spezies Mensch) ganz
selbstverständlich durch die allgemeinverfügbare Technik gesichert
ist. An der Stelle läßt sich die Technik wirklich nur noch gemäß der
Überzeugung der Spackeria (die sind seit NSA auch recht still
geworden) im Rahmen von Peinlichkeit gegen andere einsetzen. Heute
geht es dagegen darum, das Individuum bis an die Grenze der
(ökonomischen) Existenzfähigkeit auszupressen – und manchmal auch
darüber hinaus! (Source: http://www.heise.de/newsticker/foren/S-Re-Reaktion-der-Science-Fiction-Fans/forum-287951/msg-26049853/read/)

Die Zeit ergänzt: http://www.zeit.de/wirtschaft/2014-10/absolute-preisdiskriminierung

Jeder hat seinen Preis

Unendlich viele Preise für ein Produkt: Einer der größten kapitalistischen Träume ist gerade dabei, in Erfüllung zu gehen. Big Data macht es möglich

Konsum

Supermarkt  |  © dpa

Die Zeichen dafür, dass sich das kapitalistische Nirwana nähert, mehren sich. Florian Stahl sieht sie überall. Beim Einkauf im Netz, in den USA, in Deutschland. Beispielsweise kürzlich in New York, als sich der Professor für quantitatives Marketing an der Universität Mannheim bei Booking ausloggte, die Cookies löschte, dann seine Hotelanfrage noch einmal startete, diesmal anonym. Da war das gleiche Zimmer plötzlich günstiger. Weil der Algorithmus ihn nicht mehr identifizieren konnte, schlug er ihm einen anderen Preis vor. „Die Preismechanismen sind dabei, sich zu ändern“, sagt Stahl, „und zwar fundamental“.

Im Prenzlauer Berg, im Kaiser’s Supermarkt in der Winsstraße, wo sich Berlin früher zum Flirten traf, drängen sich die Kunden vor einem roten Ständer mit einem Bildschirm. Sie halten eine Karte vor den mannshohen Apparat. Weißes Licht streichelt ihre Hände, ihre Extrakarte wird gescannt, ein leises Summen begleitet das Erscheinen des Bons, darauf ihre Preisabschläge. Dann bin ich dran.

Ich checke ein in die Beta-Phase der dritten industriellen Revolution. Bald soll mich der Kaiser’s Algorithmus komplett verstehen, meine Wünsche vorhersagen können, noch aber kennt er mich nicht. Er hat bisher keinen einzigen Kassenzettel von mir gescannt, nur meine neue Extrakarte. Es ist ein Riesenerfolg in der Kundenkarten-Welt: Ein Drittel der Stammkäufer wurde in den ersten zwei Monaten seit der Einführung Nutzer. Mein Ausdruck zeigt „Ihre persönlichen Angebote heute“: Je 20 Prozent Abschlag für Harry Brot (noch nie gehört) und Bärenmarke Alpenfrische Vollmilch (dachte, die machen nur Kaffeesahne); für Barilla Nudeln gibt es 30 Prozent, für Ritter Sport und Lätta Margarine sogar 40.

Jeder zahlt einen anderen Preis

Hannes Grassegger ist Ökonom und schreibt unter anderem für "Brand Eins", DIE ZEIT oder die "NZZ". Er hat einen Essay über den Umgang mit dem neuen Datenkapitalismus veröffentlicht: "Das Kapital bin Ich"  |  © Kein & Aber

Hannes Grassegger ist Ökonom und schreibt unter anderem für „Brand Eins“, DIE ZEIT oder die „NZZ“. Er hat einen Essay über den Umgang mit dem neuen Datenkapitalismus veröffentlicht: „Das Kapital bin Ich“ | © Kein & Aber  |  © Kein & Aber

Vor gut hundert Jahren beobachtete Arthur Cecile Pigou, Professor in Cambridge, ein seltsames Phänomen: Er sah ins Herz des Kapitalismus und es war leer. Der Preis, um den sich die Marktwirtschaft als System der freien Preise dreht, existierte in Wahrheit gar nicht. In The Economics of Welfare von 1920 beschrieb Pigou seine Beobachtung im Kapitel Das spezielle Problem der Eisenbahntarife: Für eine identische Leistung, die gleiche Bahnfahrt von A nach B, zahlten Menschen freiwillig verschiedene Tarife, je nach Klasse. Pigou erkannte, was für die Ökonomie heute so elementar ist wie die Unschärferelation für die Physik: Es gibt keinen objektiv richtigen Preis einer Ware. Es gibt einzig persönliche Werteinschätzungen.

„Preisdiskriminierung“ nannte Pigou die Unterscheidung von Menschen nach den Preisen, die sie für das gleiche Produkt zu zahlen bereit sind. Für Händler ist sie eine wunderbare Möglichkeit, mehr für die gleiche Leistung zu kassieren. In der Vollendung der Preisdiskriminierung, der „Preisdiskriminierung ersten Grades“ könnten Anbieter, so Pigou, jedem einzelnen Käufer einen Höchstpreis für die Bahnfahrt setzen – und ihm so alles abnehmen, was er überhaupt zu zahlen bereit ist. Fortan lernte jeder Ökonomiestudent die totale Preisdiskriminierung als den heiligen Gral des Kapitalismus kennen.

An der Kaiser’s Kasse zeige ich die Extrakarte, piep!, registriert. Jeder Kauf verändert meine zukünftigen Preise: Ladenpreis minus persönlicher Rabatt. Erst einmal bin ich auf keines der Angebote eingegangen, weder Lätta noch Ritter Sport. Am Scanner hole ich mir den nächsten Bon. Wieder das gleiche Angebot. Dreimal muss ich da durch. Dann ist der Algorithmus angeblich soweit.

Personalisierte Angebote als letzte Möglichkeit, den Umsatz zu steigern

Fixe Preise schaffen einen versteckten Sozialvertrag, wie einheitliche Krankenkassenprämien. Hinter Einheitspreisen in Supermärkten, Bahnhöfen und Drogerien steckt ein Gesellschaftskonzept: Alle Käufer sollen gleich sein.
Einheitspreise schaffen Gewinner und Verlierer – dem einen ist etwas eigentlich mehr wert, dem nächsten ist es fast zu teuer. So subventionieren wir einander, vom Joghurtkauf bis zur Taxifahrt. Am meisten profitiert der Durchschnittsmensch. Im Massenmarkt seien personalisierte Preise bislang technisch unmöglich, sagt Florian Stahl, weil sie das Wissen über die Wertschätzung des Käufers für ein bestimmtes Produkt zu einem bestimmten Moment voraussetzten. In diese Wertschätzung könne theoretisch alles einfließen, bis hin zur Wetterlage, wie bei Eis oder Jacken. „Den individuellen Höchstpreis zu erkennen, ist eigentlich ein unendliches Problem“, sagt Stahl.

Lange entsprachen Preise im Alltag dem geschätzten Wert dessen, was unterschiedliche Käufer im Schnitt zu zahlen bereit waren. Bis die Computer kamen, das Internet, Facebook, Google, Scanner, Produkt-IDs, In-Store-Cams, Smartphones – ein Arsenal zur Datafizierung von Personen, deren Vorlieben, Verwandtschaftsverhältnissen, Jobs, Bewegungsmustern, Wertvorstellungen. Seit Kurzem gibt es nun Algorithmen, die die Daten zu dynamischen, individuellen Preisen zusammenrechnen können, wie zuerst die Flugpreise, dann die Hotelpreise, die Elektrizitätspreise und so weiter. Jetzt deutet sich an, dass sich alles herunterbrechen lässt auf den Einzelnen. Es ist, als ob ein Märchen wahr würde.

Das Klingelschild ist golden, Oderberger Straße 44, beste Lage im Prenzlauer Berg, direkt neben dem Modeladen Kauf Dich Glücklich. SO1 steht an der Klingel, kurz für Segment of One. Während in den USA mehr als die Hälfte aller Handelsunternehmen mit sogenannten Price Intelligence Verfahren und dynamischen Preisen experimentiert, jeder zwanzigste Preis bereits personalisiert ist, während die Preisschilder in Frankreich zunehmend durch Digitalanzeigen ersetzt werden, ist das Berliner Start-up SO1 einer der ersten deutschen Anbieter für totale Preisdiskriminierung.

Hier arbeiten 15 Statistiker, ITler, Ökonomen. Menschen, die Google und Henkel verlassen haben, um eine Vision Wirklichkeit werden zu lassen. Sie stecken hinter den roten Automaten in derzeit 30 Berliner Kaiser’s Testmärkten. Die Extrakarte sei eigentlich wie ein physischer Cookie, erklärt der junge Chef und Mitgründer Raimund Bau. SO1 trage die absolute Preisdifferenzierung aus dem Netz, wo Amazon oder Zalando längst so arbeiteten, in die Welt. Die Karten hätten eine anonyme Kundennummer, man brauche im Gegensatz zu anderen Kundenkarten keine persönlichen Informationen wie Namen oder Adresse. Darauf ist Bau stolz. Erfasst würden an der Kasse nur Kaufzeit, Produktnummer, Kartennummer und der gezahlte Preis. „Bei uns laufen die Daten aus den Kassen zusammen. Wir können beispielsweise identifizieren, wer ein Pepsikäufer ist, sogar wenn er nie Pepsi bei Kaiser’s gekauft hat.“ Das ergebe sich allein aus der erfassten Kombination gekaufter Produkte. Jedes Produkt sei ein statistischer Hinweis auf andere Produktvorlieben, so wie Weleda-Shampoo auf Bio-Obst hinweist.

Auf Basis der Wahrscheinlichkeiten, die aus Testmärkten bekannt seien, könnten nicht nur Vorlieben errechnet werden, so Bau, sondern auch die persönliche Zahlungsbereitschaft und Preissensibilität. „Wenn wir den Cola-Absatz erhöhen wollen, finden wir heraus, ob Du als Pepsi-Liebhaber für Cola ein potenzieller Kunde bist. Ob Du es wiederholt kaufen würdest, wenn Du es einmal ausprobierst. Wie viel wir Dir zahlen müssten, um Dich zum Cola-Kauf zu bringen.“ Lohne sich der Kunde für Cola, biete man ihm an den roten Automaten genau den passenden Preisnachlass für Cola. Resultat seien individuelle Preise.

Der gläserne Kunde

Heute arbeite SO1 noch mit Bons, bald werde vieles über Apps laufen, sagt Bau. „PayPal, Mastercard, Google arbeiten sicherlich an ähnlichen Methoden.“ Absolute Preisdiskriminierung sei eine weltweite Bewegung, die kaum aufzuhalten sei, weil in gesättigten Märkten wie dem Lebensmittelhandel der Preiswettkampf der einzige Weg sei, den Umsatz zu steigern. „Persil wäscht jetzt noch weißer“ ziehe nicht mehr, sagt Bau. Und altbekannte Promotionen via Coupons oder Rabattmarken hätten aufgrund der Streuung kaum Effekt. Sie würden vor allem von Leuten genutzt, die das Produkt sowieso kaufen würden. Die Extrakarte bringe dagegen pro Nutzer Umsatzsteigerungen im mittleren zweistelligen Prozentbereich. Für Bau eine Win-Win-Win-Win-Situation für Kunde, Händler, Produzent und SO1.

Das will sich auch IBM nicht entgehen lassen. Demandtec heißt die Software des Konzerns. Große Ketten, Lebensmittelhändler, Drogerien oder Baumärkte sollen sie nutzen, um ihre Preise auf Basis von persönlichen Kaufmustern, Konkurrenzpreisen oder anderen Einflüssen ständig zu optimieren. Das ermöglicht verschiedene Preise von Supermarkt zum Onlineshop zum Mobilgerät oder zwischen Gebieten. Eine zweite IBM-Software namens Xtify bietet Techniken, um Kunden jederzeit ortsbezogen mit Angeboten anzusprechen.

Alle Informationen werden zusammengenommen

Viele Geschäfte haben sich derweil zu veritablen Überwachungsdiensten entwickelt. Das Ziel: Kunden bis ins Detail ausforschen. In der Schweiz können die beiden führenden Supermarktketten Migros und Coop 80 Prozent aller Einkäufe Haushalten zuordnen, dank der Kundenkarten. Niemand weiß mehr über die Schweizer, über ihre Allergien, Aufenthaltsorte, Gewohnheiten, Familienstrukturen, Adressen. Bei der US-Kette Safeways nutzt fast die Hälfte aller Kunden eine App, die ihnen im Supermarkt spezifische Nachlässe anzeigt, beruhend auf der eigenen Shoppingvergangenheit. So entstehen personalisierte Preise.

Ich habe Harry Brot und Barilla Nudeln verbilligt gekauft. Die beiden Angebote fehlen jetzt auf dem dritten Ausdruck. Sonst ist alles beim Alten. Noch ein Einkauf, dann kann ich sehen, was der Kaiser’s Algorithmus von mir denkt. Ob er mir Cola anbietet?

„Von der Ernährung über die Mobilität bis zur Energieversorgung sind elementare Bereiche unseres Lebens von den neuen Preismodellen betroffen“, sagt der St. Galler Ökonom und Zukunftsforscher Joël Cachelin. Und diese Preise würden durch uns unbekannte und unüberprüfbare Kriterien bestimmt.

Alles wird verknüpft

Die für den Einzelnen bedrohlichste Möglichkeit wäre künftig die Verknüpfung aller Informationen über Firmen und Netzwerke hinweg. Jede unserer Handlungen und Äußerungen, auch vergangene, würde den Preis beeinflussen, den wir für etwas zahlen. Das Netz würde zu einer Art Credit History, wie Kritiker des neuen Facebook Werbedienstes Atlas befürchten.

In Dänemark bietet der Reiseveranstalter Spies derzeit schon Sonderpreise für Paare an, die in ihren Ferien nachweislich ein Kind zeugen. Der Werbegag ist ein Versuch, mit Preisen einem der größten Probleme Dänemarks zu begegnen: dem Mangel an Nachwuchs. Preise sind eines der wichtigsten Steuerungsmittel unserer Gesellschaft. Sie sind Politik. „Die Zeiten des Sozialvertrags im Preis gehen zu Ende“, sagt Florian Stahl. Zukünftig könnten Menschen sogar Identitäten tauschen, um niedrigere Preise zu zahlen.

Brotpreise starten Revolutionen. Was aber passiert mit einer Gesellschaft, deren Preissystem sich komplett ändert?

Nach dem dritten Einkauf gehe ich zum Automaten, um endlich mein persönliches Angebot zu erhalten. Das Licht des Scanners wärmt meine Hand. Mein Rabatt erscheint mit sanftem Summen. 20 Prozent auf Bärenmarke Milch, 40 Prozent auf Lätta Margarine.

Die Sueddeutsche schließt den Themenblock: http://www.sueddeutsche.de/digital/neues-produkt-echo-amazon-erfindet-den-lauschsprecher-1.2209840

Amazon erfindet den Lauschsprecher
Amazon Echo: Eine Dystopie in Zylinderform.

AmazonEchoDystopie

Die Gebrauchsanweisung für Amazons neues Produkt hat in ihrer deutschen Ausgabe 351 Seiten und erzählt nebenbei noch die Geschichte einer totalitären Diktatur. Es ist das Buch „1984“ von George Orwell – und das wichtigste Instrument in dieser Dystopie ist der Televisor, der bei allen Bürgern zu Hause fest installiert ist. Er hört alles, kann sprechen und dient auch als Fernseher.

Einen kleinen Unterschied gibt es allerdings: Ein Fernseher ist Amazons „Echo“ nicht und es bleibt auch jedem selbst überlassen, ob er die kleine, schicke, schwarze Säule bei sich zu Hause aufstellt, wo sie fortan auf Sprachkommando reagiert und ihrem Besitzer Fragen beantwortet. Wie zum Beispiel: Alexa, wie viel Uhr ist es? Alexa, wie buchstabiert man Mountainbike? Alexa, wie kocht man Bolognese? Das Codewort Alexa aktiviert Echo.

Das Gerät kann all das, weil es permanent mit dem Netz verbunden ist – und weil Echo offenbar alles hört, was um die kleine Säule herum gesprochen wird. Kein Hersteller hatte bislang die Chuzpe, dieses Gerät wirklich zu bauen und anzubieten. Amazon, der Lieferkonzern, hat die Chuzpe, unsere Gesellschaft zu verändern. Wenig innovative Buchhandlungen und Verlage zu ruinieren. Unseren Konsum zu protokollieren. Produkte will der Konzern seinen Kunden künftig per Drohne liefern – Amazon weiß schon, was gut für uns ist. Dieser Konzern also hat den Televisor gebaut. Herzlichen Glückwunsch!

Die Standleitung zu einem Amazon-Server

Und warum auch nicht. Die gesamte Umgebung um uns herum sammelt Daten. Unsere Handys sowieso, unsere Autos, Bankautomaten, Kassen, Kameras, öffentliche wie private, Payback-Karten, Webseiten, die wir besuchen. Dass Echo da so heraussticht, liegt an zwei Aspekten. Erstens: Echo soll im Wohn- und Schlafzimmer stehen. Das Gerät überwacht – oder bereichert – unser zu Hause. Zweitens: Echo ist, wie man es von Amazon kennt und erwartet, ein besonders innovatives Produkt. Es ist ein Wagnis, aber eines das sich lohnen könnte. Echo rückt uns dort näher ans Netz, wo wir bislang konsequent offline sind. Im Wohnzimmer, beim Faulenzen, beim Kochen, beim Schlafen, im Bett. Echo ist, einem Handy nicht unähnlich, die Standleitung unseres Lebens zu einem Amazon-Server. 199 Dollar kostet das Gerät in den USA, bislang können nicht alle Kunden bestellen, Amazon testet noch, wie das neue, ungewohnte Gerät angenommen wird.

Vielleicht hat Amazon deshalb einen betont konservativen Werbespot zu Echo gedreht, der vor allem suggeriert: Echo macht das Leben einfacher. Ansonsten bleibt alles, wie es ist. Vati hört die Nachrichten mit Echo, Mutti kann den neuen Mitbewohner erst nicht richtig bedienen, aber rafft es dann doch noch – so einfach ist Echo! – und kann in Ruhe mit Echos Hilfe kochen. Und es stimmt ja auch: Nicht das Gerät ist das größte Problem, sondern die fehlenden Regeln für das Gerät. Der Zwang, dem Kunden transparent zu zeigen, was mit ihm geschieht, wenn er das Gerät verwendet. Echo ist nämlich nur die Vorderseite des Produktes, das man erwirbt. Den schwarzen Zylinder kann man anfassen, aber für den Kunden nicht greifbar ist der Amazon-Webserver, der mit Echo verbunden ist und der mit jedem Wort, das in Echos Hörweite fällt, dazu lernt. Über den Sprechenden. Über seinen Tonfall, seine Stimmung, seine Wünsche, seine Probleme, seine Hoffnungen, sein Leben.

Daten, die besonders wertvoll sind

Echo sammelt jene Kategorie von Daten, die besonders wertvoll und besonders kritisch ist, nämlich personenbezogene Daten. Und der Service wäre nicht halb so bedenklich, wenn er klar geregelt wäre und für den Nutzer vollkommen transparent wäre, wie und wo seine Daten liegen, wer sie bekommt und was mit ihnen angestellt wird. Aber welchen Gesetzen gehorcht Echo überhaupt? Amerikanischen? Deutschen? Wer hat Zugriff auf die Daten? Was geschieht mit den Profilen, die unweigerlich entstehen, wenn Echo immer lauscht?

Amazon in Deutschland konnte eine entsprechende Anfrage der Süddeutschen Zeitung nicht beantworten und verweist auf die amerikanische Pressestelle, die bislang nicht reagiert hat. Auf der Echo Produktseite, auf der sich bislang nur amerikanische Nutzer um ein Gerät bewerben können, sind die Vorteile von Echo ausführlich erklärt. Der tolle Klang der Lautsprecher, wie genau Echo höre, was gesprochen wird, wie sich Echo mit anderen Geräten verbinden lässt. Der Hinweis zum Datenschutz aber führt nur zur ganz gewöhnlichen Amazon.com-Datenschutzseite, die Allgemeinplätze und Standardtexte für den Nutzer bereithält. Wer Echo besitzt, weiß deshalb tatsächlich nicht, wie ihm geschieht. Vielleicht ist es nochmal an der Zeit, die 351 Seiten der inoffiziellen Gebrauchsanweisung zu lesen.