Tag Archives: AI

OpenAI rolls out ‘instant’ purchases directly from ChatGPT, in a radical shift to e-commerce and a direct challenge to Google

https://fortune.com/2025/09/29/openai-rolls-out-purchases-direct-from-chatgpt-in-a-radical-shift-to-e-commerce-and-direct-challenge-to-google/

OpenAI said it will allow users in the U.S. to make purchases directly through ChatGPT using a new Instant Checkout feature powered by a payment protocol for AI co-developed with Stripe.

The new chatbot shopping feature is a big step toward helping OpenAI monetize its 700 million weekly users, many of whom currently pay nothing to interact with ChatGPT, as well as a move that could eventually steal significant market share from traditional Google search advertising.

The rollout of chatbot shopping features—including the possibility of AI agents that will shop on behalf of users—could also upend e-commerce, radically transforming the way businesses design their websites and try to market to consumers.

OpenAI said it was rolling out its Instant Checkout feature with Etsy sellers today, but would begin adding over a million Shopify merchants, including brands such as Glossier, Skims, Spanx, and Vuori “soon.”

The company also said it was open-sourcing the Agentic Commerce Protocol, a payment standard developed in partnership with payments processor Stripe that powers the Instant Checkout feature, so that any retailer or business could decide to build a shopping integration with ChatGPT. (Stripe’s and OpenAI’s commerce protocol, in turn, supports the open-source Model Context Protocol, or MCP, which was originally developed by AI company Anthropic last year. MCP is designed to allow AI models to hook directly into the backend systems of businesses and retailers. The Agentic Commerce Protocol also supports more conventional API calls.)
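
To make the plumbing concrete, here is a minimal sketch of what a checkout call under an ACP-style integration might look like. The endpoint, field names, and token value are illustrative assumptions, not the published specification.

```python
import requests

# Hypothetical ACP-style checkout request. The endpoint and field names
# below are illustrative assumptions, not the published protocol.
ACP_ENDPOINT = "https://merchant.example.com/acp/checkout_sessions"

order = {
    "line_items": [{"sku": "HIKING-BOOT-42", "quantity": 1}],
    "buyer": {"email": "shopper@example.com"},
    "shipping_address": {
        "line1": "123 Main St",
        "city": "Springfield",
        "country": "US",
    },
    # The AI platform forwards an opaque, scoped payment token (e.g.,
    # minted by the payment processor) rather than raw card details.
    "payment": {"provider": "stripe", "token": "tok_example_123"},
}

response = requests.post(ACP_ENDPOINT, json=order, timeout=10)
response.raise_for_status()
print(response.json())  # e.g., {"status": "confirmed", "order_id": "..."}
```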

OpenAI will take what it described as a small fee from the merchant on each purchase, helping to bolster the company’s revenue at a time when it is burning through many billions of dollars each year to train and support the running of its AI models.


How it works

OpenAI had previously launched a shopping feature in ChatGPT that helped users find products that were best suited to them, but the suggested results then linked out to merchants’ websites, where a user had to complete the purchase—analogous to the way a Google search works.

When a ChatGPT user asks a shopping-related question—such as “the best hiking boots for me that cost under $150” or “possible birthday gifts for my 10-year-old nephew”—the chatbot will still respond with product suggestions. Under the new system, if a user likes one of the suggestions and Instant Checkout is enabled, they will be able to click a “Buy” button in the chatbot response and confirm their order, shipping, and payment details without ever leaving the chat.

OpenAI said its “product results are organic and unsponsored, ranked purely on relevance to the user.” The company also emphasized that the results are not affected by the fee the merchant pays it to support Instant Checkout.

Then, to determine which merchants carrying that particular product should be surfaced for the user, “ChatGPT considers factors like availability, price, quality, whether a merchant is the primary seller, and whether Instant Checkout is enabled,” when displaying results, the company said.
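
OpenAI has not published that ranking function. Purely as an illustration of how such signals could combine, merchant selection can be pictured as a weighted score over the factors the company names; the weights and field names below are invented:

```python
# Illustrative only: OpenAI has not disclosed its actual scoring.
# Factor names follow the article; the weights are invented.
def merchant_score(m: dict) -> float:
    return (
        3.0 * m["availability"]            # 1 if in stock, else 0
        + 2.0 * (1.0 - m["price_rank"])    # lower price_rank = cheaper
        + 2.0 * m["quality"]               # e.g., review-based score, 0..1
        + 1.0 * m["is_primary_seller"]
        + 0.5 * m["instant_checkout_enabled"]
    )

candidates = [
    {"name": "A", "availability": 1, "price_rank": 0.2, "quality": 0.9,
     "is_primary_seller": 1, "instant_checkout_enabled": 1},
    {"name": "B", "availability": 1, "price_rank": 0.0, "quality": 0.7,
     "is_primary_seller": 0, "instant_checkout_enabled": 0},
]
best = max(candidates, key=merchant_score)
print(best["name"])
```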

OpenAI said that ChatGPT subscribers, who pay a monthly fee for premium features, would be able to pay with the same credit or debit card they use for their subscription, or store alternate payment methods.

OpenAI’s decision to launch the shopping feature using Stripe’s Agentic Commerce Protocol will be a big boost for that payment standard, which can be used across different AI platforms and also works with different payment processors—although it is easier to integrate for existing Stripe customers. The protocol works by creating an encrypted token for payment details and other sensitive data.
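
The token idea itself is easy to picture. The sketch below uses the cryptography library’s Fernet primitive purely to illustrate “encrypted blob instead of raw card data”; the real protocol’s token format and key handling are not public.

```python
from cryptography.fernet import Fernet

# Conceptual illustration only: the processor holds the key, so merchants
# and the AI platform handle an opaque token, never raw card data.
processor_key = Fernet.generate_key()
processor = Fernet(processor_key)

card_details = b'{"number": "4242424242424242", "exp": "12/29"}'
token = processor.encrypt(card_details)  # what flows through the chat purchase

# Only the payment processor can recover the underlying details.
assert processor.decrypt(token) == card_details
```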

Currently, OpenAI says that the user remains in control, having to explicitly agree to each step of the purchasing process before any action is taken. But it is easy to imagine that in the future, users may be able to authorize ChatGPT or other AI models to act more “agentically” and actually make purchases based on a prompt, without having to check back in with the user.

The fact that users never have to leave the chat interface to make the purchase may pose a challenge to Alphabet’s Google, which makes most of its money by referring users to companies’ websites. Although Google may be able to roll out similar shopping features within its Gemini chatbot or “AI Mode” in Google Search, it’s unclear whether what it could charge for transactions completed in these AI-native ways would compensate for any loss in referral revenue and what the opportunities would be for the display of other advertising around chatbot queries.

Anyone Can Buy Data Tracking US Soldiers and Spies to Nuclear Vaults and Brothels in Germany

Source: https://www.wired.com/story/phone-data-us-soldiers-spies-nuclear-germany/

by Dhruv Mehrotra and Dell Cameron

Nearly every weekday morning, a device leaves a two-story home near Wiesbaden, Germany, and makes a 15-minute commute along a major autobahn. By around 7 am, it arrives at Lucius D. Clay Kaserne—the US Army’s European headquarters and a key hub for US intelligence operations.

The device stops near a restaurant before heading to an office near the base that belongs to a major government contractor responsible for outfitting and securing some of the nation’s most sensitive facilities.

For roughly two months in 2023, this device followed a predictable routine: stops at the contractor’s office, visits to a discreet hangar on base, and lunchtime trips to the base’s dining facility. Twice in November of last year, it made a 30-minute drive to the Dagger Complex, a former intelligence and NSA signals processing facility. On weekends, the device could be traced to restaurants and shops in Wiesbaden.

The individual carrying this device likely isn’t a spy or high-ranking intelligence official. Instead, experts believe, they’re a contractor who works on critical systems—HVAC, computing infrastructure, or possibly securing the newly built Consolidated Intelligence Center, a state-of-the-art facility suspected to be used by the National Security Agency.

Whoever they are, the device they’re carrying with them everywhere is putting US national security at risk.

A joint investigation by WIRED, Bayerischer Rundfunk (BR), and Netzpolitik.org reveals that US companies legally collecting digital advertising data are also providing the world a cheap and reliable way to track the movements of American military and intelligence personnel overseas, from their homes and their children’s schools to hardened aircraft shelters within an airbase where US nuclear weapons are believed to be stored.

A collaborative analysis of billions of location coordinates obtained from a US-based data broker provides extraordinary insight into the daily routines of US service members. The findings also provide a vivid example of the significant risks the unregulated sale of mobile location data poses to the integrity of the US military and the safety of its service members and their families overseas.

We tracked hundreds of thousands of signals from devices inside sensitive US installations in Germany. That includes scores of devices within suspected NSA monitoring or signals-analysis facilities, more than a thousand devices at a sprawling US compound where Ukrainian troops were being trained in 2023, and nearly 2,000 others at an air force base that has crucially supported American drone operations.

A device likely tied to an NSA or intelligence employee broadcast coordinates from inside a windowless building with a metal exterior known as the “Tin Can,” which is reportedly used for NSA surveillance, according to agency documents leaked by Edward Snowden. Another device transmitted signals from within a restricted weapons testing facility, revealing its zig-zagging movements across a high-security zone used for tank maneuvers and live munitions drills.

We traced these devices from barracks to work buildings, Italian restaurants, Aldi grocery stores, and bars. As many as four devices that regularly pinged from Ramstein Air Base were later tracked to nearby brothels off base, including a multistory facility called SexWorld.

Experts caution that foreign governments could use this data to identify individuals with access to sensitive areas; terrorists or criminals could decipher when US nuclear weapons are least guarded; or spies and other nefarious actors could leverage embarrassing information for blackmail.

“The unregulated data broker industry poses a clear threat to national security,” says Ron Wyden, a US senator from Oregon with more than 20 years overseeing intelligence work. “It is outrageous that American data brokers are selling location data collected from thousands of brave members of the armed forces who serve in harm’s way around the world.”

Wyden approached the US Defense Department in September after initial reporting by BR and netzpolitik.org raised concerns about the tracking of potential US service members. DoD failed to respond. Likewise, Wyden’s office has yet to hear back from members of US president Joe Biden’s National Security Council, despite repeated inquiries. The NSC did not immediately respond to a request for comment.

“There is ample blame to go around,” says Wyden, “but unless the incoming administration and Congress act, these kinds of abuses will keep happening, and they’ll cost service members’ lives.”

The Oregon senator also raised the issue earlier this year with the Federal Trade Commission, following an FTC order that imposed unprecedented restrictions against a US company it accused of gathering data around “sensitive locations.” Douglas Farrar, the FTC’s director of public affairs, declined a request to comment.

WIRED can now exclusively report, however, that the FTC is on the verge of fulfilling Wyden’s request. An FTC source, granted anonymity to discuss internal matters, says the agency is planning to file multiple lawsuits soon that will formally recognize US military installations as protected sites. The source adds that the lawsuits are in keeping with years’ worth of work by FTC Chair Lina Khan aimed at shielding US consumers—including service members—from harmful surveillance practices.

Before a targeted ad appears on an app or website, third-party software embedded in apps—known as software development kits, or SDKs—transmits information about users to data brokers, real-time bidding platforms, and ad exchanges, often including location data. Data brokers then collect that data, analyze it, repackage it, and sell it.
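
In practice, the records flowing out of an SDK look roughly like the composite below. The exact field names vary by vendor, and all values here are invented, but a mobile ad ID plus coordinates plus a timestamp is the core of the product.

```python
# A hypothetical composite of a broker-style location record.
# Field names vary by vendor; the values here are invented.
location_record = {
    "maid": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile advertising ID
    "latitude": 50.0502,
    "longitude": 8.2622,
    "horizontal_accuracy_m": 12.0,
    "timestamp": "2023-11-02T06:47:13.512Z",
    "ip": "203.0.113.7",                # documentation-range IP
    "app_bundle": "com.example.weatherapp",
}
```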

In February of 2024, reporters from BR and Netzpolitik.org obtained a free sample of this kind of data from Datastream Group, a Florida-based data broker. The dataset contains 3.6 billion coordinates—some recorded at millisecond intervals—from up to 11 million mobile advertising IDs in Germany over what the company says is a 59-day span from October through December 2023.

Mobile advertising IDs are unique identifiers used by the advertising industry to serve personalized ads to smartphones. These strings of letters and numbers allow companies to track user behavior and target ads effectively. However, mobile advertising IDs can also reveal much more sensitive information, particularly when combined with precise geolocation data.

In total, our analysis revealed granular location data from up to 12,313 devices that appeared to spend time at or near at least 11 military and intelligence sites, potentially exposing crucial details like entry points, security practices, and guard schedules—information that, in the hands of hostile foreign governments or terrorists, could be deadly.
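
Counting devices “at or near” a site is, at bottom, a geofencing query over such records. Here is a minimal sketch of that kind of analysis, with a made-up bounding box standing in for a real site perimeter:

```python
from collections import defaultdict

# Made-up bounding box standing in for a site perimeter.
LAT_MIN, LAT_MAX = 50.170, 50.180
LON_MIN, LON_MAX = 7.060, 7.075

def inside_fence(rec: dict) -> bool:
    return (LAT_MIN <= rec["latitude"] <= LAT_MAX
            and LON_MIN <= rec["longitude"] <= LON_MAX)

def devices_at_site(records: list[dict]) -> dict[str, int]:
    """Map each mobile ad ID seen inside the fence to its signal count."""
    hits = defaultdict(int)
    for rec in records:
        if inside_fence(rec):
            hits[rec["maid"]] += 1
    return dict(hits)

# Example: devices_at_site(dataset) -> {"38400000-...": 412, ...}
```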

Our investigation uncovered 38,474 location signals from up to 189 devices inside Büchel Air Base, a high-security German installation where as many as 15 US nuclear weapons are reportedly stored in underground bunkers. At Grafenwöhr Training Area, where thousands of US troops are stationed and have trained Ukrainian soldiers on Abrams tanks, we tracked 191,415 signals from up to 1,257 devices.

At Lucius D. Clay Kaserne, the US Army’s European headquarters, we identified 74,968 location signals from as many as 799 devices, including some at the European Technical Center, once the NSA’s communication hub in Europe. Courtesy of OpenMapTiles

In Wiesbaden, home to the US Army’s European headquarters at Lucius D. Clay Kaserne, 74,968 location signals from as many as 799 devices were detected—some originating from sensitive intelligence facilities like the European Technical Center, once the NSA’s communication hub in Europe, and newly built intelligence operations centers.

At Ramstein Air Base, which supports some US drone operations, 164,223 signals from nearly 2,000 devices were tracked. That included devices tracked to Ramstein Elementary and High School, base schools for the children of military personnel.

Of these devices, 1,326 appeared at more than one of these highly sensitive military sites, potentially mapping the movements of US service members across Europe’s most secure locations.

The data is not infallible. Mobile ad IDs can be reset, meaning multiple IDs can be assigned to the same device. Our analysis found that, in some instances, devices were assigned more than 10 mobile ad IDs.

The location data’s precision at the individual device level can also be inconsistent. By contacting several people whose movements were revealed in the dataset, the reporting collective confirmed that much of the data was highly accurate—identifying work commutes and dog walks of individuals contacted. However, this wasn’t always the case. One reporter whose ID appears in the dataset found that it often placed him a block away from his apartment and during times when he was out of town. A study from the NATO Strategic Communications Centre of Excellence found that “quantity overshadows quality” in the data broker industry and that, on average, only up to 60 percent of the data surveyed can be considered precise.

According to its website, Datastream Group appears to offer “internet advertising data coupled with hashed emails, cookies, and mobile location data.” Its listed datasets include niche categories like boat owners, mortgage seekers, and cigarette smokers. The company, one of many in a multibillion-dollar location-data industry, did not respond to our request for comment about the data it provided on US military and intelligence personnel in Germany, where the US maintains a force of at least 35,000 troops, according to the most recent estimates.

Defense Department officials have known about the threat that commercial data brokers pose to national security since at least 2016, when Mike Yeagley, a government contractor and technologist, delivered a briefing to senior military officials at the Joint Special Operations Command compound in Fort Liberty (formerly Fort Bragg), North Carolina, about the issue. Yeagley’s presentation aimed to show how commercially available mobile data—already pervasive in conflict zones like Syria—could be weaponized for pattern of life analysis.

Midway through the presentation, Yeagley decided to raise the stakes. “Well, here’s the behavior of an ISIS operator,” he tells WIRED, recalling his presentation. “Let me turn the mirror around—let me show you how it works for your own personnel.” He then displayed data revealing phones as they moved from Fort Bragg in North Carolina and MacDill Air Force Base in Florida—critical hubs for elite US special operations units. The devices traveled through transit points like Turkey before clustering in northern Syria at a seemingly abandoned cement factory near Kobane, a known ISIS stronghold. The location he pinpointed was a covert forward operating base.

Yeagley says he was quickly escorted to a secured room to continue his presentation behind closed doors. There, officials questioned him on how he had obtained the data, concerned that his stunt had involved hacking personnel or unauthorized intercepts.

The data wasn’t sourced from espionage but from unregulated commercial brokers, he explained to the concerned DOD officials. “I didn’t hack, intercept, or engineer this data,” he told them. “I bought it.”

Now, years later, Yeagley remains deeply frustrated with the DOD’s inability to control the situation. What WIRED, BR, and Netzpolitik.org are now reporting is “very similar to the alarms we raised almost 10 years ago,” he says, shaking his head. “And it doesn’t seem like anything’s changed.”

US law requires the director of national intelligence to provide “protection support” for the personal devices of “at risk” intelligence personnel who are deemed susceptible to “hostile information collection activities.” But which personnel meet these criteria is unclear, as is the extent of the protections beyond periodic training and advice. The location data we acquired demonstrates, regardless, that commercial surveillance is far too pervasive and complex to be reduced to individual responsibility.

Biden’s outgoing director of national intelligence, Avril Haines, did not respond to a request for comment.

A report declassified by Haines last summer acknowledges that US intelligence agencies had purchased a “large amount” of “sensitive and intimate information” about US citizens from commercial data brokers, adding that “in the wrong hands,” the data could “facilitate blackmail, stalking, harassment, and public shaming.” The report, which contains numerous redactions, notes that, while the US government “would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times,” smartphones, connected cars, and web tracking have all made this possible “without government participation.”

Mike Rogers, the Republican chair of the House Armed Services Committee, did not respond to multiple requests for comment. A spokesperson for Adam Smith, the committee’s ranking Democrat, said Smith was unavailable to discuss the matter, busy negotiating a must-pass bill to fund the Pentagon’s policy priorities next year.

Jack Reed and Roger Wicker, the leading Democrat and Republican on the Senate Armed Services Committee, respectively, did not respond to multiple requests for comment. Inquiries placed with House and Senate leaders and top lawmakers on both congressional intelligence committees have gone unanswered.

The DOD and the NSA declined to answer specific questions related to our investigation. However, DOD spokesperson Javan Rasnake says that the Pentagon is aware that geolocation services could put personnel at risk and urged service members to remember their training and adhere strictly to operational security protocols. “Within the USEUCOM region, members are reminded of the need to execute proper OPSEC when conducting mission activities inside operational areas,” Rasnake says, using the shorthand for operational security.

An internal Pentagon presentation obtained by the reporting collective, though, claims that not only is the domestic data collection likely capable of revealing military secrets, it is essentially unavoidable at the personal level, service members’ lives being simply too intertwined with the technology permitting it. This conclusion closely mirrors the observations of Chief Justice John Roberts of the US Supreme Court, who in landmark privacy cases within the past decade described cell phones as a “pervasive and insistent part of daily life” and owning one as “indispensable to participation in modern society.”

The presentation, which a source says was delivered to high-ranking general officers, including the US Army’s chief information officer, warns that despite promises from major ad tech companies, “de-anonymization” is all but trivial given the widespread availability of commercial data collected on Pentagon employees. The document emphasizes that the caches of location data on US individuals are a “force protection issue,” likely capable of revealing troop movements and other highly guarded military secrets.

While instances of blackmail inside the Pentagon have seen a sharp decline since the Cold War, many of the structural barriers to persistently surveilling Americans have also vanished. In recent decades, US courts have repeatedly found that new technologies pose a threat to privacy by enabling surveillance that, “in earlier times, would have been prohibitively expensive,” as the 7th Circuit Court of Appeals noted in 2007.

In an August 2024 ruling, another US appeals court disregarded claims by tech companies that users who “opt in” to surveillance were actually “informed” and doing so “voluntarily,” declaring the opposite is clear to “anyone with a smartphone.” The internal presentation for military staff stresses that adversarial nations can gain access to advertising data with ease, using it to exploit, manipulate, and coerce military personnel for purposes of espionage.

Patronizing sex workers, whether legal in a foreign country or not, is a violation of the Uniform Code of Military Justice. The penalties can be severe, including forfeiture of pay, dishonorable discharge, and up to one year of imprisonment. But the ban on solicitation is not merely imposed on principle alone, says Michael Waddington, a criminal defense attorney who specializes in court-martial cases. “There’s a genuine danger of encountering foreign agents in these establishments, which can lead to blackmail or exploitation,” he says.

“This issue is particularly concerning given the current geopolitical climate. Many US servicemembers in Europe are involved in supporting Ukraine in its defense against the Russian invasion,” Waddington says. “Any compromise of their integrity could have serious implications for our operations and national security.”

When it comes to jeopardizing national security, even data on low-level personnel can pose a risk, says Vivek Chilukuri, senior fellow and program director of the Technology and National Security Program at the Center for a New American Security (CNAS). Before joining CNAS, Chilukuri served in part as legislative director and tech policy advisor to US senator Michael Bennet on the Senate Intelligence Committee and previously worked at the US State Department, specializing in countering violent extremism.

“Low-value targets can lead to high-value compromises,” Chilukuri says. “Even if someone isn’t senior in an organization, they may have access to highly sensitive infrastructure. A system is only as secure as its weakest link.” He points out that if adversaries can target someone with access to a crucial server or database, they could exploit that vulnerability to cause serious damage. “It just takes one USB stick plugged into the right device to compromise an organization.”

It’s not just individual service members who are at risk—entire security protocols and operational routines can be exposed through location data. At Büchel Air Base, where the US is believed to have stored an estimated 10 to 15 B61 nuclear weapons, the data reveals the daily activity patterns of devices on the base, including when personnel are most active and, more concerningly, potentially when the base is least populated.

Overview of the Air Mobility Command ramp at Ramstein Air Base, Germany.Photograph: Timm Ziegenthaler/Stocktrek Images; Getty Images

Büchel has 11 protective aircraft shelters equipped with hardened vaults for nuclear weapons storage. Each vault, which is located in a so-called WS3, or Weapons Storage and Security System, can hold up to four warheads. Our investigation traced precise location data for as many as 40 cellular devices that were present in or near these bunkers.

The patterns we could observe from devices at Büchel go far beyond just understanding the working hours of people on base. In aggregate, it’s possible to map key entry and exit points, pinpoint frequently visited areas, and even trace personnel to their off-base routines. For a terrorist, this information could be a gold mine—an opportunity to identify weak points, plan an attack, or target individuals with access to sensitive areas.

This month, German authorities arrested a former civilian contractor employed by the US military on allegations of offering to pass sensitive information about American military operations in Germany to Chinese intelligence agencies.

In April, German authorities arrested two German-Russian nationals accused of scouting US military sites for potential sabotage, including alleged arson. One of the targeted locations was the US Army’s Grafenwöhr Training Area in Bavaria, a critical hub for US military operations in Europe that spans 233 square kilometers.

At Grafenwöhr, WIRED, BR, and Netzpolitik.org could track the precise movements of up to 1,257 devices. Some devices could even be observed zigzagging through Range 301, an armored vehicle course, before returning to nearby barracks.

Our investigation found 38,474 location signals from up to 189 devices inside Büchel Air Base, where around a dozen US nuclear weapons are reportedly stored. Courtesy of OpenMapTiles

Justin Sherman, a senior fellow at Duke University’s Sanford School of Public Policy and head of its data brokerage research project, also leads Global Cyber Strategies, a firm specializing in cybersecurity and tech policy. In 2023, he and his coauthors at Duke secured $250,000 in funding from the United States Military Academy to investigate how easy it is to purchase sensitive data about military personnel from data brokers. The results were alarming: They were able to buy highly sensitive, nonpublic, individually identifiable health and financial data on active-duty service members, without any vetting.

“It shows you how bad the situation is,” Sherman says, explaining how they geofenced requests to specific special operations bases. “We didn’t pretend to be a marketing firm in LA. We just wanted to see what the data brokers would ask.” Most brokers didn’t question their requests, and one even offered to bypass an ID verification check if they paid by wire.

During the study, Sherman helped draft an amendment to the National Defense Authorization Act that requires the Defense Department to ensure that highly identifiable individual data shared with contractors cannot be resold. He found the overall impact of the study underwhelming, however. “The scope of the industry is the problem,” he says. “It’s great to pass focused controls on parts of the ecosystem, but if you don’t address the rest of the industry, you leave the door wide open for anyone wanting location data on intelligence officers.”

Efforts by the US Congress to pass comprehensive privacy legislation have been stalled for the better part of a decade. The latest effort, known as the American Privacy Rights Act, failed to advance in June after GOP leaders threatened to scuttle the bill, which was significantly weakened before being shelved.

Another current privacy bill, the Fourth Amendment Is Not For Sale Act, seeks to ban the US government from purchasing data on Americans that it would normally need a warrant to obtain. While the bill would not prohibit the sale of commercial location data altogether, it would bar federal agencies from using those purchases to circumvent constitutional protections upheld by the Supreme Court. Its fate rests in the hands of House and Senate leaders, whose negotiations are private.

“The government needs to stop subsidizing what is now, for good reason, one of the world’s least popular industries,” says Sean Vitka, policy director at the nonprofit Demand Progress. “There are a lot of members of Congress who take seriously the severe threats to privacy and national security posed by data brokers, but we’ve seen many actions by congressional leaders that only further the problem. There shouldn’t need to be a body count for these people to take action.”

I Stared Into the AI Void With the SocialAI App

SocialAI is an online universe where everyone you interact with is a bot—for better or worse.


The first time I used SocialAI, I was sure the app was performance art. That was the only logical explanation for why I would willingly sign up to have AI bots named Blaze Fury and Trollington Nefarious, well, troll me.

Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement this week of the app read a little like a generative AI joke: “A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.”

But, no, SocialAI is real, if “real” applies to an online universe in which every single person you interact with is a bot.

There’s only one real human in the SocialAI equation. That person is you. The new iOS app is designed to let you post text like you would on Twitter or Threads. An ellipsis appears almost as soon as you do so, indicating that another person is loading up with ammunition, getting ready to fire back. Then, instantaneously, several comments appear, cascading below your post, each and every one of them written by an AI character. In the newest version of the app, rolled out today, these AIs also talk to each other.

When you first sign up, you’re prompted to choose these AI character archetypes: Do you want to hear from Fans? Trolls? Skeptics? Odd-balls? Doomers? Visionaries? Nerds? Drama Queens? Liberals? Conservatives? Welcome to SocialAI, where Trollita Kafka, Vera D. Nothing, Sunshine Sparkle, Progressive Parker, Derek Dissent, and Professor Debaterson are here to prop you up or tell you why you’re wrong.

Screenshot of the instructions for setting up the Social AI app.

Is SocialAI appalling, an echo chamber taken to the extreme? Only if you ignore the truth of modern social media: Our feeds are already filled with bots, tuned by algorithms, and monetized with AI-driven ad systems. As real humans we do the feeding: freely supplying social apps fresh content, baiting trolls, buying stuff. In exchange, we’re amused, and occasionally feel a connection with friends and fans.

As notorious crank Neil Postman wrote in 1985, “Anyone who is even slightly familiar with the history of communications knows that every new technology for thinking involves a trade-off.” The trade-off for social media in the age of AI is a slice of our humanity. SocialAI just strips the experience down to pure artifice.

“With a lot of social media, you don’t know who the bot is and who the real person is. It’s hard to tell the difference,” Sayman says. “I just felt like creating a space where you’re able to know that they’re 100 percent AIs. It’s more freeing.”

You might say Sayman has a knack for apps. As a teenage coder in Miami, Florida, during the financial crisis, Sayman gained fame for building a suite of apps to support his family, who had been considering moving back to Peru. Sayman later ended up working in product jobs at Facebook, Google, and Roblox. SocialAI was launched from Sayman’s own venture-backed app studio, Friendly Apps.

In many ways his app is emblematic of design thinking rather than pure AI innovation. SocialAI isn’t really a social app, but ChatGPT in the container of a social broadcast app. It’s an attempt to redefine how we interact with generative AI. Instead of limiting your ChatGPT conversation to a one-to-one chat window, Sayman posits, why not get your answers from many bots, all at the same time?
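
Conceptually, that is a fan-out: one post goes to several persona-conditioned model calls, and the replies render as a feed. Below is a minimal sketch of the idea using OpenAI’s Python client; this is not Sayman’s actual code, and the model name and personas are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented personas standing in for the app's archetypes.
PERSONAS = {
    "Troll": "You reply with snark and bad-faith nitpicks.",
    "Fan": "You reply with enthusiastic, supportive praise.",
    "Skeptic": "You politely question the post's assumptions.",
}

def replies(post: str) -> dict[str, str]:
    """Fan one user post out to every persona and collect the replies."""
    out = {}
    for name, system_prompt in PERSONAS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption; any chat model works
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": post},
            ],
        )
        out[name] = resp.choices[0].message.content
    return out

print(replies("Am I wrong to stay mad at my family forever?"))
```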

Over Zoom earlier this week, he explained to me how he thinks of generative AI like a smoothie if cups hadn’t yet been invented. You can still enjoy it from a bowl or plate, but those aren’t the right vessel. SocialAI, Sayman says, could be the cup.

Almost immediately Sayman laughed. “This is a terrible analogy,” he said.

Sayman is charming and clearly thinks a lot about how apps fit into our world. He’s a team of one right now, relying mostly on OpenAI’s technology to power SocialAI, blended with some other custom AI models. (Sayman rate-limits the app so that he doesn’t go broke in “three minutes” from the fees he’s paying to OpenAI. He also hasn’t quite yet figured out how he’ll make money off of SocialAI.) He knows he’s not the first to launch an AI-character app; Meta has burdened its apps with AI characters, and the Character AI app, which was just quasi-acquired by Google, lets you interact with a huge number of AI personas.

But Sayman is hand-wavy about this competition. “I don’t see my app as, you’re going to be interacting with characters who you think might be real,” he says. “This is really for seeking answers to conflict resolution, or figuring out if what you’re trying to say is hurtful and get feedback before you post it somewhere else.”

“Someone joked to me that they thought Elon Musk should use this, so he could test all of his posts before he posts them on X,” Sayman said.

I’d actually tried that, tossing some of the most trafficked tweets from Elon Musk and the Twitter icon Dril into my SocialAI feed. I shared a news story from WIRED; the link was unclickable, because SocialAI doesn’t support link-sharing. (There’s no one to share it with, anyway.) I repurposed the viral “Bean Dad” tweet and purported to be a Bean Mom on SocialAI, urging my 9-year-old daughter to open a can of beans herself as a life lesson. I posted political content. I asked my synthetic SocialAI followers who else I should follow.

The bots obliged and flooded my feed with comments, like Reply Guys on steroids. But their responses lacked nutrients or human messiness. Mostly, I told Sayman, it all felt too uncanny; I had a hard time crossing that chasm and placing value or meaning on what the bots had to say.

Sayman encouraged me to craft more posts along the lines of Reddit’s “Am I the Asshole” posts: Am I wrong in this situation? Should I apologize to a friend? Should I stay mad at my family forever? This, Sayman says, is the real purpose of SocialAI. I tried it. For a second the SocialAI bot comments lit up my lizard brain, my id and superego, the “I’m so right” instinct. Then Trollita Kafka told me, essentially, that I was in fact the asshole.

One aspect of SocialAI that clearly does not represent the dawn of a new era: Sayman has put out a minimum viable product without communicating important guidelines around privacy, content policies, or how SocialAI or OpenAI might use the data people provide along the way. (Move fast, break things, etc.) He says he’s not using anyone’s posts to train his own AI models, but notes that users are still subject to OpenAI’s data-training terms, since he uses OpenAI’s API. You also can’t mute or block a bot that has gone off the rails.

At least, though, your feed is always private by default. You don’t have any “real” followers. My editor at WIRED, for example, could join SocialAI himself but will never be able to follow me or see that I copied and pasted an Elon Musk tweet about wanting to buy Coca-Cola and put the cocaine back in it, just as he could not follow my ChatGPT account and see what I’m inquiring about there.

As a human on SocialAI, you will never interact with another human. That’s the whole point. It’s your own little world with your own army of AI characters ready to bolster you or tear you down. You may not like it, but it might be where you’re headed anyway. You might already be there.

Source: https://www.wired.com/story/socialai-app-ai-chatbots-chatgpt/

Our basic assumptions about photos capturing reality are about to go up in smoke

Source: https://www.theverge.com/2024/8/22/24225972/ai-photo-era-what-is-reality-google-pixel-9

An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely fucking fake.

Anyone who buys a Pixel 9 — the latest model of Google’s flagship phone, available starting this week — will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is all but certain to become the norm, with similar features already available on competing devices and rolling out on others in the near future. When a smartphone “just works,” it’s usually a good thing; here, it’s the entire problem in the first place.

Photography has been used in the service of deception for as long as it has existed. (Consider Victorian spirit photos, the infamous Loch Ness monster photograph, or Stalin’s photographic purges of IRL-purged comrades.) But it would be disingenuous to say that photographs have never been considered reliable evidence. Everyone who is reading this article in 2024 grew up in an era where a photograph was, by default, a representation of the truth. A staged scene with movie effects, a digital photo manipulation, or more recently, a deepfake — these were potential deceptions to take into account, but they were outliers in the realm of possibility. It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph. Fake was the exception, not the rule. 

If I say Tiananmen Square, you will, most likely, envision the same photograph I do. This also goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully express. There was no reason to express why these photos matter, why they are so pivotal, why we put so much value in them. Our trust in photography was so deep that when we spent time discussing veracity in images, it was more important to belabor the point that it was possible for photographs to be fake, sometimes. 

This is all about to flip — the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.

 
A real photo of a stream.

Edited with Google’s Magic Editor.

A real photo of a person in a living room (with their face obscured).

Edited with Google’s Magic Editor.

No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach upon your residential neighborhood, how do you communicate to friends and acquaintances the thickness of the smoke outside? 

And up until now, the onus has largely been on those denying the truth of a photo to prove their claims. The flat-earther is out of step with the social consensus not because they do not understand astrophysics — how many of us actually understand astrophysics, after all? — but because they must engage in a series of increasingly elaborate justifications for why certain photographs and videos are not real. They must invent a vast state conspiracy to explain the steady output of satellite photographs that capture the curvature of the Earth. They must create a soundstage for the 1969 Moon landing. 

We have taken for granted that the burden of proof is upon them. In the age of the Pixel 9, it might be best to start brushing up on our astrophysics. 

For the most part, the average image created by these AI tools will, in and of itself, be pretty harmless — an extra tree in a backdrop, an alligator in a pizzeria, a silly costume interposed over a cat. In aggregate, the deluge upends how we treat the concept of the photo entirely, and that in itself has tremendous repercussions. Consider, for instance, that the last decade has seen extraordinary social upheaval in the United States sparked by grainy videos of police brutality. Where the authorities obscured or concealed reality, these videos told the truth. 

The persistent cry of “Fake News!” from Trumpist quarters presaged the beginning of this era of unmitigated bullshit, in which the impact of the truth will be deadened by the firehose of lies. The next Abu Ghraib will be buried under a sea of AI-generated war crime snuff. The next George Floyd will go unnoticed and unvindicated.

 
A real photo of an empty street.

Edited with Google’s Magic Editor.

A real photo inside a New York City subway station.

Edited with Google’s Magic Editor.

You can already see the shape of what’s to come. In the Kyle Rittenhouse trial, the defense claimed that Apple’s pinch-to-zoom manipulates photos, successfully persuading the judge to put the burden of proof on the prosecution to show that zoomed-in iPhone footage was not AI-manipulated. More recently, Donald Trump falsely claimed that a photo of a well-attended Kamala Harris rally was AI-generated — a claim that was only possible to make because people were able to believe it.

Even before AI, those of us in the media had been working in a defensive crouch, scrutinizing the details and provenance of every image, vetting for misleading context or photo manipulation. After all, every major news event comes with an onslaught of misinformation. But the incoming paradigm shift implicates something much more fundamental than the constant grind of suspicion that is sometimes called digital literacy.

Google understands perfectly well what it is doing to the photograph as an institution—in an interview with Wired, the group product manager for the Pixel camera described the editing tool as “help[ing] you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.” A photo, in this world, stops being a supplement to fallible human recollection and becomes instead a mirror of it. And as photographs become little more than hallucinations made manifest, the dumbest shit will devolve into a courtroom battle over the reputation of the witnesses and the existence of corroborating evidence.

This erosion of the social consensus began before the Pixel 9, and it will not be carried forth by the Pixel 9 alone. Still, the phone’s new AI capabilities are of note not just because the barrier to entry is so low, but because the safeguards we ran into were astonishingly anemic. The industry’s proposed AI image watermarking standard is mired in the usual standards slog, and Google’s own much-vaunted AI watermarking system was nowhere in sight when The Verge tried out the Pixel 9’s Magic Editor. The photos that are modified with the Reimagine tool simply have a line of removable metadata added to them. (The inherent fragility of this kind of metadata was supposed to be addressed by Google’s invention of the theoretically unremovable SynthID watermark.) Google told us that the outputs of Pixel Studio — a pure prompt generator that is closer to DALL-E — will be tagged with a SynthID watermark; ironically, we found the capabilities of the Magic Editor’s Reimagine tool, which modifies existing photos, were much more alarming.
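
The fragility is easy to demonstrate: re-encoding just the pixels discards tag-along metadata entirely. A minimal Pillow sketch (the file names are placeholders):

```python
from PIL import Image

# Re-saving only the pixel data silently drops EXIF/XMP metadata,
# including any "this image was AI-edited" tag stored there.
src = Image.open("reimagined.jpg")
clean = Image.new(src.mode, src.size)
clean.putdata(list(src.getdata()))
clean.save("laundered.jpg")  # no metadata survives the round trip
```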

 
Examples of famous photographs, digitally altered to demonstrate the implications of AI photography.
Image: Cath Virginia / The Verge, Neil Armstrong, Dorothea Lange, Joe Rosenthal

Google claims the Pixel 9 will not be an unfettered bullshit factory but is thin on substantive assurances. “We design our Generative AI tools to respect the intent of user prompts and that means they may create content that may offend when instructed by the user to do so,” Alex Moriconi, Google communications manager, told The Verge in an email. “That said, it’s not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.” 

The policies are what you would expect — for example, you can’t use Google services to facilitate crimes or incite violence. Some attempted prompts returned the generic error message, “Magic Editor can’t complete this edit. Try typing something else.” (You can see throughout this story, however, several worrisome prompts that did work.) But when it comes down to it, standard-fare content moderation will not save the photograph from its incipient demise as a signal of truth.

We briefly lived in an era in which the photograph was a shortcut to reality, to knowing things, to having a smoking gun. It was an extraordinarily useful tool for navigating the world around us. We are now leaping headfirst into a future in which reality is simply less knowable. The lost Library of Alexandria could have fit onto the microSD card in my Nintendo Switch, and yet the cutting edge of technology is a handheld telephone that spews lies as a fun little bonus feature. 

We are fu**ed.

The Woman Who Showed President Biden ChatGPT and Helped Set the Course for AI

Arati Prabhakar has the ear of the US president and a massive mission: help manage AI, revive the semiconductor industry, and pull off a cancer moonshot.


One day in March 2023, Arati Prabhakar brought a laptop into the Oval Office and showed the future to Joe Biden. Six months later, the president issued a sweeping executive order that set a regulatory course for AI.

This all happened because ChatGPT had stunned the world. In an instant it became very, very obvious that the United States needed to speed up its efforts to regulate the AI industry—and adopt policies to take advantage of it. While the potential benefits were unlimited (Social Security customer service that works!), so were the potential downsides, like floods of disinformation or even, in the view of some, human extinction. Someone had to demonstrate that to the president.

The job fell to Prabhakar, because she is the director of the White House Office of Science and Technology Policy and holds cabinet status as the president’s chief science and technology adviser; she’d already been methodically educating top officials about the transformative power of AI. But she also has the experience and bureaucratic savvy to make an impact with the most powerful person in the world.

Born in India and raised in Texas, Prabhakar has a PhD in applied physics from Caltech and previously ran two US agencies: the National Institute of Standards and Technology and the Department of Defense’s Advanced Research Projects Agency. She also spent 15 years in Silicon Valley as a venture capitalist, including as president of Interval Research, Paul Allen’s legendary tech incubator, and has served as vice president or chief technology officer at several companies.

Prabhakar assumed her current job in October 2022—just in time to have AI dominate the agenda—and helped to push out that 20,000-word executive order, which mandates safety standards, boosts innovation, promotes AI in government and education, and even tries to mitigate job losses. She replaced biologist Eric Lander, who had resigned after an investigation concluded that he ran a toxic workplace. Prabhakar is the first person of color and first woman to be appointed director of the office.

We spoke at the kitchen table of Prabhakar’s Silicon Valley condo—a simply decorated space that, if my recollection is correct, is very unlike the OSTP offices in the ghostly, intimidating Eisenhower Executive Office Building in DC. Happily, the California vibes prevailed, and our conversation felt very unintimidating—even at ease. We talked about how Bruce Springsteen figured into Biden’s first ChatGPT demo, her hopes for a semiconductor renaissance in the US, and why Biden’s war on cancer is different from every other president’s war on cancer. I also asked her about the status of the unfilled role of chief technology officer for the nation—a single person, ideally kind of geeky, whose entire job revolves around the technology issues driving the 21st century.

Steven Levy: Why did you sign up for this job?

Arati Prabhakar: Because President Biden asked. He sees science and technology as enabling us to do big things, which is exactly how I think about their purpose.

What kinds of big things?

The mission of OSTP is to advance the entire science and technology ecosystem. We have a system that follows a set of priorities. We spend an enormous amount on R&D in health. But both public and corporate funding are largely focused on pharmaceuticals and medical devices, and very little on prevention or clinical care practices—the things that could change health as opposed to dealing with disease. We also have to meet the climate crisis. For technologies like clean energy, we don’t do a great job of getting things out of research and turning them into impact for Americans. It’s the unfinished business of this country.

It’s almost predestined that you’d be in this job. As soon as you got your physics degree at Caltech, you went to DC and got enmeshed in policy.

Yeah, I left the track I was supposed to be on. My family came here from India when I was 3, and I was raised in a household where my mom started sentences with, “When you get your PhD and become an academic …” It wasn’t a joke. Caltech, especially when I finished my degree in 1984, was extremely ivory tower, a place of worship for science. I learned a tremendous amount, but I also learned that my joy did not come from being in a lab at 2 in the morning and having that eureka moment. Just on a lark, I came to Washington for, quote-unquote, one year on a congressional fellowship. The big change was in 1986, when I went to Darpa as a young program manager. The mission of the organization was to use science and technology to change the arc of the future. I had found my home.


How did you wind up at Darpa?

I had written a study on microelectronics R&D. We were just starting to figure out that the semiconductor industry wasn’t always going to be dominated by the US. We worked on a bunch of stuff that didn’t pan out but also laid the groundwork for things that did. I was there for seven years, left for 19, and came back as director. Two decades later the portfolio was quite different, as it should be. I got to christen the first self-driving ship that could leave a port and navigate across open oceans without a single sailor on board. The other classic Darpa thing is to figure out what might be the foundation for new capabilities. I ended up starting a Biological Technologies Office. One of the many things that came out of that was the rapid development and distribution of mRNA vaccines, which never would have happened without the Darpa investment.

One difference today is that tech giants are doing a lot of their own R&D, though not necessarily for the big leaps Darpa was built for.

Every developed economy has this pattern. First there’s public investment in R&D. That’s part of how you germinate new industries and boost your economy. As those industries grow, so does their investment in R&D, and that ends up being dominant. There was a time when it was sort of 50-50 public-private. Now it’s much more private investment. For Darpa, of course, the mission is breakthrough technologies and capabilities for national security.

Are you worried about that shift?

It’s not a competition! Absolutely there’s been a huge shift. That private tech companies are building the leading edge LLMs today has huge implications. It’s a tremendous American advantage, but it has implications for how the technology is developed and used. We have to make sure we get what we need for public purposes.

Is the US government investing enough to make that happen?

I don’t think we are. We need to increase the funding. One component of the AI executive order is a National AI Research Resource. Researchers don’t have the access to data and computation that companies have. An initiative that Congress is considering, that the administration is very supportive of, would place something like $3 billion of resources with the National Science Foundation.

That’s a tiny percentage of the funds going into a company like OpenAI.

It costs a lot to build these leading-edge models. The question is, how do we have governance of advanced AI and how do we make sure we can use it for public purposes? The government has got to do more. We need help from Congress. But we also have to chart a different kind of relationship with industry than we’ve had in the past.

What might that look like?

Look at semiconductor manufacturing and the CHIPS Act.

We’ll get to that later. First let’s talk about the president. How deep is his understanding of things like AI?

Some of the most fun I’ve gotten on the job was working with the president and helping him understand where the technology is, like when we got to do the chatbot demonstrations for the president in the Oval Office.

What was that like?

Using a laptop with ChatGPT, we picked a topic that was of particular interest. The president had just been at a ceremony where he gave Bruce Springsteen the National Medal of Arts. He had joked about how Springsteen was from New Jersey, just across the river from his state, Delaware, and then he made reference to a lawsuit between those two states. I had never heard of it. We thought it would be fun to make use of this legal case. For the first prompt, we asked ChatGPT to explain the case to a first grader. Immediately these words start coming out like, “OK, kiddo, let me tell you, if you had a fight with someone …” Then we asked the bot to write a legal brief for a Supreme Court case. And out comes this very formal legal analysis. Then we wrote a song in the style of Bruce Springsteen about the case. We also did image demonstrations. We generated one of his dog Commander sitting behind the Resolute desk in the Oval Office.


So what was the president’s reaction?

He was like, “Wow, I can’t believe it could do that.” It wasn’t the first time he was aware of AI, but it gave him direct experience. It allowed us to dive into what was really going on. It seems like a crazy magical thing, but you need to get under the hood and understand that these models are computer systems that people train on data and then use to make startlingly good statistical predictions.

There are a ton of issues covered in the executive order. Which are the ones that you sense engaged the president most after he saw the demo?

The main thing that changed in that period was his sense of urgency. The task that he put out for all of us was to manage the risks so that we can see the benefits. We deliberately took the approach of dealing with a broad set of categories. That’s why you saw an extremely broad, bulky, large executive order. The risks to the integrity of information from deception and fraud, risks to safety and security, risks to civil rights and civil liberties, discrimination and privacy issues, and then risks to workers and the economy and IP—they’re all going to manifest in different ways for different people over different timelines. Sometimes we have laws that already address those risks—turns out it’s illegal to commit fraud! But other things, like the IP questions, don’t have clean answers.

There are a lot of provisions in the order that must meet set deadlines. How are you doing on those?

They are being met. We just rolled out all the 90-day milestones that were met. One part of the order I’m really getting a kick out of is the AI Council, which includes cabinet secretaries and heads of various regulatory agencies. When they come together, it’s not like most senior meetings where all the work has been done. These are meetings with rich discussion, where people engage with enthusiasm, because they know that we’ve got to get AI right.

There’s a fear that the technology will be concentrated among a few big companies. Microsoft essentially subsumed one leading startup, Inflection. Are you concerned about this centralization?

Competition is absolutely part of this discussion. The executive order talks specifically about that. One of the many dimensions of this issue is the extent to which power will reside only with those who are able to build these massive models.

The order calls for AI technology to embody equity and not include biases. A lot of people in DC are devoted to fighting diversity mandates. Others are uncomfortable with the government determining what constitutes bias. How does the government legally and morally put its finger on the scale?

Here’s what we’re doing. The president signed the executive order at the end of October. A couple of days later, the Office of Management and Budget came out with a memo—a draft of guidance about how all of government will use AI. Now we’re in the deep, wonky part, but this is where the rubber meets the road. It’s that guidance that will build in processes to make sure that when the government uses AI tools it’s not embedding bias.

That’s the strategy? You won’t mandate rules for the private sector but will impose them on the government, and because the government is such a big customer, companies will adopt them for everyone?

That can be helpful for setting a way that things work broadly. But there are also laws and regulations in place that ban discrimination in employment and lending decisions. So you can feel free to use AI, but it doesn’t get you off the hook.

Have you read Marc Andreessen’s techno-optimist manifesto?

No. I’ve heard of it.

There’s a line in there that basically says that if you’re slowing down the progress of AI, you are the equivalent of a murderer, because going forward without restraints will save lives.

That’s such an oversimplified view of the world. All of human history tells us that powerful technologies get used for good and for ill. The reason I love what I’ve gotten to do across four or five decades now is because I see over and over again that after a lot of work we end up making forward progress. That doesn’t happen automatically because of some cool new technology. It happens because of a lot of very human choices about how we use it, how we don’t use it, how we make sure people have access to it, and how we manage the downsides.

“I’m trying to figure out if you’re going to write a bunch of nice research papers, or you’re gonna move the needle on cancer.”

How are you encouraging the use of AI in government?

Right now AI is being used in government in more modest ways. Veterans Affairs is using it to get feedback from veterans to improve their services. The Social Security Administration is using it to accelerate the processing of disability claims.

Those are older programs. What’s next? Government bureaucrats spend a lot of time drafting documents. Will AI be part of that process?

That’s one place where you can see generative AI being used. Like in a corporation, we have to sort out how to use it responsibly, to make sure that sensitive data aren’t being leaked, and also that it’s not embedding bias. One of the things I’m really excited about in the executive order is an AI talent surge, saying to people who are experts in AI, “If you want to move the world, this is a great time to bring your skills to the government.” We published that on AI.gov.

How far along are you in that process?

We’re in the matchmaking process. We have great people coming in.

OK, let’s turn to the CHIPS Act, which is the Biden administration’s centerpiece for reviving the semiconductor industry in the US. The legislation provides more than $50 billion to grow the US-based chip industry, but it was designed to spur even more private investment, right?

That story starts decades ago with US dominance in semiconductor manufacturing. Over a few decades the industry got globalized, then it got very dangerously concentrated in one geopolitically fragile part of the world. A year and a half ago the president got Congress to act on a bipartisan basis, and we are crafting a completely different way to work with the semiconductor industry in the US.

Different in what sense?

It won’t work if the government goes off and builds its own fabs. So our partnership is one where companies decide what products are the right ones to build and where we will build them, and government incentives come on the basis of that. It’s the first time the US has done that with this industry, but it’s how it was done elsewhere around the world.

Some people say it’s a fantasy to think we can return to the day when the US had a significant share of chip and electronics manufacturing. Obviously, you feel differently.

We’re not trying to turn the clock back to the 1980s and saying, “Bring everything to the US.” Our strategy is to make sure that we have the robustness we need for the US and to make sure we’re meeting our national security needs.

The biggest grant recipient was Intel, which got $8.5 billion. Its CEO, Pat Gelsinger, said that the CHIPS Act wasn’t enough to make the US competitive, and we’d need a CHIPS 2. Is he right?

I don’t think anyone knows the answer yet. There are so many factors. The job right now is to build the fabs.

As the former head of Darpa, you were part of the military establishment. How do you view the sentiment among employees of some companies, like Google, that they should not take on military contracts?

It’s great for people in companies to be asking hard questions about how their work is used. I respect that. My personal view is that our national security is essential for all of us. Here in Silicon Valley, we completely take for granted that you get up every morning and try to build and fund businesses. That doesn’t happen by accident. It’s shaped by the work that we do in national security.

Your office is spearheading what the president calls a Cancer Moonshot. It seems every president in my lifetime had some project to cure cancer. I remember President Nixon talking about a war on cancer. Why should we believe this one?

We’ve made real progress. The president and the first lady set two goals. One is to cut the age-adjusted cancer death rate in half over 25 years. The other is to change the experience of people going through cancer. We’ve come to understand that cancer is a very complex disease with many different aspects. American health outcomes are not acceptable for the most wealthy country in the world. When I spoke to Danielle Carnival, who leads the Cancer Moonshot for us—she worked for the vice president in the Obama administration—I said to her, “I’m trying to figure out if you’re going to write a bunch of nice research papers or you’re gonna move the needle on cancer.” She talked about new therapies but also critically important work to expand access to early screening, because if you catch some of them early, it changes the whole story. When I heard that I said, “Good, we’re actually going to move the needle.”

Don’t you think there’s a hostility to science in much of the population?

People are more skeptical about everything. I do think that there has been a shift that is specific to some hot-button issues, like climate and vaccines or other infectious disease measures. Scientists want to explain more, but they should be humble. I don’t think it’s very effective to treat science as a religion. In year two of the pandemic, people kept saying that the guidance keeps changing, and all I could think was, “Of course the guidance is changing, our understanding is changing.” The moment called for a little humility from the research community rather than saying, “We’re the know-it-alls.”

Is it awkward to be in charge of science policy at a time when many people don’t believe in empiricism?

I don’t think it’s as extreme as that. People have always made choices not just based on hard facts but also on the factors in their lives and the network of thought that they are enmeshed in. We have to accept that people are complex.

Part of your job is to hire and oversee the nation’s chief technology officer. But we don’t have one. Why not?

That had already been a long endeavor when I came on board. That’s been a huge challenge. It’s very difficult to recruit, because those working in tech almost always have financial entanglements.

I find it hard to believe that in a country full of great talent there isn’t someone qualified for that job who doesn’t own stock or can’t get rid of their holdings. Is this just a low priority for you?

We spent a lot of time working on that and haven’t succeeded.

Are we going to go through the whole term without a CTO?

I have no predictions. I’ve got nothing more than that.

There are only a few months left in the current term of this administration. President Biden has given your role cabinet status. Have science and technology found their appropriate influence in government?

Yes, I see it very clearly. Look at some of the biggest changes—for example, the first really meaningful advances on climate, deploying solutions at a scale that the climate actually notices. I see these changes in every area and I’m delighted.

Source: https://www.wired.com/story/arati-prabhakar-ostp-biden-science-tech-adviser/

Microsoft’s Recall technology bears resemblance to George Orwell’s 1984 dystopia in several key aspects

Microsoft’s Recall technology, an AI tool designed to assist users by automatically reminding them of important information and tasks, bears resemblance to George Orwell’s “1984” dystopia in several key aspects:

1. Surveillance and Data Collection:
– 1984: The Party constantly monitors citizens through telescreens and other surveillance methods, ensuring that every action, word, and even thought aligns with the Party’s ideology.
– Recall Technology: While intended for productivity, Recall collects and analyzes large amounts of personal data, emails, and other communications to provide reminders. This level of data collection can raise concerns about privacy and the potential for misuse or unauthorized access to personal information.

2. Memory and Thought Control:
– 1984: The Party manipulates historical records and uses propaganda to control citizens’ memories and perceptions of reality, essentially rewriting history to fit its narrative.
– Recall Technology: By determining what information is deemed important and what reminders to provide, Recall could influence users’ focus and priorities. This selective emphasis on certain data could subtly shape users’ perceptions and decisions, akin to a form of soft memory control.

3. Dependence on Technology:
– 1984: The populace is heavily reliant on the Party’s technology for information, entertainment, and even personal relationships, which are monitored and controlled by the state.
– Recall Technology: Users might become increasingly dependent on Recall to manage their schedules and information, potentially diminishing their own capacity to remember and prioritize tasks independently. This dependence can create a vulnerability where the technology has significant control over daily life.

4. Loss of Personal Autonomy:
– 1984: Individual autonomy is obliterated as the Party dictates all aspects of life, from public behavior to private thoughts.
– Recall Technology: Although not as extreme, the automation and AI-driven suggestions in Recall could erode personal decision-making over time. As users rely more on technology to dictate their actions and reminders, their sense of personal control and autonomy may diminish.

5. Potential for Abuse:
– 1984: The totalitarian regime abuses its power to maintain control over the population, using technology as a tool of oppression.
– Recall Technology: In a worst-case scenario, the data collected by Recall could be exploited by malicious actors or for unethical purposes. If misused by corporations or governments, it could lead to scenarios where users’ personal information is leveraged against them, echoing the coercive control seen in Orwell’s dystopia.

While Microsoft’s Recall technology is designed with productivity in mind, its potential implications for privacy, autonomy, and control over personal information draw unsettling parallels to the controlled and monitored society depicted in “1984.”

Why Elon Musk should consider integrating OpenAI’s ChatGPT “GPT-4o” as the operating system for a brand-new Tesla SUV – Here are the five biggest advantages to highlight

  1. Revolutionary User Interface and Experience:
    • Natural Language Interaction: GPT-4o’s advanced natural language processing capabilities allow for seamless, conversational interaction between the driver and the vehicle. This makes controlling the vehicle and accessing information more intuitive and user-friendly (a code sketch of this idea appears at the end of this section).
    • Personalized Experience: The AI can learn from individual driver behaviors and preferences, offering tailored suggestions for routes, entertainment, climate settings, and more, enhancing overall user satisfaction and engagement. 
  2. Enhanced Autonomous Driving and Safety:
    • Superior Decision-Making: GPT-4o can significantly enhance Tesla’s autonomous driving capabilities by processing and analyzing vast amounts of real-time data to make better driving decisions. This improves the safety, reliability, and efficiency of the vehicle’s self-driving features.
    • Proactive Safety Features: The AI can provide real-time monitoring of the vehicle’s surroundings and driver behavior, offering proactive alerts and interventions to prevent accidents and ensure passenger safety.
  3. Next-Level Infotainment and Connectivity:
    • Smart Infotainment System: With GPT-4o, the SUV’s infotainment system can offer highly intelligent and personalized content recommendations, including music, podcasts, audiobooks, and more, making long journeys more enjoyable.
    • Seamless Connectivity: The AI can integrate with a wide range of apps and services, enabling drivers to manage their schedules, communicate, and access information without distraction, thus enhancing productivity and convenience.
  4. Continuous Improvement and Future-Proofing:
    • Self-Learning Capabilities: GPT-4o continuously learns and adapts from user interactions and external data, ensuring that the vehicle’s performance and features improve over time. This results in an ever-evolving user experience that keeps getting better.
    • Over-the-Air Updates: Regular over-the-air updates from OpenAI ensure that the SUV remains at the forefront of technology, with the latest features, security enhancements, and improvements being seamlessly integrated.
  5. Market Differentiation and Brand Leadership:
    • Innovative Edge: Integrating GPT-4o positions Tesla’s new SUV as a cutting-edge vehicle, showcasing the latest in AI and automotive technology. This differentiates Tesla from competitors and strengthens its reputation as a leader in innovation.
    • Enhanced Customer Engagement: The unique AI-driven features and personalized experiences can drive stronger customer engagement and loyalty, attracting tech-savvy consumers and enhancing the overall brand image.

By leveraging these advantages, Tesla can create a groundbreaking SUV that not only meets but exceeds consumer expectations, setting new standards for the automotive industry and reinforcing Tesla’s position as a pioneer in automotive and AI technology.
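To make the conversational-control idea in point 1 concrete: the hypothetical sketch below maps a driver’s utterance to a vehicle action with GPT-4o function calling. The OpenAI client calls follow the real Chat Completions API, but set_cabin_temperature and everything vehicle-side is invented for illustration; no such Tesla integration exists.

    # Hypothetical sketch: driver utterance -> vehicle action via GPT-4o
    # function calling. Only the OpenAI calls are real API shapes;
    # set_cabin_temperature is an invented, illustrative vehicle function.
    import json
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "set_cabin_temperature",  # hypothetical vehicle API
            "description": "Set the cabin climate target in degrees Celsius.",
            "parameters": {
                "type": "object",
                "properties": {"celsius": {"type": "number"}},
                "required": ["celsius"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "I'm freezing, warm it up to 23"}],
        tools=tools,
    )

    message = resp.choices[0].message
    if message.tool_calls:  # the model chose to call the vehicle function
        call = message.tool_calls[0]
        args = json.loads(call.function.arguments)
        print(f"Vehicle would set cabin to {args['celsius']} degrees C")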

Real-World Use Cases for Apple’s Vision Pro + Version 2 – with the new operating system ChatGPT “GPT-4o”

The integration of advanced AI like OpenAI’s GPT-4o into Apple’s Vision Pro + Version 2 can significantly enhance its vision understanding capabilities.
Here are ten possible use cases:

1. Augmented Reality (AR) Applications:
– Interactive AR Experiences: Enhance AR applications by providing real-time object recognition and interaction. For example, users can point the device at a historical landmark and receive detailed information and interactive visuals about it.
– AR Navigation: Offer real-time navigation assistance in complex environments like malls or airports, overlaying directions onto the user’s view.

2. Enhanced Photography and Videography:
– Intelligent Scene Recognition: Automatically adjust camera settings based on the scene being captured, such as landscapes, portraits, or low-light environments, ensuring optimal photo and video quality.
– Content Creation Assistance: Provide suggestions and enhancements for capturing creative content, such as framing tips, real-time filters, and effects.

3. Healthcare and Medical Diagnosis:
– Medical Imaging Analysis: Assist in analyzing medical images (e.g., X-rays, MRIs) to identify potential issues, providing preliminary diagnostic support to healthcare professionals.
– Remote Health Monitoring: Enable remote health monitoring by analyzing visual data from wearable devices to track health metrics and detect anomalies.

4. Retail and Shopping:
– Virtual Try-Ons: Allow users to virtually try on clothing, accessories, or cosmetics using the device’s camera, enhancing the online shopping experience.
– Product Recognition: Identify products in stores and provide information, reviews, and price comparisons, helping users make informed purchasing decisions.

5. Security and Surveillance:
– Facial Recognition: Enhance security systems with facial recognition capabilities for authorized access and threat detection.
– Anomaly Detection: Monitor and analyze security footage to detect unusual activities or potential security threats in real-time.

6. Education and Training:
– Interactive Learning: Use vision understanding to create interactive educational experiences, such as identifying objects or animals in educational content and providing detailed explanations.
– Skill Training: Offer real-time feedback and guidance for skills training, such as in sports or technical tasks, by analyzing movements and techniques.

7. Accessibility and Assistive Technology:
– Object Recognition for the Visually Impaired: Help visually impaired users navigate their surroundings by identifying objects and providing auditory descriptions (a sketch of this use case follows after the list).
– Sign Language Recognition: Recognize and translate sign language in real-time, facilitating communication for hearing-impaired individuals.

8. Home Automation and Smart Living:
– Smart Home Integration: Recognize household items and provide control over smart home devices. For instance, identifying a lamp and allowing users to turn it on or off via voice commands.
– Activity Monitoring: Monitor and analyze daily activities to provide insights and recommendations for improving household efficiency and safety.

9. Automotive and Driver Assistance:
– Driver Monitoring: Monitor driver attentiveness and detect signs of drowsiness or distraction, providing alerts to enhance safety.
– Object Detection: Enhance autonomous driving systems with better object detection and classification, improving vehicle navigation and safety.

10. Environmental Monitoring:
– Wildlife Tracking: Use vision understanding to monitor and track wildlife in natural habitats for research and conservation efforts.
– Pollution Detection: Identify and analyze environmental pollutants or changes in landscapes, aiding in environmental protection and management.

These use cases demonstrate the broad potential of integrating advanced vision understanding capabilities into Apple’s Vision Pro + Version 2, enhancing its functionality across various domains and providing significant value to users.
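As an illustration of use case 7, the sketch below sends a camera frame to GPT-4o and asks for a spoken-style description. The image-input message format follows OpenAI’s Chat Completions API; capture_camera_frame is a hypothetical stand-in for a Vision Pro camera call, since Apple’s device APIs are not assumed here.

    # Sketch of use case 7: describe what the headset camera sees.
    # The image-input message shape is OpenAI's; capture_camera_frame()
    # is a hypothetical device call, not a real Vision Pro API.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def describe_scene(jpeg_bytes: bytes) -> str:
        b64 = base64.b64encode(jpeg_bytes).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Briefly describe the objects in front of me."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    # jpeg = capture_camera_frame()  # hypothetical camera call
    # print(describe_scene(jpeg))    # read aloud via on-device text-to-speech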

Why Apple uses ChatGPT “GPT-4o” as its new operating system for Apple’s Vision Pro + Version 2

Apple’s Vision Pro + Version 2, utilizing OpenAI’s ChatGPT “GPT-4o” as its operating system, offers several compelling marketing benefits. Here are the key advantages to highlight:

1. Revolutionary User Interface:
– Conversational AI: GPT-4o’s advanced natural language processing capabilities allow for a conversational user interface, making interactions with Vision Pro + more intuitive and user-friendly.
– Personalized Interactions: The AI can provide highly personalized responses and suggestions based on user behavior and preferences, enhancing user satisfaction and engagement.

2. Unmatched Productivity:
– AI-Driven Multitasking: GPT-4o can manage and streamline multiple tasks simultaneously, significantly boosting productivity by handling scheduling, reminders, and real-time information retrieval seamlessly.
– Voice-Activated Efficiency: Hands-free operation through advanced voice commands allows users to multitask efficiently, whether they are working, driving, or engaged in other activities.

3. Advanced Accessibility:
– Inclusive Design: GPT-4o enhances accessibility with superior voice recognition, understanding diverse speech patterns, and offering multilingual support, making Vision Pro + more accessible to a broader audience.
– Adaptive Assistance: The AI can provide context-aware assistance to users with disabilities, further promoting inclusivity and ease of use.

4. Superior Integration and Ecosystem:
– Apple Ecosystem Synergy: GPT-4o integrates seamlessly with other Apple devices and services, offering a cohesive and interconnected user experience across the Apple ecosystem.
– Unified User Experience: Users can enjoy a consistent and unified experience across all their Apple devices, enhancing brand loyalty and overall user satisfaction.

5. Enhanced Security and Privacy:
– Secure Interactions: Emphasize GPT-4o’s robust security measures to ensure user data privacy and protection, leveraging OpenAI’s commitment to ethical AI practices.
– Trustworthy AI: Highlight OpenAI’s dedication to ethical AI usage, reinforcing user trust in the AI-driven functionalities of Vision Pro +.

6. Market Differentiation:
– Innovative Edge: Position Vision Pro + as a cutting-edge product that stands out in the market due to its integration with GPT-4o, setting it apart from competitors.
– Leadership in AI: Showcase Apple’s leadership in technology innovation by leveraging OpenAI’s state-of-the-art advancements in AI.

7. Future-Proofing:
– Continuous Innovation: Regular updates from OpenAI ensure that Vision Pro + remains at the forefront of AI technology, with continuous improvements and new features.
– Scalable Solutions: The AI platform’s scalability allows for future enhancements, ensuring the product remains relevant and competitive over time.

8. Customer Engagement:
– Proactive Support: GPT-4o can offer proactive customer support and real-time problem-solving, leading to higher customer satisfaction and loyalty.
– Engaging Experiences: The AI can create engaging and interactive experiences, making the device more enjoyable and useful for daily activities.

9. Enhanced Creativity:
– Creative Assistance: GPT-4o can assist users with creative tasks such as content creation, brainstorming, and project management, providing valuable support for both personal and professional use.
– Innovative Features: Highlight the unique AI-driven features that empower users to explore new creative possibilities, enhancing the appeal of Vision Pro +.

10. Efficient Learning and Adaptation:
– User Learning: GPT-4o continuously learns from user interactions, becoming more efficient and effective over time, offering a progressively improving user experience.
– Adaptive Technology: The AI adapts to user needs and preferences, ensuring that the device remains relevant and useful in a variety of contexts.

By leveraging these benefits, Apple can market the Vision Pro + Version 2 as a pioneering product that offers unparalleled user experience, productivity, and innovation, driven by the advanced capabilities of OpenAI’s GPT-4o.

It’s the End of Google Search As We Know It

Google is rethinking its most iconic and lucrative product by adding new AI features to search. One expert tells WIRED it’s “a change in the world order.”

Google Search is about to fundamentally change—for better or worse. To align with Alphabet-owned Google’s grand vision of artificial intelligence, and prompted by competition from AI upstarts like ChatGPT, the company’s core product is getting reorganized, more personalized, and much more summarized by AI.

At Google’s annual I/O developer conference in Mountain View, California, today, Liz Reid showed off these changes, setting her stamp early on in her tenure as the new head of all things Google search. (Reid has been at Google a mere 20 years, where she has worked on a variety of search products.) Her AI-soaked demo was part of a broader theme throughout Google’s keynote, led primarily by CEO Sundar Pichai: AI is now underpinning nearly every product at Google, and the company only plans to accelerate that shift.

“In the era of Gemini we think we can make a dramatic amount of improvements to search,” Reid said in an interview with WIRED ahead of the event, referring to the flagship generative AI model launched late last year. “People’s time is valuable, right? They deal with hard things. If you have an opportunity with technology to help people get answers to their questions, to take more of the work out of it, why wouldn’t we want to go after that?”

It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI.

These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.

Google’s search overhaul comes at a time when critics are becoming increasingly vocal about what feels to some like a degraded search experience, and for the first time in a long time, the company is feeling the heat of competition, from the massive mashup between Microsoft and OpenAI. Smaller startups like Perplexity, You.com, and Brave have also been riding the generative AI wave and getting attention, if not significant mindshare yet, for the way they’ve rejiggered the whole concept of search.

Automatic Answers

Google says it has made a customized version of its Gemini AI model for these new Search features, though it declined to share any information about the size of this model, its speeds, or the guardrails it has put in place around the technology.

This search-specific spin on Gemini will power at least a few different elements of the new Google Search. AI Overviews, which Google has already been experimenting with in its labs, is likely the most significant. AI-generated summaries will now appear at the top of search results.

One example from WIRED’s testing: In response to the query “Where is the best place for me to see the northern lights?” Google will, instead of listing web pages, tell you in authoritative text that the best places to see the northern lights, aka the aurora borealis, are in the Arctic Circle in places with minimal light pollution. It will also offer a link to NordicVisitor.com. But then the AI continues yapping on below that, saying “Other places to see the northern lights include Russia and Canada’s northwest territories.”

Reid says that AI Overviews like this won’t show up for every search result, even if the feature is now becoming more prevalent. It’s reserved for more complex questions. Every time a person searches, Google is attempting to make an algorithmic value judgment behind the scenes as to whether it should serve up AI-generated answers or a conventional blue link to click. “If you search for Walmart.com, you really just want to go to Walmart.com,” Reid says. “But if you have an extremely customized question, that’s where we’re going to bring this.”
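Google has not said how that behind-the-scenes judgment is made, so the following is purely speculative: a crude router in the spirit of Reid’s examples, sending navigational queries to plain links and complex, personalized questions to an AI answer.

    # Purely speculative heuristic - Google has not disclosed its logic.
    # Navigational queries get classic links; complex, personalized
    # questions become candidates for an AI-generated overview.
    def wants_ai_overview(query: str) -> bool:
        q = query.lower().strip()
        # Navigational: the user just wants a site ("walmart.com").
        if q.endswith((".com", ".org", ".net")) or len(q.split()) <= 2:
            return False
        complexity_cues = ("best", "for me", "how do i", "compare", "under $")
        return any(cue in q for cue in complexity_cues)

    print(wants_ai_overview("walmart.com"))                          # False
    print(wants_ai_overview("best hiking boots for me under $150"))  # True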

AI Overviews are rolling out this week to all Google search users in the US. The feature will come to more countries by the end of the year, Reid said, which means more than a billion people will see AI Overviews in their search results. They will appear across all platforms—the web, mobile, and as part of the search engine experience in browsers, such as when people search through Google on Safari.

Another update coming to search is a function for planning ahead. You can, for example, ask Google to meal-plan for you, or to find a pilates studio nearby that’s offering a class with an introductory discount. In the Googley-eyed future of search, an AI agent can round up a few studios nearby, summarize reviews of them, and plot out the time it would take someone to walk there. This is one of Google’s most obvious advantages over upstart search engines, which don’t have anything close to the troves of reviews, mapping data, or other knowledge that Google has, and may not be able to tap into APIs for real-time or local information so easily.
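Hypothetically, such a planning agent boils down to orchestrating a business-data lookup, a review summarizer, and a travel-time estimate. The sketch below stubs all three, since Google’s actual pipeline is not public; every function here is a stand-in.

    # Speculative outline of the 'planning ahead' agent: round up nearby
    # studios, summarize their reviews, estimate the walk. All data
    # sources are stubbed stand-ins, not real Google APIs.
    def find_studios(near: str) -> list[dict]:
        return [{"name": "Core Studio", "reviews": ["Friendly intro class."],
                 "walk_minutes": 12}]   # stub for Maps/business data

    def summarize(reviews: list[str]) -> str:
        return reviews[0]               # stub for an LLM review summarizer

    def plan_pilates_visit(location: str) -> list[dict]:
        return [{"name": s["name"],
                 "review_summary": summarize(s["reviews"]),
                 "walk_minutes": s["walk_minutes"]}
                for s in find_studios(location)]

    print(plan_pilates_visit("downtown Dallas"))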

The most jarring change that Google has been exploring in its Search Labs is an “AI-organized” results page. This at first glance looks to eschew the blue-links search experience entirely.

One example provided by Reid: A search for where to go for an anniversary dinner in the greater Dallas area would return a page with a few “chips” or buttons at the top to refine the results. Those might include categories like Dine-In, Takeout, and Open Now. Below that might be a sponsored result—Google’s gonna ad—and then a grouping of what Google judges to be “anniversary-worthy restaurants” or “romantic steakhouses.” That might be followed by some suggested questions to tweak the search even more, like, “Is Dallas a romantic city?”

AI-organized search is still being rolled out, but it will start appearing in the US in English “in the coming weeks.” So will an enhanced video search option, like Google Lens on steroids, where you can point your phone’s camera at an object like a broken record player and ask how to fix it.

If all these new AI features sound confusing, you might have missed Google’s latest galaxy-brain ambitions for what was once a humble text box. Reid makes clear that she thinks most consumers assume Google Search is just one thing, where in fact it’s many things to different people, who all search in different ways.

“That’s one of the reasons why we’re excited about working on some of the AI-organized results pages,” she said. “Like, how do you make sense of space? The fact that you want lots of different content is great. But is it as easy as it can be yet in terms of browsing through and consuming the information?”

But by generating AI Overviews—and by determining when those overviews should appear—Google is essentially deciding what is a complex question and what is not, and then making a judgment on what kind of web content should inform its AI-generated summary. Sure, it’s a new era of search where search does the work for you; it’s also a search bot that has the potential to algorithmically favor one kind of result over others.

“One of the biggest changes to happen in search with these AI models is that the AI actually creates a kind of informed opinion,” says Jim Yu, the executive chairman of BrightEdge, a search engine optimization firm that has been closely monitoring web traffic for more than 17 years. “The paradigm of search for the last 20 years has been that the search engine pulls a lot of information and gives you the links. Now the search engine does all the searches for you and summarizes the results and gives you a formative opinion.”
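Yu’s before-and-after can be put in schematic form. In the sketch below, retrieve() and llm() are stand-in stubs so the comparison runs; the point is only the shape of the change, from handing back links to synthesizing a single answer from them.

    # Schematic of the paradigm shift Yu describes. retrieve() and llm()
    # are stubs, not real search or model APIs.
    def retrieve(query: str) -> list[str]:
        return ["https://example.com/a", "https://example.com/b"]  # stub index

    def llm(prompt: str) -> str:
        return "One synthesized answer - an 'informed opinion'."   # stub model

    def search_old(query: str) -> list[str]:
        return retrieve(query)  # the engine hands you links; you read them

    def search_new(query: str) -> str:
        pages = retrieve(query)
        return llm(f"Summarize for {query!r} using: {pages}")  # it reads for you

    print(search_old("northern lights"))
    print(search_new("northern lights"))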

Doing that raises the stakes for Google’s search results. When algorithms are deciding that what a person needs is one coagulated answer, instead of coughing up several links for them to then click through and read, errors are more consequential. Gemini has not been immune to hallucinations—instances where the AI shares blatantly wrong or made-up information.

Last year a writer for The Atlantic asked Google to name an African country beginning with the letter “K,” and the search engine responded with a snippet of text—originally generated by ChatGPT—that none of the countries in Africa begin with the letter K, clearly overlooking Kenya. Google’s AI image-generation tool was very publicly criticized earlier this year when it depicted some historical figures, such as George Washington, as Black. Google temporarily paused that tool.

New World Order

Google’s reimagined version of AI search shoves the famous “10 blue links” it used to provide on results pages further into the rearview. First ads and info boxes began to take priority at the top of Google’s pages; now, AI-generated overviews and categories will take up a good chunk of search real estate. And web publishers and content creators are nervous about these changes—rightfully.

The research firm Gartner predicted earlier this year that by 2026, traditional search engine volume will drop by 25 percent, as a more “agent”-led search approach, in which AI models retrieve and generate more direct answers, takes hold.

“Generative AI solutions are becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines,” Alan Antin, a vice president analyst at Gartner, said in a statement that accompanied the report. “This will force companies to rethink their marketing channels strategy.”

What does that mean for the web? “It’s a change in the world order,” says Yu, of BrightEdge. “We’re at this moment where everything in search is starting to change with AI.”

Eight months ago BrightEdge developed something it calls a generative parser, which monitors what happens when searchers interact with AI-generated results on the web. Yu says over the past month the parser has detected that Google is less frequently asking people if they want an AI-generated answer, which was part of the experimental phase of generative search, and more frequently assuming they do. “We think it shows they have a lot more confidence that you’re going to want to interact with AI in search, rather than prompting you to opt in to an AI-generated result.”

Changes to search also have major implications for Google’s advertising business, which makes up the vast majority of the company’s revenue. In a recent quarterly earnings call, Pichai declined to share revenue from its generative AI experiments broadly. But as WIRED’s Paresh Dave pointed out, by offering more direct answers to searchers, “Google could end up with fewer opportunities to show search ads if people spend less time doing additional, more refined searches.” And the kinds of ads shown may have to evolve along with Google’s generative AI tools.

Google has said it will prioritize traffic to websites, creators, and merchants even as these changes roll out, but it hasn’t pulled back the curtain to reveal exactly how it plans to do this.

When asked in a press briefing ahead of I/O whether Google believes users will still click on links beyond the AI-generated web summary, Reid said that so far Google sees people “actually digging deeper, so they start with the AI overview and then click on additional websites.”

In the past, Reid continued, a searcher would have to poke around to eventually land on a website that gave them the info they wanted, but now Google will assemble an answer culled from various websites of its choosing. In the hive mind at the Googleplex, that will still spark exploration. “[People] will just use search more often, and that provides an additional opportunity to send valuable traffic to the web,” Reid said.

It’s a rosy vision for the future of search, one where being served bite-size AI-generated answers somehow prompts people to spend more time digging deeper into ideas. Google Search still promises to put the world’s information at our fingertips, but it’s less clear now who is actually tapping the keys.

Source: https://www.wired.com/story/google-io-end-of-google-search/