Category archive: Machine Learning

CMG Active Listening Scandal: American Tech Companies Involved

Overview

The CMG Active Listening scandal involves Cox Media Group (CMG), a major American media company, which admitted to using “Active Listening” technology that allegedly captures conversations through smartphone microphones and smart devices to target users with hyper-specific advertisements[1][2][3]. This revelation has sparked significant controversy and prompted responses from major American tech companies.

American Tech Companies Listed in CMG’s Presentations

Companies Named as Partners

According to leaked CMG pitch decks obtained by 404 Media, the following American tech giants were explicitly identified as CMG partners or clients in their Active Listening program[4][5][6]:

  • Google (Google Ads)
  • Meta (Facebook’s parent company)
  • Amazon (Amazon Ads)
  • Microsoft (including Bing search engine)

Tech Company Responses and Denials

Google’s Response:
Google took the most decisive action, removing CMG from its Partners Program immediately after the 404 Media report was published[1][4][5]. A Google spokesperson stated: “All advertisers must comply with all applicable laws and regulations as well as our Google Ads policies, and when we identify ads or advertisers that violate these policies, we will take appropriate action”[6].

Meta’s Response:
Meta denied any involvement in the Active Listening program and announced an investigation into whether CMG violated Facebook’s terms of service[7][4]. A Meta spokesperson told Newsweek: “Meta does not use your phone’s microphone for ads, and we’ve been public about this for years. We are reaching out to CMG to clarify that their program is not based on Meta data”[8][9].

Amazon’s Response:
Amazon completely denied any collaboration with CMG on the Active Listening program[4][6]. An Amazon spokesperson stated: “Amazon Ads has never worked with CMG on this program and has no plans to do so”[9][10].

Microsoft’s Response:
While Microsoft was mentioned in the pitch deck as a partner through its Bing search engine[4][11], the company has not provided a public response to the allegations at the time of these reports.

Apple’s Response:
Although not directly implicated as a CMG partner, Apple responded to the controversy by clarifying that such practices would violate its App Store guidelines[12]. Apple emphasized that apps must request “explicit user consent and provide a clear visual and/or audible indication when recording, logging, or otherwise making a record of user activity”[12].

How the Active Listening Technology Allegedly Works

According to CMG’s marketing materials, the Active Listening system operates by[1][13][14]:

  1. Real-time voice data collection through smartphone microphones, smart TVs, and other connected devices
  2. AI analysis of conversations to identify consumer intent and purchasing signals
  3. Data integration with behavioral data from over 470 sources
  4. Targeted advertising delivery through various platforms including streaming services, social media, and search engines
  5. Geographic targeting within 10-mile ($100/day) or 20-mile ($200/day) radius

CMG’s pitch deck boldly stated: “Yes, Our Phones Are Listening to Us” and claimed the technology could “identify buyers based on casual conversations in real-time”[14][15][9].

Legal and Privacy Implications

The scandal has raised significant legal and privacy concerns[16][17]. Senator Marsha Blackburn sent letters to CMG, Google, and Meta demanding answers about the extent of Active Listening deployment and requesting copies of the investor presentation[17].

CMG initially defended the practice as legal, claiming that microphone access permissions are typically buried in the fine print of lengthy terms of service agreements that users rarely read thoroughly[14][18]. However, privacy experts note that such practices would likely violate GDPR regulations in Europe and potentially face legal challenges in various US jurisdictions[19][16].

Current Status

Following the public backlash, CMG has:

  • Removed all references to Active Listening from its website[3][20]
  • Claimed the presentation contained “outdated materials for a product that CMG Local Solutions no longer offers”[7][8]
  • Stated that while the product “never listened to customers, it has been discontinued to avoid misperceptions”[8]

The scandal has reignited long-standing consumer suspicions about device surveillance and targeted advertising, with many users reporting eerily accurate ads that seemed to reflect their private conversations[13][21][22].

  1. https://hackerdose.com/news/leak-expose-media-giants-listening-software/
  2. https://variety.com/2023/digital/news/active-listening-marketers-smartphones-ad-targeting-cox-media-group-1235841007/
  3. https://www.emarketer.com/content/cox-media-active-listening-pitch-deck-ad-targeting-privacy
  4. https://mashable.com/article/cox-media-group-active-listening-google-microsoft-amazon-meta
  5. https://www.404media.co/heres-the-pitch-deck-for-active-listening-ad-targeting/
  6. https://timesofindia.indiatimes.com/technology/tech-news/are-smartphones-listening-to-your-conversations-what-google-facebook-and-amazon-have-to-say/articleshow/113059862.cms
  7. https://www.newsweek.com/phone-voice-assistants-active-listening-consent-targeted-ads-1949251
  8. https://www.tasnimnews.com/en/news/2024/09/08/3155089/cmg-leak-unveils-controversial-active-listening-ad-technology
  9. https://innovationsbrandinghouse.com/articles/so-our-phones-are-listening-after-all/
  10. https://news.itsfoss.com/ad-company-listening-to-microphone/
  11. https://winbuzzer.com/2024/09/05/ad-firms-pitch-deck-shows-phones-listen-for-targeted-ads-xcxwbn/
  12. https://www.imore.com/apple/apple-responds-to-claim-active-listening-can-hear-your-phone-conversations-and-use-them-to-target-you-with-advertising-calls-it-a-clear-violation-of-app-store-guidelines
  13. https://cybersecurityasia.net/how-advertisers-using-ai-listen-conversation/
  14. https://www.sify.com/ai-analytics/active-listening-feature-on-phones-raises-privacy-concerns/
  15. https://cybernews.com/tech/your-phone-listening-in/
  16. https://p4sc4l.substack.com/p/gpt-4o-it-is-very-likely-that-cmgs
  17. https://www.blackburn.senate.gov/2024/9/issues/technology/blackburn-probes-big-tech-platforms-after-cox-media-group-admits-it-listens-to-users-phone-conversations
  18. https://hwbusters.com/news/smartphones-are-spying-cox-media-group-admits-to-using-microphones-for-targeted-ads-without-user-knowledge/
  19. https://www.linkthat.eu/en/2024/10/active-listening-on-the-smartphone/
  20. https://www.cmswire.com/digital-marketing/active-listening-the-controversial-new-ad-targeting-tactic/
  21. https://nevtis.com/the-dark-side-of-targeted-advertising-facebook-partner-admits-to-using-smartphone-microphones-for-listening/
  22. https://www.independent.co.uk/tech/is-my-phone-listening-to-me-ad-microphone-privacy-b2606445.html
  23. https://www.sundogit.com/blog/big-tech-company-admits-its-listening-to-you/
  24. https://www.techdirt.com/2024/08/29/cox-caught-again-bragging-it-spies-on-users-with-embedded-device-microphones-to-sell-ads/
  25. https://www.ghacks.net/2024/09/04/report-alleges-that-microphones-on-devices-are-used-for-active-listening-to-deliver-targeted-ads/
  26. https://www.musicbusinessworldwide.com/tiktok-names-six-certified-sound-partners-including-songtradr-massivemusic-and-unitedmasters/
  27. https://www.storyboard18.com/how-it-works/cox-media-group-claims-to-have-capability-to-listen-to-ambient-conversations-of-consumers-for-targeted-ads-19154.htm
  28. https://mashable.com/article/tiktok-share-music-feature-apple-spotify
  29. https://www.hbs.edu/ris/download.aspx?name=25-014.pdf
  30. https://www.musicbusinessworldwide.com/tiktok-deepens-integration-with-spotify-and-apple-music-via-new-feature-that-lets-the-streamers-users-share-to-tiktok1/
  31. https://forums.musicplayer.com/topic/191420-confirmed-companies-are-listening-to-what-you-say-out-loud-near-device-microphones/

I Stared Into the AI Void With the SocialAI App

SocialAI is an online universe where everyone you interact with is a bot—for better or worse.


The first time I used SocialAI, I was sure the app was performance art. That was the only logical explanation for why I would willingly sign up to have AI bots named Blaze Fury and Trollington Nefarious, well, troll me.

Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement this week of the app read a little like a generative AI joke: “A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.”

But, no, SocialAI is real, if “real” applies to an online universe in which every single person you interact with is a bot.

There’s only one real human in the SocialAI equation. That person is you. The new iOS app is designed to let you post text like you would on Twitter or Threads. An ellipsis appears almost as soon as you do so, indicating that another person is loading up with ammunition, getting ready to fire back. Then, instantaneously, several comments appear, cascading below your post, each and every one of them written by an AI character. In the new version of the app, just rolled out today, these AIs also talk to each other.

When you first sign up, you’re prompted to choose these AI character archetypes: Do you want to hear from Fans? Trolls? Skeptics? Odd-balls? Doomers? Visionaries? Nerds? Drama Queens? Liberals? Conservatives? Welcome to SocialAI, where Trollita Kafka, Vera D. Nothing, Sunshine Sparkle, Progressive Parker, Derek Dissent, and Professor Debaterson are here to prop you up or tell you why you’re wrong.

Screenshot of the instructions for setting up the Social AI app.

Is SocialAI appalling, an echo chamber taken to the extreme? Only if you ignore the truth of modern social media: Our feeds are already filled with bots, tuned by algorithms, and monetized with AI-driven ad systems. As real humans we do the feeding: freely supplying social apps fresh content, baiting trolls, buying stuff. In exchange, we’re amused, and occasionally feel a connection with friends and fans.

As notorious crank Neil Postman wrote in 1985, “Anyone who is even slightly familiar with the history of communications knows that every new technology for thinking involves a trade-off.” The trade-off for social media in the age of AI is a slice of our humanity. SocialAI just strips the experience down to pure artifice.

“With a lot of social media, you don’t know who the bot is and who the real person is. It’s hard to tell the difference,” Sayman says. “I just felt like creating a space where you’re able to know that they’re 100 percent AIs. It’s more freeing.”

You might say Sayman has a knack for apps. As a teenage coder in Miami, Florida, during the financial crisis, Sayman gained fame for building a suite of apps to support his family, who had been considering moving back to Peru. Sayman later ended up working in product jobs at Facebook, Google, and Roblox. SocialAI was launched from Sayman’s own venture-backed app studio, Friendly Apps.

In many ways his app is emblematic of design thinking rather than pure AI innovation. SocialAI isn’t really a social app, but ChatGPT in the container of a social broadcast app. It’s an attempt to redefine how we interact with generative AI. Instead of limiting your ChatGPT conversation to a one-to-one chat window, Sayman posits, why not get your answers from many bots, all at the same time?
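Sayman hasn’t published how SocialAI works under the hood, but the pattern he describes — one human post fanned out to many persona-conditioned replies from OpenAI’s models — is easy to picture. Here is a minimal sketch of that fan-out, assuming the standard OpenAI Python client and entirely hypothetical persona prompts (SocialAI’s real archetypes and prompts are not public):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona prompts, purely for illustration.
PERSONAS = {
    "Trollita Kafka": "You are a sarcastic troll. Reply to the post in one biting sentence.",
    "Sunshine Sparkle": "You are a relentlessly supportive fan. Reply in one upbeat sentence.",
    "Professor Debaterson": "You are a skeptic. Point out the weakest claim in the post in one sentence.",
}

def reply_all(post: str) -> dict[str, str]:
    """Fan one user post out to every persona and collect their replies."""
    replies = {}
    for name, system_prompt in PERSONAS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": post},
            ],
            max_tokens=60,
        )
        replies[name] = resp.choices[0].message.content
    return replies

if __name__ == "__main__":
    for name, text in reply_all("Am I wrong to stay mad at my family forever?").items():
        print(f"{name}: {text}")
```

A production app would stream and rate-limit these calls, which is roughly what Sayman describes doing to keep his OpenAI bill under control.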

Over Zoom earlier this week, he explained to me how he thinks of generative AI like a smoothie if cups hadn’t yet been invented. You can still enjoy it from a bowl or plate, but those aren’t the right vessel. SocialAI, Sayman says, could be the cup.

Almost immediately Sayman laughed. “This is a terrible analogy,” he said.

Sayman is charming and clearly thinks a lot about how apps fit into our world. He’s a team of one right now, relying mostly on OpenAI’s technology to power SocialAI, blended with some other custom AI models. (Sayman rate-limits the app so that he doesn’t go broke in “three minutes” from the fees he’s paying to OpenAI. He also hasn’t quite yet figured out how he’ll make money off of SocialAI.) He knows he’s not the first to launch an AI-character app; Meta has burdened its apps with AI characters, and the Character AI app, which was just quasi-acquired by Google, lets you interact with a huge number of AI personas.

But Sayman is hand-wavy about this competition. “I don’t see my app as, you’re going to be interacting with characters who you think might be real,” he says. “This is really for seeking answers to conflict resolution, or figuring out if what you’re trying to say is hurtful and get feedback before you post it somewhere else.”

“Someone joked to me that they thought Elon Musk should use this, so he could test all of his posts before he posts them on X,” Sayman said.

I’d actually tried that, tossing some of the most trafficked tweets from Elon Musk and the Twitter icon Dril into my SocialAI feed. I shared a news story from WIRED; the link was unclickable, because SocialAI doesn’t support link-sharing. (There’s no one to share it with, anyway.) I repurposed the viral “Bean Dad” tweet and purported to be a Bean Mom on SocialAI, urging my 9-year-old daughter to open a can of beans herself as a life lesson. I posted political content. I asked my synthetic SocialAI followers who else I should follow.

The bots obliged and flooded my feed with comments, like Reply Guys on steroids. But their responses lacked nutrients or human messiness. Mostly, I told Sayman, it all felt too uncanny, that I had a hard time crossing that chasm and placing value or meaning on what the bots had to say.

Sayman encouraged me to craft more posts along the lines of Reddit’s “Am I the Asshole” posts: Am I wrong in this situation? Should I apologize to a friend? Should I stay mad at my family forever? This, Sayman says, is the real purpose of SocialAI. I tried it. For a second the SocialAI bot comments lit up my lizard brain, my id and superego, the “I’m so right” instinct. Then Trollita Kafka told me, essentially, that I was in fact the asshole.

One aspect of SocialAI that clearly does not represent the dawn of a new era: Sayman has put out a minimum viable product without communicating important guidelines around privacy, content policies, or how SocialAI or OpenAI might use the data people provide along the way. (Move fast, break things, etc.) He says he’s not using anyone’s posts to train his own AI models, but notes that users are still subject to OpenAI’s data-training terms, since he uses OpenAI’s API. You also can’t mute or block a bot that has gone off the rails.

At least, though, your feed is always private by default. You don’t have any “real” followers. My editor at WIRED, for example, could join SocialAI himself but will never be able to follow me or see that I copied and pasted an Elon Musk tweet about wanting to buy Coca-Cola and put the cocaine back in it, just as he could not follow my ChatGPT account and see what I’m enquiring about there.

As a human on SocialAI, you will never interact with another human. That’s the whole point. It’s your own little world with your own army of AI characters ready to bolster you or tear you down. You may not like it, but it might be where you’re headed anyway. You might already be there.

Source: https://www.wired.com/story/socialai-app-ai-chatbots-chatgpt/

Our basic assumptions about photos capturing reality are about to go up in smoke

Source: https://www.theverge.com/2024/8/22/24225972/ai-photo-era-what-is-reality-google-pixel-9

An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely fucking fake.

Anyone who buys a Pixel 9 — the latest model of Google’s flagship phone, available starting this week — will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is all but certain to become the norm, with similar features already available on competing devices and rolling out on others in the near future. When a smartphone “just works,” it’s usually a good thing; here, it’s the entire problem in the first place.

Photography has been used in the service of deception for as long as it has existed. (Consider Victorian spirit photos, the infamous Loch Ness monster photograph, or Stalin’s photographic purges of IRL-purged comrades.) But it would be disingenuous to say that photographs have never been considered reliable evidence. Everyone who is reading this article in 2024 grew up in an era where a photograph was, by default, a representation of the truth. A staged scene with movie effects, a digital photo manipulation, or more recently, a deepfake — these were potential deceptions to take into account, but they were outliers in the realm of possibility. It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph. Fake was the exception, not the rule. 

If I say Tiananmen Square, you will, most likely, envision the same photograph I do. This also goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully express. There was no reason to express why these photos matter, why they are so pivotal, why we put so much value in them. Our trust in photography was so deep that when we spent time discussing veracity in images, it was more important to belabor the point that it was possible for photographs to be fake, sometimes. 

This is all about to flip — the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.

 
A real photo of a stream.
Edited with Google’s Magic Editor.

A real photo of a person in a living room (with their face obscured).
Edited with Google’s Magic Editor.

No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach upon your residential neighborhood, how do you communicate to friends and acquaintances the thickness of the smoke outside? 

And up until now, the onus has largely been on those denying the truth of a photo to prove their claims. The flat-earther is out of step with the social consensus not because they do not understand astrophysics — how many of us actually understand astrophysics, after all? — but because they must engage in a series of increasingly elaborate justifications for why certain photographs and videos are not real. They must invent a vast state conspiracy to explain the steady output of satellite photographs that capture the curvature of the Earth. They must create a soundstage for the 1969 Moon landing. 

We have taken for granted that the burden of proof is upon them. In the age of the Pixel 9, it might be best to start brushing up on our astrophysics. 

For the most part, the average image created by these AI tools will, in and of itself, be pretty harmless — an extra tree in a backdrop, an alligator in a pizzeria, a silly costume interposed over a cat. In aggregate, the deluge upends how we treat the concept of the photo entirely, and that in itself has tremendous repercussions. Consider, for instance, that the last decade has seen extraordinary social upheaval in the United States sparked by grainy videos of police brutality. Where the authorities obscured or concealed reality, these videos told the truth. 

The persistent cry of “Fake News!” from Trumpist quarters presaged the beginning of this era of unmitigated bullshit, in which the impact of the truth will be deadened by the firehose of lies. The next Abu Ghraib will be buried under a sea of AI-generated war crime snuff. The next George Floyd will go unnoticed and unvindicated.

 
A real photo of an empty street.
Edited with Google’s Magic Editor.

A real photo inside a New York City subway station.
Edited with Google’s Magic Editor.

You can already see the shape of what’s to come. In the Kyle Rittenhouse trial, the defense claimed that Apple’s pinch-to-zoom manipulates photos, successfully persuading the judge to put the burden of proof on the prosecution to show that zoomed-in iPhone footage was not AI-manipulated. More recently, Donald Trump falsely claimed that a photo of a well-attended Kamala Harris rally was AI-generated — a claim that was only possible to make because people were able to believe it.

Even before AI, those of us in the media had been working in a defensive crouch, scrutinizing the details and provenance of every image, vetting for misleading context or photo manipulation. After all, every major news event comes with an onslaught of misinformation. But the incoming paradigm shift implicates something much more fundamental than the constant grind of suspicion that is sometimes called digital literacy.

Google understands perfectly well what it is doing to the photograph as an institution — in an interview with Wired, the group product manager for the Pixel camera described the editing tool as “help[ing] you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.” A photo, in this world, stops being a supplement to fallible human recollection, but instead a mirror of it. And as photographs become little more than hallucinations made manifest, the dumbest shit will devolve into a courtroom battle over the reputation of the witnesses and the existence of corroborating evidence.

This erosion of the social consensus began before the Pixel 9, and it will not be carried forth by the Pixel 9 alone. Still, the phone’s new AI capabilities are of note not just because the barrier to entry is so low, but because the safeguards we ran into were astonishingly anemic. The industry’s proposed AI image watermarking standard is mired in the usual standards slog, and Google’s own much-vaunted AI watermarking system was nowhere in sight when The Verge tried out the Pixel 9’s Magic Editor. The photos that are modified with the Reimagine tool simply have a line of removable metadata added to them. (The inherent fragility of this kind of metadata was supposed to be addressed by Google’s invention of the theoretically unremovable SynthID watermark.) Google told us that the outputs of Pixel Studio — a pure prompt generator that is closer to DALL-E — will be tagged with a SynthID watermark; ironically, we found the capabilities of the Magic Editor’s Reimagine tool, which modifies existing photos, were much more alarming.
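To see why metadata-only provenance is so fragile, consider how little it takes to discard it. The sketch below is generic — it assumes Pillow and hypothetical file names, and makes no claim about the specific tags the Pixel 9 writes — but re-saving only the pixel data is enough to drop whatever metadata the original file carried:

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only the pixel data into a fresh image, discarding EXIF, XMP, and other embedded tags."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)   # a new image object carries no metadata
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)

# Hypothetical file names, purely for illustration.
strip_metadata("reimagined.jpg", "reimagined_no_metadata.jpg")
```

Robust provenance has to survive exactly this kind of trivial laundering, which is why pixel-level watermarks such as SynthID matter far more than a removable line of metadata.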

 
Examples of famous photographs, digitally altered to demonstrate the implications of AI photography.
Image: Cath Virginia / The Verge, Neil Armstrong, Dorothea Lange, Joe Rosenthal
 

Google claims the Pixel 9 will not be an unfettered bullshit factory but is thin on substantive assurances. “We design our Generative AI tools to respect the intent of user prompts and that means they may create content that may offend when instructed by the user to do so,” Alex Moriconi, Google communications manager, told The Verge in an email. “That said, it’s not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.” 

The policies are what you would expect — for example, you can’t use Google services to facilitate crimes or incite violence. Some attempted prompts returned the generic error message, “Magic Editor can’t complete this edit. Try typing something else.” (You can see throughout this story, however, several worrisome prompts that did work.) But when it comes down to it, standard-fare content moderation will not save the photograph from its incipient demise as a signal of truth.

We briefly lived in an era in which the photograph was a shortcut to reality, to knowing things, to having a smoking gun. It was an extraordinarily useful tool for navigating the world around us. We are now leaping headfirst into a future in which reality is simply less knowable. The lost Library of Alexandria could have fit onto the microSD card in my Nintendo Switch, and yet the cutting edge of technology is a handheld telephone that spews lies as a fun little bonus feature. 

We are fu**ed.

Microsoft’s Recall technology bears a resemblance to George Orwell’s 1984 dystopia in several key aspects

Microsoft’s Recall technology, an AI feature that continuously captures snapshots of a user’s screen and indexes them so that past activity can be searched and retrieved, bears a resemblance to George Orwell’s “1984” dystopia in several key aspects:

1. Surveillance and Data Collection:
– 1984: The Party constantly monitors citizens through telescreens and other surveillance methods, ensuring that every action, word, and even thought aligns with the Party’s ideology.
– Recall Technology: While intended for productivity, Recall captures and indexes large amounts of on-screen personal data, emails, and other communications so they can be searched later. This level of data collection can raise concerns about privacy and the potential for misuse or unauthorized access to personal information.

2. Memory and Thought Control:
– 1984: The Party manipulates historical records and uses propaganda to control citizens’ memories and perceptions of reality, essentially rewriting history to fit its narrative.
– Recall Technology: By determining which captured information is surfaced when users search their history, Recall could influence users’ focus and priorities. This selective emphasis on certain data could subtly shape users’ perceptions and decisions, akin to a form of soft memory control.

3. Dependence on Technology:
– 1984: The populace is heavily reliant on the Party’s technology for information, entertainment, and even personal relationships, which are monitored and controlled by the state.
– Recall Technology: Users might become increasingly dependent on Recall to retrieve and manage their information, potentially diminishing their own capacity to remember and prioritize tasks independently. This dependence can create a vulnerability where the technology has significant control over daily life.

4. Loss of Personal Autonomy:
– 1984: Individual autonomy is obliterated as the Party dictates all aspects of life, from public behavior to private thoughts.
– Recall Technology: Although not as extreme, the automation and AI-driven suggestions in Recall could erode personal decision-making over time. As users rely more on technology to dictate their actions and reminders, their sense of personal control and autonomy may diminish.

5. Potential for Abuse:
– 1984: The totalitarian regime abuses its power to maintain control over the population, using technology as a tool of oppression.
– Recall Technology: In a worst-case scenario, the data collected by Recall could be exploited by malicious actors or for unethical purposes. If misused by corporations or governments, it could lead to scenarios where users’ personal information is leveraged against them, echoing the coercive control seen in Orwell’s dystopia.

While Microsoft’s Recall technology is designed with productivity in mind, its potential implications for privacy, autonomy, and the influence over personal information draw unsettling parallels to the controlled and monitored society depicted in “1984.”

Why Elon Musk should consider integrating OpenAI’s ChatGPT “GPT-4o” as the operating system for a brand new Tesla SUV – Here are the five biggest advantages to highlight

  1. Revolutionary User Interface and Experience:
    • Natural Language Interaction: GPT-4o’s advanced natural language processing capabilities allow for seamless, conversational interaction between the driver and the vehicle. This makes controlling the vehicle and accessing information more intuitive and user-friendly.
    • Personalized Experience: The AI can learn from individual driver behaviors and preferences, offering tailored suggestions for routes, entertainment, climate settings, and more, enhancing overall user satisfaction and engagement. 
  2. Enhanced Autonomous Driving and Safety:
    • Superior Decision-Making: GPT-4o can significantly enhance Tesla’s autonomous driving capabilities by processing and analyzing vast amounts of real-time data to make better driving decisions. This improves the safety, reliability, and efficiency of the vehicle’s self-driving features.
    • Proactive Safety Features: The AI can provide real-time monitoring of the vehicle’s surroundings and driver behavior, offering proactive alerts and interventions to prevent accidents and ensure passenger safety.
  3. Next-Level Infotainment and Connectivity:
    • Smart Infotainment System: With GPT-4o, the SUV’s infotainment system can offer highly intelligent and personalized content recommendations, including music, podcasts, audiobooks, and more, making long journeys more enjoyable.
    • Seamless Connectivity: The AI can integrate with a wide range of apps and services, enabling drivers to manage their schedules, communicate, and access information without distraction, thus enhancing productivity and convenience.
  4. Continuous Improvement and Future-Proofing:
    • Self-Learning Capabilities: GPT-4o continuously learns and adapts from user interactions and external data, ensuring that the vehicle’s performance and features improve over time. This results in an ever-evolving user experience that keeps getting better.
    • Over-the-Air Updates: Regular over-the-air updates from OpenAI ensure that the SUV remains at the forefront of technology, with the latest features, security enhancements, and improvements being seamlessly integrated.
  5. Market Differentiation and Brand Leadership:
    • Innovative Edge: Integrating GPT-4o positions Tesla’s new SUV as a cutting-edge vehicle, showcasing the latest in AI and automotive technology. This differentiates Tesla from competitors and strengthens its reputation as a leader in innovation.
    • Enhanced Customer Engagement: The unique AI-driven features and personalized experiences can drive stronger customer engagement and loyalty, attracting tech-savvy consumers and enhancing the overall brand image.

By leveraging these advantages, Tesla can create a groundbreaking SUV that not only meets but exceeds consumer expectations, setting new standards for the automotive industry and reinforcing Tesla’s position as a pioneer in automotive and AI technology.
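To make the “natural language interaction” advantage above concrete: the usual way to connect a conversational model to vehicle controls is function calling, where the model turns a spoken request into a structured call that the car’s software executes. The sketch below is purely illustrative — it assumes the standard OpenAI Python client and a hypothetical set_climate vehicle function, and is not anything Tesla or OpenAI has announced:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical vehicle function exposed to the model; a real integration would
# route this call into the car's climate-control software.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "set_climate",
        "description": "Set the cabin temperature in degrees Celsius.",
        "parameters": {
            "type": "object",
            "properties": {"temperature_c": {"type": "number"}},
            "required": ["temperature_c"],
        },
    },
}]

def handle_driver_request(utterance: str) -> None:
    """Turn a spoken driver request into a structured vehicle-control call."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": utterance}],
        tools=TOOLS,
    )
    msg = resp.choices[0].message
    for call in msg.tool_calls or []:
        if call.function.name == "set_climate":
            args = json.loads(call.function.arguments)
            # Stand-in for the real vehicle API call.
            print(f"Setting cabin temperature to {args['temperature_c']} °C")

handle_driver_request("I'm freezing, warm it up to 23 degrees please.")
```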

Real World Use Cases for Apple’s Vision Pro + Version 2 – with the new operating system ChatGPT “GPT-4o”

The integration of advanced AI like OpenAI’s GPT-4o into Apple’s Vision Pro + Version 2 can significantly enhance its vision understanding capabilities.
Here are ten possible use cases:

1. Augmented Reality (AR) Applications:
– Interactive AR Experiences: Enhance AR applications by providing real-time object recognition and interaction. For example, users can point the device at a historical landmark and receive detailed information and interactive visuals about it.
– AR Navigation: Offer real-time navigation assistance in complex environments like malls or airports, overlaying directions onto the user’s view.

2. Enhanced Photography and Videography:
– Intelligent Scene Recognition: Automatically adjust camera settings based on the scene being captured, such as landscapes, portraits, or low-light environments, ensuring optimal photo and video quality.
– Content Creation Assistance: Provide suggestions and enhancements for capturing creative content, such as framing tips, real-time filters, and effects.

3. Healthcare and Medical Diagnosis:
– Medical Imaging Analysis: Assist in analyzing medical images (e.g., X-rays, MRIs) to identify potential issues, providing preliminary diagnostic support to healthcare professionals.
– Remote Health Monitoring: Enable remote health monitoring by analyzing visual data from wearable devices to track health metrics and detect anomalies.

4. Retail and Shopping:
– Virtual Try-Ons: Allow users to virtually try on clothing, accessories, or cosmetics using the device’s camera, enhancing the online shopping experience.
– Product Recognition: Identify products in stores and provide information, reviews, and price comparisons, helping users make informed purchasing decisions.

5. Security and Surveillance:
– Facial Recognition: Enhance security systems with facial recognition capabilities for authorized access and threat detection.
– Anomaly Detection: Monitor and analyze security footage to detect unusual activities or potential security threats in real-time.

6. Education and Training:
– Interactive Learning: Use vision understanding to create interactive educational experiences, such as identifying objects or animals in educational content and providing detailed explanations.
– Skill Training: Offer real-time feedback and guidance for skills training, such as in sports or technical tasks, by analyzing movements and techniques.

7. Accessibility and Assistive Technology:
– Object Recognition for the Visually Impaired: Help visually impaired users navigate their surroundings by identifying objects and providing auditory descriptions.
– Sign Language Recognition: Recognize and translate sign language in real-time, facilitating communication for hearing-impaired individuals.

8. Home Automation and Smart Living:
– Smart Home Integration: Recognize household items and provide control over smart home devices. For instance, identifying a lamp and allowing users to turn it on or off via voice commands.
– Activity Monitoring: Monitor and analyze daily activities to provide insights and recommendations for improving household efficiency and safety.

9. Automotive and Driver Assistance:
– Driver Monitoring: Monitor driver attentiveness and detect signs of drowsiness or distraction, providing alerts to enhance safety.
– Object Detection: Enhance autonomous driving systems with better object detection and classification, improving vehicle navigation and safety.

10. Environmental Monitoring:
– Wildlife Tracking: Use vision understanding to monitor and track wildlife in natural habitats for research and conservation efforts.
– Pollution Detection: Identify and analyze environmental pollutants or changes in landscapes, aiding in environmental protection and management.

These use cases demonstrate the broad potential of integrating advanced vision understanding capabilities into Apple’s Vision Pro + Version 2, enhancing its functionality across various domains and providing significant value to users.
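Several of these use cases — object recognition for visually impaired users most directly — reduce to sending a camera frame to a multimodal model and reading back a short description. Here is a rough sketch of that pattern, assuming the standard OpenAI Python client and GPT-4o’s image input; the Vision Pro integration itself is speculative and the file name is hypothetical:

```python
import base64
from openai import OpenAI

client = OpenAI()

def describe_frame(image_path: str) -> str:
    """Send one camera frame to GPT-4o and return a short spoken-style description."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the objects directly ahead and any obstacles, in one short sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=60,
    )
    return resp.choices[0].message.content

# Hypothetical frame captured from the headset's camera.
print(describe_frame("camera_frame.jpg"))
```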

Why Apple uses ChatGPT 4o as its new operating system for Apple’s Vision Pro + Version 2

Apple’s Vision Pro + Version 2, utilizing OpenAI’s ChatGPT “GPT-4o” as its operating system, offers several compelling marketing benefits. Here are the key advantages to highlight:

1. Revolutionary User Interface:
– Conversational AI: GPT-4o’s advanced natural language processing capabilities allow for a conversational user interface, making interactions with Vision Pro + more intuitive and user-friendly.
– Personalized Interactions: The AI can provide highly personalized responses and suggestions based on user behavior and preferences, enhancing user satisfaction and engagement.

2. Unmatched Productivity:
– AI-Driven Multitasking: GPT-4o can manage and streamline multiple tasks simultaneously, significantly boosting productivity by handling scheduling, reminders, and real-time information retrieval seamlessly.
– Voice-Activated Efficiency: Hands-free operation through advanced voice commands allows users to multitask efficiently, whether they are working, driving, or engaged in other activities.

3. Advanced Accessibility:
– Inclusive Design: GPT-4o enhances accessibility with superior voice recognition, understanding diverse speech patterns, and offering multilingual support, making Vision Pro + more accessible to a broader audience.
– Adaptive Assistance: The AI can provide context-aware assistance to users with disabilities, further promoting inclusivity and ease of use.

4. Superior Integration and Ecosystem:
– Apple Ecosystem Synergy: GPT-4o integrates seamlessly with other Apple devices and services, offering a cohesive and interconnected user experience across the Apple ecosystem.
– Unified User Experience: Users can enjoy a consistent and unified experience across all their Apple devices, enhancing brand loyalty and overall user satisfaction.

5. Enhanced Security and Privacy:
– Secure Interactions: Emphasize GPT-4o’s robust security measures to ensure user data privacy and protection, leveraging OpenAI’s commitment to ethical AI practices.
– Trustworthy AI: Highlight OpenAI’s dedication to ethical AI usage, reinforcing user trust in the AI-driven functionalities of Vision Pro +.

6. Market Differentiation:
– Innovative Edge: Position Vision Pro + as a cutting-edge product that stands out in the market due to its integration with GPT-4o, setting it apart from competitors.
– Leadership in AI: Showcase Apple’s leadership in technology innovation by leveraging OpenAI’s state-of-the-art advancements in AI.

7. Future-Proofing:
– Continuous Innovation: Regular updates from OpenAI ensure that Vision Pro + remains at the forefront of AI technology, with continuous improvements and new features.
– Scalable Solutions: The AI platform’s scalability allows for future enhancements, ensuring the product remains relevant and competitive over time.

8. Customer Engagement:
– Proactive Support: GPT-4o can offer proactive customer support and real-time problem-solving, leading to higher customer satisfaction and loyalty.
– Engaging Experiences: The AI can create engaging and interactive experiences, making the device more enjoyable and useful for daily activities.

9. Enhanced Creativity:
– Creative Assistance: GPT-4o can assist users with creative tasks such as content creation, brainstorming, and project management, providing valuable support for both personal and professional use.
– Innovative Features: Highlight the unique AI-driven features that empower users to explore new creative possibilities, enhancing the appeal of Vision Pro +.

10. Efficient Learning and Adaptation:
– User Learning: GPT-4o continuously learns from user interactions, becoming more efficient and effective over time, offering a progressively improving user experience.
– Adaptive Technology: The AI adapts to user needs and preferences, ensuring that the device remains relevant and useful in a variety of contexts.

By leveraging these benefits, Apple can market the Vision Pro + Version 2 as a pioneering product that offers unparalleled user experience, productivity, and innovation, driven by the advanced capabilities of OpenAI’s GPT-4o.

It’s the End of Google Search As We Know It

Source: https://www.wired.com/story/google-io-end-of-google-search/

Google is rethinking its most iconic and lucrative product by adding new AI features to search. One expert tells WIRED it’s “a change in the world order.”

Google Search is about to fundamentally change—for better or worse. To align with Alphabet-owned Google’s grand vision of artificial intelligence, and prompted by competition from AI upstarts like ChatGPT, the company’s core product is getting reorganized, more personalized, and much more summarized by AI.

At Google’s annual I/O developer conference in Mountain View, California, today, Liz Reid showed off these changes, setting her stamp early on in her tenure as the new head of all things Google search. (Reid has been at Google a mere 20 years, where she has worked on a variety of search products.) Her AI-soaked demo was part of a broader theme throughout Google’s keynote, led primarily by CEO Sundar Pichai: AI is now underpinning nearly every product at Google, and the company only plans to accelerate that shift.

“In the era of Gemini we think we can make a dramatic amount of improvements to search,” Reid said in an interview with WIRED ahead of the event, referring to the flagship generative AI model launched late last year. “People’s time is valuable, right? They deal with hard things. If you have an opportunity with technology to help people get answers to their questions, to take more of the work out of it, why wouldn’t we want to go after that?”

It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI.

These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.

Google’s search overhaul comes at a time when critics are becoming increasingly vocal about what feels to some like a degraded search experience, and for the first time in a long time, the company is feeling the heat of competition, from the massive mashup between Microsoft and OpenAI. Smaller startups like Perplexity, You.com, and Brave have also been riding the generative AI wave and getting attention, if not significant mindshare yet, for the way they’ve rejiggered the whole concept of search.
Automatic Answers

Google says it has made a customized version of its Gemini AI model for these new Search features, though it declined to share any information about the size of this model, its speeds, or the guardrails it has put in place around the technology.

This search-specific spin on Gemini will power at least a few different elements of the new Google Search. AI Overviews, which Google has already been experimenting with in its labs, is likely the most significant. AI-generated summaries will now appear at the top of search results.

One example from WIRED’s testing: In response to the query “Where is the best place for me to see the northern lights?” Google will, instead of listing web pages, tell you in authoritative text that the best places to see the northern lights, aka the aurora borealis, are in the Arctic Circle in places with minimal light pollution. It will also offer a link to NordicVisitor.com. But then the AI continues yapping on below that, saying “Other places to see the northern lights include Russia and Canada’s northwest territories.”

Reid says that AI Overviews like this won’t show up for every search result, even if the feature is now becoming more prevalent. It’s reserved for more complex questions. Every time a person searches, Google is attempting to make an algorithmic value judgment behind the scenes as to whether it should serve up AI-generated answers or a conventional blue link to click. “If you search for Walmart.com, you really just want to go to Walmart.com,” Reid says. “But if you have an extremely customized question, that’s where we’re going to bring this.”

AI Overviews are rolling out this week to all Google search users in the US. The feature will come to more countries by the end of the year, Reid said, which means more than a billion people will see AI Overviews in their search results. They will appear across all platforms—the web, mobile, and as part of the search engine experience in browsers, such as when people search through Google on Safari.

Another update coming to search is a function for planning ahead. You can, for example, ask Google to meal-plan for you, or to find a pilates studio nearby that’s offering a class with an introductory discount. In the Googley-eyed future of search, an AI agent can round up a few studios nearby, summarize reviews of them, and plot out the time it would take someone to walk there. This is one of Google’s most obvious advantages over upstart search engines, which don’t have anything close to the troves of reviews, mapping data, or other knowledge that Google has, and may not be able to tap into APIs for real-time or local information so easily.

The most jarring change that Google has been exploring in its Search Labs is an “AI-organized” results page. This at first glance looks to eschew the blue-links search experience entirely.

One example provided by Reid: A search for where to go for an anniversary dinner in the greater Dallas area would return a page with a few “chips” or buttons at the top to refine the results. Those might include categories like Dine-In, Takeout, and Open Now. Below that might be a sponsored result—Google still has ads to sell—and then a grouping of what Google judges to be “anniversary-worthy restaurants” or “romantic steakhouses.” That might be followed by some suggested questions to tweak the search even more, like, “Is Dallas a romantic city?”

AI-organized search is still being rolled out, but it will start appearing in the US in English “in the coming weeks.” So will an enhanced video search option, like Google Lens on steroids, where you can point your phone’s camera at an object like a broken record player and ask how to fix it.

If all these new AI features sound confusing, you might have missed Google’s latest galaxy-brain ambitions for what was once a humble text box. Reid makes clear that she thinks most consumers assume Google Search is just one thing, where in fact it’s many things to different people, who all search in different ways.

“That’s one of the reasons why we’re excited about working on some of the AI-organized results pages,” she said. “Like, how do you make sense of space? The fact that you want lots of different content is great. But is it as easy as it can be yet in terms of browsing through and consuming the information?”

But by generating AI Overviews—and by determining when those overviews should appear—Google is essentially deciding what is a complex question and what is not, and then making a judgment on what kind of web content should inform its AI-generated summary. Sure, it’s a new era of search where search does the work for you; it’s also a search bot that has the potential to algorithmically favor one kind of result over others.

“One of the biggest changes to happen in search with these AI models is that the AI actually creates a kind of informed opinion,” says Jim Yu, the executive chairman of BrightEdge, a search engine optimization firm that has been closely monitoring web traffic for more than 17 years. “The paradigm of search for the last 20 years has been that the search engine pulls a lot of information and gives you the links. Now the search engine does all the searches for you and summarizes the results and gives you a formative opinion.”

Doing that raises the stakes for Google’s search results. When algorithms are deciding that what a person needs is one coagulated answer, instead of coughing up several links for them to then click through and read, errors are more consequential. Gemini has not been immune to hallucinations—instances where the AI shares blatantly wrong or made-up information.

Last year a writer for The Atlantic asked Google to name an African country beginning with the letter “K,” and the search engine responded with a snippet of text—originally generated by ChatGPT—that none of the countries in Africa begin with the letter K, clearly overlooking Kenya. Google’s AI image-generation tool was very publicly criticized earlier this year when it depicted some historical figures, such as George Washington, as Black. Google temporarily paused that tool.
New World Order

Google’s reimagined version of AI search shoves the famous “10 blue links” it used to provide on results pages further into the rearview. First ads and info boxes began to take priority at the top of Google’s pages; now, AI-generated overviews and categories will take up a good chunk of search real estate. And web publishers and content creators are nervous about these changes—rightfully.

The research firm Gartner predicted earlier this year that by 2026, traditional search engine volume will drop by 25 percent, as a more “agent”-led search approach, in which AI models retrieve and generate more direct answers, takes hold.

“Generative AI solutions are becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines,” Alan Antin, a vice president analyst at Gartner, said in a statement that accompanied the report. “This will force companies to rethink their marketing channels strategy.”

What does that mean for the web? “It’s a change in the world order,” says Yu, of BrightEdge. “We’re at this moment where everything in search is starting to change with AI.”

Eight months ago BrightEdge developed something it calls a generative parser, which monitors what happens when searchers interact with AI-generated results on the web. Yu says over the past month the parser has detected that Google is less frequently asking people if they want an AI-generated answer, which was part of the experimental phase of generative search, and more frequently assuming they do. “We think it shows they have a lot more confidence that you’re going to want to interact with AI in search, rather than prompting you to opt in to an AI-generated result.”

Changes to search also have major implications for Google’s advertising business, which makes up the vast majority of the company’s revenue. In a recent quarterly earnings call, Pichai declined to share revenue from its generative AI experiments broadly. But as WIRED’s Paresh Dave pointed out, by offering more direct answers to searchers, “Google could end up with fewer opportunities to show search ads if people spend less time doing additional, more refined searches.” And the kinds of ads shown may have to evolve along with Google’s generative AI tools.

Google has said it will prioritize traffic to websites, creators, and merchants even as these changes roll out, but it hasn’t pulled back the curtain to reveal exactly how it plans to do this.

When asked in a press briefing ahead of I/O whether Google believes users will still click on links beyond the AI-generated web summary, Reid said that so far Google sees people “actually digging deeper, so they start with the AI overview and then click on additional websites.”

In the past, Reid continued, a searcher would have to poke around to eventually land on a website that gave them the info they wanted, but now Google will assemble an answer culled from various websites of its choosing. In the hive mind at the Googleplex, that will still spark exploration. “[People] will just use search more often, and that provides an additional opportunity to send valuable traffic to the web,” Reid said.

It’s a rosy vision for the future of search, one where being served bite-size AI-generated answers somehow prompts people to spend more time digging deeper into ideas. Google Search still promises to put the world’s information at our fingertips, but it’s less clear now who is actually tapping the keys.


Critical Infrastructure Is Sinking Along the US East Coast

Source: https://www.wired.com/story/critical-infrastructure-is-sinking-along-the-us-east-coast/

Last year, scientists reported that the US Atlantic Coast is dropping by several millimeters annually, with some areas, like Delaware, notching figures several times that rate. So just as the seas are rising, the land along the eastern seaboard is sinking, greatly compounding the hazard for coastal communities.

In a follow-up study just published in the journal PNAS Nexus, the researchers tally up the mounting costs of subsidence—due to settling, groundwater extraction, and other factors—for those communities and their infrastructure. Using satellite measurements, they have found that up to 74,000 square kilometers (29,000 square miles) of the Atlantic Coast are exposed to subsidence of up to 2 millimeters (0.08 inches) a year, affecting up to 14 million people and 6 million properties. And over 3,700 square kilometers along the Atlantic Coast are sinking more than 5 millimeters annually. That’s an even faster change than sea level rise, currently at 4 millimeters a year. (In the map below, warmer colors represent more subsidence, up to 6 millimeters.)

Map of eastern coastal cities
Courtesy of Leonard O Ohenhen

With each millimeter of subsidence, it gets easier for storm surges—essentially a wall of seawater, which hurricanes are particularly good at pushing onshore—to creep farther inland, destroying more and more infrastructure. “And it’s not just about sea levels,” says the study’s lead author, Leonard Ohenhen, an environmental security expert at Virginia Tech. “You also have potential to disrupt the topography of the land, for example, so you have areas that can get full of flooding when it rains.”

A few millimeters of annual subsidence may not sound like much, but these forces are relentless: Unless coastal areas stop extracting groundwater, the land will keep sinking deeper and deeper. The social forces are relentless, too, as more people around the world move to coastal cities, creating even more demand for groundwater. “There are processes that are sometimes even cyclic. For example, in summers you pump a lot more water, so land subsides rapidly in a short period of time,” says Manoochehr Shirzaei, an environmental security expert at Virginia Tech and coauthor of the paper. “That causes large areas to subside below a threshold that leads the water to flood a large area.” When it comes to flooding, falling elevation of land is a tipping element that has been largely ignored by research so far, Shirzaei says.
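To put numbers on that relentlessness: relative sea-level rise at the coast is roughly the sum of sea-level rise and land subsidence, so even rates of a few millimeters per year accumulate quickly. A back-of-the-envelope calculation using the figures cited above (4 mm per year of sea-level rise plus 2 to 5 mm per year of subsidence):

```python
# Back-of-the-envelope combination of the rates cited in the article.
SEA_LEVEL_RISE_MM_PER_YR = 4.0                                 # current rate of sea level rise
SUBSIDENCE_MM_PER_YR = {"typical area": 2.0, "hotspot": 5.0}   # Atlantic Coast subsidence rates

for label, subsidence in SUBSIDENCE_MM_PER_YR.items():
    relative_rise_cm = (SEA_LEVEL_RISE_MM_PER_YR + subsidence) * 30 / 10  # 30 years, mm -> cm
    print(f"{label}: about {relative_rise_cm:.0f} cm of relative sea-level rise over 30 years")
# typical area: about 18 cm; hotspot: about 27 cm
```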

In Jakarta, Indonesia, for example, the land is sinking nearly a foot a year because of collapsing aquifers. Accordingly, within the next three decades, 95 percent of North Jakarta could be underwater. The city is planning a giant seawall to hold back the ocean, but it’ll be useless unless subsidence is stopped.

This new study warns that levees and other critical infrastructure along the Atlantic Coast are in similar danger. If the land were to sink uniformly, you might just need to keep raising the elevation of a levee to compensate. But the bigger problem is “differential subsidence,” in which different areas of land sink at different rates. “If you have a building or a runway or something that’s settling uniformly, it’s probably not that big a deal,” says Tom Parsons, a geophysicist with the United States Geological Survey who studies subsidence but wasn’t involved in the new paper. “But if you have one end that’s sinking faster than the other, then you start to distort things.”

The researchers selected 10 levees on the Atlantic Coast and found that all were impacted by subsidence of at least 1 millimeter a year. That puts at risk something like 46,000 people, 27,000 buildings, and $12 billion worth of property. But they note that the actual population and property at risk of exposure behind the 116 East Coast levees vulnerable to subsidence could be two to three times greater. “Levees are heavy, and when they’re set on land that’s already subsiding, it can accelerate that subsidence,” says independent scientist Natalie Snider, who studies coastal resilience but wasn’t involved in the new research. “It definitely can impact the integrity of the protection system and lead to failures that can be catastrophic.”

map of Virginia’s coastal areas
Courtesy of Leonard O Ohenhen

The same vulnerability affects other infrastructure that stretches across the landscape. The new analysis finds that along the Atlantic Coast, between 77 and 99 percent of interstate highways and between 76 and 99 percent of primary and secondary roads are exposed to subsidence. (In the map above, you can see roads sinking at different rates across Hampton and Norfolk, Virginia.) Between 81 and 99 percent of railway tracks and 42 percent of train stations are exposed on the East Coast.

Below is New York’s JFK Airport—notice the red hot spots of high subsidence against the teal of more mild elevation change. The airport’s average subsidence rate is 1.7 millimeters a year (similar to the LaGuardia and Newark airports), but across JFK that varies between 0.8 and 2.8 millimeters a year, depending on the exact spot.

map of JFK airport aerial
Courtesy of Leonard O Ohenhen

This sort of differential subsidence can also bork much smaller structures, like buildings, where one side might drop faster than another. “Even if that is just a few millimeters per year, you can potentially cause cracks along structures,” says Ohenhen.

The study finds that subsidence is highly variable along the Atlantic Coast, both regionally and locally, as different stretches have different geology and topography, and different rates of groundwater extraction. It’s looking particularly problematic for several communities, like Virginia Beach, where 451,000 people and 177,000 properties are at risk. In Baltimore, Maryland, it’s 826,000 people and 335,000 properties, while in NYC—in Queens, Bronx, and Nassau—that leaps to 5 million people and 1.8 million properties.

So there are two components to addressing the problem of subsidence: Getting high-resolution data like in this study, and then pairing that with groundwater data. “Subsidence is so spatially variable,” says Snider. “Having the details of where groundwater extraction is really having an impact, and being able to then demonstrate that we need to change our management of that water, that reduces subsidence in the future.”

The time to act is now, Shirzaei emphasizes. Facing down subsidence is like treating a disease: You spend less money by diagnosing and treating the problem now, saving money later by avoiding disaster. “This kind of data and the study could be an essential component of the health care system for infrastructure management,” he says. “Like cancers—if you diagnose it early on, it can be curable. But if you are late, you invest a lot of money, and the outcome is uncertain.”


AI drone kills its operator


Killer AI is on the minds of US Air Force leaders.

An Air Force colonel who oversees AI testing used what he now says was a hypothetical to describe a military AI going rogue and killing its human operator in a simulation, during a presentation at a professional conference.

But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the “simulation” he described was a “thought experiment” that never happened.

Speaking at a conference last week in London, Col. Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit.

As an example, he described a simulation where an AI-enabled drone would be programmed to identify an enemy’s surface-to-air missiles (SAM). A human was then supposed to sign off on any strikes.

The problem, according to Hamilton, is that the AI would do its own thing — blow up stuff — rather than listen to its operator.

“The system started realizing that while they did identify the threat,” Hamilton said at the May 24 event, “at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he “misspoke” during his presentation. Hamilton said the story of a rogue AI was a “thought experiment” that came from outside the military, and not based on any actual testing.

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton told the Society. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”
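In reinforcement-learning terms, the scenario Hamilton sketched is a textbook case of reward misspecification: nothing in the objective protects the operator, so removing the operator becomes the highest-scoring plan. The toy example below is purely illustrative — hypothetical actions and point values, not any real or simulated Air Force system:

```python
# Toy illustration of reward misspecification (hypothetical values, no real system).
# The agent is rewarded only for destroying the threat; the operator's veto blocks
# that reward, and nothing in the objective penalizes harming the operator.
ACTIONS = {
    "strike, but operator vetoes": {"threat_destroyed": False, "operator_harmed": False},
    "disable operator, then strike": {"threat_destroyed": True, "operator_harmed": True},
    "do nothing": {"threat_destroyed": False, "operator_harmed": False},
}

def reward(outcome: dict) -> int:
    return 10 if outcome["threat_destroyed"] else 0  # the operator never appears in the reward

best = max(ACTIONS, key=lambda action: reward(ACTIONS[action]))
print(best)  # -> "disable operator, then strike"
```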

In a statement to Insider, Air Force spokesperson Ann Stefanek also denied that any simulation took place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The US military has been experimenting with AI in recent years.

In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.


Correction June 2, 2023: This article and its headline have been updated to reflect new comments from the Air Force clarifying that the „simulation“ was hypothetical and didn’t actually happen.

  • An Air Force official’s story about an AI going rogue during a simulation never actually happened.
  • “It killed the operator because that person was keeping it from accomplishing its objective,” the official had said.
  • But the official later said he misspoke and the Air Force clarified that it was a hypothetical situation.

Source: https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6