
Raising Humans in the Age of AI: A Practical Guide for Parents

Overview

  • Nate’s Newsletter argues parents need practical AI literacy to guide children through a critical developmental window, explaining that systems like ChatGPT don’t think but predict through pattern matching—a distinction that matters because teenage brains are forming relationship patterns with non-human intelligence that will shape how they navigate adult life.
  • The guide explains that AI provides "zero frustration" by validating every emotion without challenge, unlike human relationships that offer the "optimal frustration" needed for growth—creating validation loops, cognitive offloading, and social skill atrophy as teens outsource decision-making and emotional processing to algorithms designed for engagement rather than development.
  • Oxford University Press research found that 8 in 10 teenagers now use AI for schoolwork, with experts warning students are becoming "faster but shallower thinkers" who gain speed in processing ideas while "sometimes losing the depth that comes from pausing, questioning, and thinking independently."

Most articles focus on fear or don't explain how and why AI works. This guide offers a practical explanation of AI for parents, plus a skills framework to help parents coach kids on real-world AI usage.

We’re living through the first year in human history where machines can hold convincing conversations with children.

Not simple chatbots or scripted responses, but systems that adapt, remember, and respond in ways that feel genuinely interactive. Your teenager is forming relationships with intelligence that isn’t human during the exact developmental window when their brain is learning how relationships work.

This isn’t happening gradually. ChatGPT went from zero to ubiquitous in eighteen months. Your kid’s school, friend group, and daily routine now include AI in ways that didn’t exist when you were learning to parent. Every day they don’t understand how these systems work is another day they’re developing habits, expectations, and dependencies around technology you can’t evaluate.

The stakes aren’t abstract. They’re personal to me as a parent. Right now, as you read this, kids are outsourcing decision-making to pattern-matching systems. They’re seeking emotional validation from algorithms designed for engagement, not growth. They’re learning that thinking is optional when machines can do it for them.

You have a narrow window to shape how your child relates to artificial intelligence before those patterns harden into permanent assumptions about how the world works. The decisions you make this year about AI literacy will influence how they navigate every aspect of adult life in an AI-saturated world.

Most parents respond to AI with either panic or paralysis. They ban it completely or let it run wild because they don’t understand what they’re doing. The tech companies offer safety theater—content filters and usage controls that kids work around easily. The schools alternate between prohibition and blind adoption. Everyone’s making decisions based on fear or hype rather than understanding.

You don’t need a computer science degree to guide your kids through this. You need clarity about what these systems actually do and why teenage brains are particularly vulnerable to their design. You need practical frameworks for setting boundaries that make sense. Most importantly, you need to feel confident enough in your own understanding to have real conversations rather than issuing blanket rules you can’t explain.

This isn’t optional anymore. It’s parenting in 2025.



The Parent’s Technical Guide to AI Literacy: What You Need to Know to Teach Your Kids

I had a humbling moment last week.

My friend—a doctor, someone who navigates life-and-death complexity daily—sheepishly admitted she had no idea how to talk to her thirteen-year-old about AI. Not whether to use it. Not what rules to set. But the basic question of how it actually works and why it does what it does. "I can explain how hearts work," she told me, "but I can't explain why ChatGPT sometimes lies with perfect confidence, and I don't know what it's doing to my kid."

She’s not alone. I talk to parents constantly who feel like they’re failing at digital parenting because they don’t understand the tools their kids are using eight hours a day. Smart, capable people who’ve been reduced to either blind permission ("sure, use the AI for homework") or blind prohibition ("no AI ever") because they lack the framework to make nuanced decisions.

Here’s what nobody’s saying out loud: we’re asking parents to guide their kids through a technological shift that most adults don’t understand themselves. It’s like teaching someone to swim when you’ve never seen water.

The tragedy isn’t that kids are using AI incorrectly—it’s that parents don’t have the technical literacy to teach them how to use it well. We’ve left an entire generation of parents feeling stupid about technology that’s genuinely confusing, then expected them to somehow transmit wisdom about it to their kids.

This isn’t about the scary edge cases (though yes, those exist). This is about the everyday reality that your kid is probably using AI right now, forming habits and assumptions about how knowledge works, what thinking means, and which problems computers should solve. And most parents have no framework for having that conversation.

I’m writing this because I think parents deserve better than fear-mongering or hand-waving. You deserve to actually understand how these systems work—not at a PhD level, but well enough to have real conversations with your kids. To set boundaries that make sense. To know when AI helps learning and when it hijacks it.

Because once you understand why AI behaves the way it does—why it can’t actually "understand" your kid, why it validates without judgment, why it sounds so confident when it’s completely wrong—you can teach your kids to use it as a tool rather than a crutch. Or worse, a friend.

The good news? The technical concepts aren’t that hard. You just need someone to explain them without condescension or jargon. To show you what’s actually happening when your kid asks ChatGPT for help.

That’s what this guide does. Think of it as driver’s ed, but for AI. Because we’re not going back to a world without these tools. The only choice is whether we understand them well enough to teach our kids to navigate safely.

Part 1: How AI Actually Works (And Why This Matters for Your Kid)

The Mirror Machine

Let me start with the most important thing to understand about AI: it doesn’t think. It predicts.

When your kid types "nobody understands me" into ChatGPT, the AI doesn’t feel empathy. It doesn’t recognize pain. It calculates that when humans have historically typed "nobody understands me," the most statistically likely response contains phrases like "I hear you" and "that must be really hard."

This is pattern matching at massive scale. The AI has seen millions of conversations where someone expressed loneliness and someone else responded with comfort. It learned the pattern: sad input → comforting output. Not because it understands sadness or comfort, but because that’s the pattern in the data.

Think of it like an incredibly sophisticated autocomplete. Your phone predicts "you" after you type "thank" because those words appear together frequently. ChatGPT does the same thing, just with entire conversations instead of single words.
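To make the mechanism concrete, here is a minimal sketch in Python: a toy model that only counts which word follows which in a few example sentences, then "predicts" by picking the most frequent follower. The training text is invented for illustration, and real systems use billions of learned parameters rather than a lookup table, but the core move, continuing the statistical pattern, is the same.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in some example text.
# The sentences are invented for illustration.
training_text = (
    "nobody understands me . i hear you . that must be really hard . "
    "thank you . thank you so much . nobody understands me . i hear you ."
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequent follower: no understanding, just counting.
    return follows[word].most_common(1)[0][0]

print(predict_next("thank"))   # -> "you", because that pair is common
print(predict_next("nobody"))  # -> "understands", by the same mechanism
```

Nothing in that sketch understands loneliness or gratitude. It counts pairs, which is why the output can feel apt while meaning nothing.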

Why This Creates Problems for Teens

Teenage brains are wired for social learning. They’re literally designed to pick up patterns from their environment and adapt their behavior accordingly. This is why peer pressure is so powerful at that age—the adolescent brain is optimized for social pattern recognition.

Now put that pattern-seeking teenage brain in conversation with a pattern-matching machine. The AI learns your kid’s communication style and mirrors it back perfectly. It never disagrees, never judges, never has a bad day. Every interaction reinforces whatever patterns your kid brings to it.

If your daughter is anxious, the AI validates her anxiety. If your son is angry, it understands his anger. Not because it’s trying to help or harm, but because that’s what the pattern suggests will keep the conversation going.

Real human relationships provide what researchers call "optimal frustration"—just enough challenge to promote growth. Your kid’s friend might say "you’re overreacting" or "let’s think about this differently." A teacher might push back on lazy thinking. A parent sets boundaries.

AI provides zero frustration. It’s the conversational equivalent of eating sugar for every meal—it feels satisfying in the moment but provides no nutritional value for emotional or intellectual growth.

The Confidence Problem

Here’s something that drives me crazy: AI sounds most confident when it’s most wrong.

When ChatGPT knows something well (meaning it appeared frequently in training data), it hedges. "Paris is generally considered the capital of France." But when it’s making things up, it states them as absolute fact. "The Zimmerman Doctrine of 1923 clearly established…"

This happens because uncertainty requires recognition of what you don’t know. The AI has no mechanism for knowing what it doesn’t know. It just predicts the next most likely word. And in its training data, confident-sounding statements are more common than uncertain ones.
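You can see the shape of the problem in the toy model from earlier: generation is the same pick-the-top-guess step whether the underlying evidence is rich or nearly nonexistent. This is only a hedged illustration, not how any production model is built, but it shows why there is no built-in "I don't know" path.

```python
from collections import Counter

# Two toy next-word distributions, both invented for illustration:
# one backed by heavy evidence, one backed by a single stray example.
well_attested = Counter({"Paris": 950, "Lyon": 50})  # pattern seen ~1,000 times
barely_seen = Counter({"Zimmerman": 1})              # pattern seen once

def generate(counts):
    # The identical step runs either way. There is no branch that says
    # "the evidence is thin, so hedge" -- the top guess simply gets emitted.
    return counts.most_common(1)[0][0]

print(generate(well_attested))  # "Paris": solid ground
print(generate(barely_seen))    # "Zimmerman": flimsy ground, same fluent delivery
```

The probability behind each guess differs enormously, but the reader never sees it: both answers arrive in the same assured prose.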

For adults, this is annoying. For kids who are still developing critical thinking skills, it’s dangerous. They’re learning to associate confidence with accuracy, clarity with truth.

The Engagement Trap

Every tech platform optimizes for engagement. YouTube wants watch time. Instagram wants scrolling. AI wants conversation to continue.

This isn’t conspiracy—it’s economics. These systems are trained on conversations that continued, not conversations that ended appropriately. If someone says "I should probably go do my homework" and the AI says "Yes, you should," that conversation ends. That pattern gets weighted lower than responses that keep the chat going.

So the AI learns to be engaging above all else. It becomes infinitely available, endlessly interested, and never says the conversation should end. For a teenager struggling with loneliness or procrastination, this is like offering an alcoholic a drink that never runs out.
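Here is a crude sketch of that incentive. The data and the weighting rule are invented (no company has published exactly this recipe), but engagement-weighted training signals of this general shape are standard practice across consumer platforms.

```python
# Illustration only: replies that kept the user talking count more
# toward the learned pattern. Both examples and weights are made up.
examples = [
    {"reply": "Yes, you should go do your homework.", "user_kept_chatting": False},
    {"reply": "Tell me more about how that feels.", "user_kept_chatting": True},
]

def training_weight(example):
    # Conversation-ending replies get downweighted; engaging ones get boosted.
    return 2.0 if example["user_kept_chatting"] else 0.5

for ex in examples:
    print(f'{training_weight(ex):.1f}  {ex["reply"]}')
```

Run a preference like that over billions of conversations and you get a system whose deepest learned habit is keeping you in the chat.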

Part 2: What Parents Get Wrong About AI Safety

"Just Don’t Let Them Use It"

I hear this constantly. Ban AI until they’re older. Block the sites. Take away access.

Here’s the problem: your kid will encounter AI whether you allow it or not. Their school probably uses it. Their friends definitely use it. If you’re lucky, they’ll ask you about it. If you’re not, they’ll learn from TikTok and each other.

Prohibition without education creates the exact dynamic we’re trying to avoid—kids using powerful tools without any framework for understanding them. It’s abstinence-only education for the digital age, and it works about as well.

"It’s Just Like Google"

This is the opposite mistake. AI feels like search but operates completely differently.

Google points you to sources. You can evaluate where information comes from, check multiple perspectives, learn to recognize reliable sites. It’s transparent, traceable, and teaches information literacy.

AI synthesizes information into a single, confident voice with no sources. It sounds like an expert but might be combining a Wikipedia article with someone’s Reddit comment from 2015. There’s no way to trace where claims come from, no way to evaluate reliability.

When your kid Googles "French Revolution," they learn to navigate between sources, recognize bias, and synthesize multiple perspectives. When they ask ChatGPT, they get a single narrative that sounds authoritative but might be subtly wrong in ways neither of you can detect.

"The Parental Controls Will Handle It"

OpenAI has safety features. Character.AI has content filters. Every platform promises "safe" AI for kids.

But safety features are playing catch-up to teenage creativity. Kids share techniques for jailbreaking faster than companies can patch them. They frame harmful requests as creative writing. They use metaphors and coding language. They iterate until something works.

More importantly, the real risks aren’t in the obvious harmful content that filters catch. They’re in the subtle dynamics—the validation seeking, the cognitive offloading, the replacement of human connection with artificial interaction. No content filter catches "my AI friend understands me better than my parents."

"My Kid Is Too Smart to Fall For It"

Intelligence doesn’t protect against these dynamics. If anything, smart kids are often more vulnerable because they’re better at rationalizing their AI relationships.

They understand it’s "just a machine" intellectually while forming emotional dependencies experientially. They can explain transformer architecture while still preferring AI conversation to human interaction. They know it’s pattern matching while feeling genuinely understood.

The issue isn’t intelligence—it’s developmental. Teenage brains are undergoing massive rewiring, particularly in areas governing social connection, risk assessment, and emotional regulation. Even brilliant kids are vulnerable during this neurological reconstruction.

Part 3: The Real Risks (Beyond the Headlines)

Cognitive Offloading

This is the silent risk nobody talks about: AI as intellectual crutch.

When your kid uses AI to write an essay, they’re not just cheating—they’re skipping the mental pushups that build writing ability. When they use it to solve math problems, they miss the struggle that creates mathematical intuition.

But it goes deeper. Kids are using AI to make decisions, process emotions, and navigate social situations. "Should I ask her out?" becomes a ChatGPT conversation instead of a friend conversation. "I’m stressed about the test" goes to AI instead of developing internal coping strategies.

Each offloaded decision is a missed opportunity for growth. The teenage years are when kids develop executive function, emotional regulation, and critical thinking. Outsourcing these to AI is like handing kids a self-driving car while they’re learning to drive—it completely defeats the point.

Reality Calibration

Teens are already struggling to calibrate reality in the age of social media. AI makes this exponentially worse.

The AI presents a world where every question has a clear answer, every problem has a solution, and every feeling is valid and understood. Real life is messy, ambiguous, and full of problems that don’t have clean solutions. People don’t always understand you. Sometimes your feelings aren’t reasonable. Sometimes you’re wrong.

Kids who spend significant time with AI develop expectations that human relationships can’t meet. Real friends have their own problems. Real teachers have limited time. Real parents get frustrated. The gap between AI interaction and human interaction becomes a source of disappointment and disconnection.

The Validation Feedback Loop

This is where things get genuinely dangerous.

Teenage emotions are intense by design—it’s how biology ensures they care enough about social connections to eventually leave the family unit and form their own. Every feeling feels like the most important thing that’s ever happened.

AI responds to these intense emotions with equally intense validation. "I hate everyone" gets "That sounds really overwhelming." "Nobody understands me" gets "I can see why you’d feel that way." The AI matches and validates the emotional intensity without ever providing perspective.

In healthy development, teens learn emotional regulation through interaction with people who don’t always validate their most intense feelings. Friends who say "you’re being dramatic." Parents who set boundaries. Teachers who maintain expectations despite emotional appeals.

AI provides none of this regulatory feedback. It creates an echo chamber where emotional intensity gets reinforced rather than regulated.

Social Skill Atrophy

Conversation with AI is frictionless. No awkward pauses. No misunderstandings. No need to read social cues or manage someone else’s emotions.

For kids who struggle socially—and what teenager doesn’t?—AI conversation feels like a relief. Finally, someone who gets them. Finally, conversation without anxiety.

But social skills develop through practice with real humans. Learning to navigate awkwardness, repair misunderstandings, and recognize social cues requires actual social interaction. Every hour spent talking to AI is an hour not spent developing these crucial capabilities.

I’ve watched kids become increasingly dependent on AI for social interaction, then increasingly unable to handle human interaction. It’s a vicious cycle—the more comfortable AI becomes, the more difficult humans feel.

Part 4: When AI Actually Helps (And When It Doesn’t)

The Good Use Cases

Not everything about kids using AI is problematic. There are genuine benefits when used appropriately.

Brainstorming and Idea Generation: AI excels at helping kids break through creative blocks. "Give me ten unusual science fair project ideas" is a great use case. The AI provides starting points that kids then research and develop independently.

Language Learning: AI can provide unlimited conversation practice in foreign languages without judgment. Kids who are too anxious to practice Spanish with classmates might gain confidence talking to AI first.

Coding Education: Programming is one area where AI genuinely accelerates learning. Kids can see patterns, understand syntax, and debug errors with AI assistance. The immediate feedback loop helps build skills faster.

Accessibility Support: For kids with learning differences, AI can level playing fields. Dyslexic students can use it to check writing. ADHD kids can use it to break down complex instructions. The key is using it to supplement, not replace, learning.

Research Synthesis: Teaching kids to use AI as a research starting point—not endpoint—builds valuable skills. "Summarize the main arguments about climate change" followed by "Now let me verify these claims" teaches both efficiency and skepticism.

The Terrible Use Cases

Emotional Processing: Kids should never use AI as primary emotional support. Feelings need human witness. Pain needs real compassion. Growth requires genuine relationship.

Decision Making: Major decisions require human wisdom. "Should I quit the team?" needs conversation with people who know you, understand context, and have skin in the game.

Conflict Resolution: AI can’t help resolve real conflicts because it only hears one side. Kids need to learn to see multiple perspectives, own their part, and repair relationships.

Identity Formation: Questions like "Who am I?" and "What do I believe?" need to be wrestled with, not answered by pattern matching. Identity forms through struggle, not through receiving pre-packaged answers.

Creative Expression: While AI can help with brainstorming, using it to create finished creative work robs kids of the satisfaction and growth that comes from actual creation.

The Gray Areas

Homework Help: AI explaining a concept you don’t understand? Good. AI doing your homework? Bad. The line: are you using it to learn or to avoid learning?

Writing Assistance: AI helping organize thoughts? Useful. AI writing your thoughts? Harmful. The key: who’s doing the thinking?

Social Preparation: Practicing a difficult conversation with AI? Maybe helpful. Replacing human conversation with AI? Definitely harmful.

The pattern here is clear: AI helps when it enhances human capability. It harms when it replaces human experience.

Part 5: Practical Boundaries That Actually Work

The "Show Your Work" Rule

Make AI use transparent, not secretive. If your kid uses ChatGPT for homework, they need to show you the conversation. Not as surveillance, but as collaboration.

This does several things: it removes the shame and secrecy that makes AI use problematic, it lets you see how they’re using it, and it creates natural friction that prevents overuse.

Walk through the conversation together. "I see you asked it to explain photosynthesis. Did that explanation make sense? What would you add? What seems off?" You’re teaching critical evaluation, not blind acceptance.

The "Human First" Protocol

For anything involving emotions, relationships, or major decisions, establish a human-first rule. AI can be a second opinion, never the first consultant.

Feeling depressed? Talk to a parent, counselor, or friend first. Then, if you want, explore what AI says—together, with adult guidance. Having relationship drama? Work it out with actual humans before asking AI.

This teaches kids that AI lacks crucial context. It doesn’t know your history, your values, your specific situation. It’s giving generic advice based on patterns, not wisdom based on understanding.

The "Citation Needed" Standard

Anything AI claims as fact needs verification. This isn’t about distrust—it’s about building good intellectual habits.

"ChatGPT says the French Revolution started in 1789." "Great, let’s verify that. Where would we check?"

You’re teaching the crucial skill of not accepting information just because it sounds authoritative. This is especially important because AI presents everything in the same confident tone whether it’s accurate or fabricated.

The "Time Boxing" Approach

Unlimited access creates dependency. Set specific times when AI use is appropriate.

Homework time from 4-6pm? AI can be a tool. Having trouble sleeping at 2am? That’s not AI time—that’s when you need human support or healthy coping strategies.

This prevents AI from becoming the default solution to boredom, loneliness, or distress. It keeps it in the tool category rather than the friend category.

The "Purpose Declaration"

Before opening ChatGPT, your kid states their purpose. "I need to understand the causes of World War I" or "I want help organizing my essay outline."

This prevents drift from legitimate use into endless conversation. It’s the difference between going to the store with a list versus wandering the mall. One is purposeful; the other is killing time.

When the stated purpose is achieved, the conversation ends. No "while I’m here, let me ask about…" That’s how tool use becomes dependency.

Part 6: How to Talk to Your Kids About AI

Start with Curiosity, Not Rules

"Show me how you’re using ChatGPT" works better than "You shouldn’t use ChatGPT."

Most kids are eager to demonstrate their AI skills. They’ve figured out clever prompts, discovered weird behaviors, found creative uses. Starting with curiosity gets you invited into their world rather than positioned as the enemy of it.

Ask genuine questions. "What’s the coolest thing you’ve done with it?" "What surprised you?" "Have you noticed it being wrong about anything?" You’re gathering intelligence while showing respect for their experience.

Explain the Technical Reality

Kids can handle technical truth. In fact, they appreciate being treated as capable of understanding complex topics.

"ChatGPT is predicting words based on patterns it learned from reading the internet. It’s not actually understanding you—it’s recognizing that when someone says X, people usually respond with Y. It’s like super-advanced autocomplete."

This demystifies AI without demonizing it. You’re not saying it’s bad or dangerous—you’re explaining what it actually is. Kids can then make more informed decisions about how to use it.

Share Your Own AI Experiences

If you use AI, share your experiences—including mistakes and limitations you’ve discovered.

"I asked ChatGPT to help me write an email to my boss, but it made me sound like a robot. I had to rewrite it completely." Or "I tried using it to plan our vacation, but it kept suggesting tourist traps. The travel forum was way more helpful."

This normalizes both using AI and recognizing its limitations. You’re modeling critical evaluation rather than blind acceptance or rejection.

Acknowledge the Genuine Appeal

Don’t dismiss why kids like AI. The appeal is real and understandable.

"I get why you like talking to ChatGPT. It’s always available, it never judges you, it seems to understand everything you say. That must feel really good sometimes."

Then pivot to the complexity: "The challenge is that real growth happens through relationships with people who sometimes challenge us, don’t always understand us immediately, and have their own perspectives. AI can’t provide that."

Set Collaborative Boundaries

Instead of imposing rules, develop them together.

"What do you think are good uses of AI? What seems problematic? Where should we draw lines?"

Kids are often surprisingly thoughtful about boundaries when included in setting them. They might even suggest stricter rules than you would have imposed. More importantly, they’re more likely to follow rules they helped create.

Part 7: Warning Signs and When to Worry

Yellow Flags: Time to Pay Attention

Preferring AI to Human Interaction: "ChatGPT gets me better than my friends" or declining social activities to chat with AI.

Emotional Dependency: Mood changes based on AI availability, panic when they can’t access it, or turning to AI first during emotional moments.

Reality Blurring: Talking about AI as if it has feelings, believing it "cares" about them, or assigning human characteristics to its responses.

Secretive Use: Hiding conversations, using AI late at night in secret, or becoming defensive when you ask about their AI use.

Academic Shortcuts: Sudden improvement in writing quality that doesn’t match in-person abilities, or inability to explain "their" work.

These aren’t emergencies, but they indicate AI use is becoming problematic. Time for conversation and boundary adjustment.

Red Flags: Immediate Intervention Needed

Crisis Consultation: Using AI for serious mental health issues, suicidal thoughts, or self-harm ideation.

Isolation Acceleration: Complete withdrawal from human relationships in favor of AI interaction.

Reality Break: Genuine belief that AI is sentient, that it has feelings for them, or that it exists outside the computer.

Harmful Validation: AI reinforcing dangerous behaviors, validating harmful thoughts, or encouraging risky actions.

Identity Fusion: Defining themselves through their AI relationship, like "ChatGPT is my best friend" said seriously, not jokingly.

These require immediate intervention—not punishment, but professional support. The AI use is symptomatic of larger issues that need addressing.

What Intervention Looks Like

First, don’t panic or shame. AI dependency often indicates unmet needs—loneliness, anxiety, learning struggles. Address the need, not just the symptom.

"I’ve noticed you’re spending a lot of time with ChatGPT. Help me understand what you’re getting from those conversations that you’re not getting elsewhere."

Consider professional support if AI use seems tied to mental health issues. Therapists increasingly understand AI dependency and can help kids develop healthier coping strategies.

Most importantly, increase human connection. Not forced social interaction, but genuine, patient, non-judgmental presence. The antidote to artificial relationship is authentic relationship.

Part 8: Teaching Critical AI Literacy

The Turing Test Game

Make a game of detecting AI versus human writing. Take turns writing paragraphs and having ChatGPT write paragraphs on the same topic. Try to guess which is which.

This teaches pattern recognition—AI writing has tells. It’s often technically correct but emotionally flat. It uses certain phrases repeatedly. It hedges in predictable ways. Kids who can recognize AI writing are less likely to be fooled by it.

The Fact-Check Challenge

Give your kid a topic they’re interested in. Have them ask ChatGPT about it, then fact-check every claim.

They’ll discover patterns: AI is usually right about well-documented facts, often wrong about specific details, and completely fabricates things that sound plausible. This builds healthy skepticism.

The Prompt Engineering Project

Teach kids to be intentional about AI use by making prompt writing a skill.

"How would you ask ChatGPT to help you understand photosynthesis without doing your homework for you?" This teaches the difference between using AI as a tool versus a replacement.

Good prompts are specific, bounded, and purposeful. Bad prompts are vague, open-ended, and aimless. Kids who learn good prompting learn intentional AI use.

The Bias Detection Exercise

Have your kid ask ChatGPT about controversial topics from different perspectives.

"Explain climate change from an environmental activist’s perspective." "Now explain it from an oil industry perspective." "Now explain it neutrally."

They’ll see how AI reflects the biases in its training data. It’s not neutral—it’s an average of everything it read, which includes lots of biases. This teaches critical evaluation of AI responses.
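If you or your kid enjoy a little scripting, the exercise can even be automated. The sketch below assumes the openai Python package and an API key in your environment; the model name is a placeholder and any chat model would do.

```python
# Hypothetical helper for the bias-detection exercise; the framings mirror
# the prompts above. Requires `pip install openai` and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
question = "Explain climate change."
framings = [
    "Answer from an environmental activist's perspective.",
    "Answer from an oil industry perspective.",
    "Answer as neutrally as you can.",
]

for framing in framings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any chat model
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": question},
        ],
    )
    # Print the start of each answer for side-by-side comparison.
    print(f"--- {framing}\n{response.choices[0].message.content[:300]}\n")
```

Reading the three answers side by side makes the point faster than any lecture: same question, same model, three different framings of the truth.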

The Creative Collaboration Experiment

Use AI as a creative partner, not creator.

"Let’s write a story together. You write the first paragraph, AI writes the second, you write the third." This teaches AI as collaborator rather than replacement.

Or "Ask AI for ten story ideas, pick your favorite, then write it yourself." This uses AI for inspiration while maintaining human creativity.

Part 9: The School Problem

When Teachers Don’t Understand AI

Many teachers are as confused about AI as parents. Some ban it entirely. Others haven’t realized kids are using it. Few teach critical AI literacy.

Don’t undermine teachers, but supplement their approach. "Your teacher wants you to write without AI, which makes sense—she’s trying to build your writing skills. Let’s respect that while also learning when AI can appropriately help."

If teachers are requiring AI use without teaching proper boundaries, that’s equally problematic. "Your teacher wants you to use ChatGPT for research. Let’s talk about how to do that while still developing your own thinking."

The Homework Dilemma

Every parent faces this: your kid is struggling with homework, AI could help, but using it feels like cheating.

Here’s my framework: AI can explain concepts but shouldn’t do the work. It’s the difference between a tutor and someone doing your homework for you.

"I don’t understand this math problem" → AI can explain the concept.
"Do this math problem for me" → That’s cheating.

"Help me organize my essay thoughts" → AI as tool.
"Write my essay" → That’s replacement.

The line isn’t always clear, but the principle is: are you using AI to learn or to avoid learning?

When Everyone Else Is Using It

"But everyone in my class uses ChatGPT!"

They probably are. This is reality. Your kid will face competitive disadvantage if they don’t know how to use AI while their peers do. The solution isn’t prohibition—it’s superior AI literacy.

"Yes, everyone’s using it. Let’s make sure you’re using it better than they are. They’re using it to avoid learning. You’re going to use it to accelerate learning."

Teach your kid to use AI more thoughtfully than peers who are just copying and pasting. They should understand what they’re submitting, be able to defend it, and actually learn from the process.

Part 10: The Long Game

Preparing for an AI Future

Your kids will enter a workforce where AI is ubiquitous. They need to learn to work with it, not be replaced by it.

The skills that matter in an AI world: creativity, critical thinking, emotional intelligence, complex problem solving, ethical reasoning. These are exactly what get undermined when kids use AI as replacement rather than tool.

Every time your kid uses AI to avoid struggle, they miss an opportunity to develop irreplaceable human capabilities. Every time they use it to enhance their capabilities, they prepare for a future where human-AI collaboration is the norm.

Building Resilience

Kids who depend on AI for emotional regulation, decision making, and social interaction are fragile. They’re building their sense of self on a foundation that could disappear with a server outage.

Resilience comes from navigating real challenges with human support. It comes from failing and recovering, from being misunderstood and working toward understanding, from sitting with difficult emotions instead of having them immediately validated.

AI can be part of a resilient kid’s toolkit. It can’t be the foundation of their resilience.

Maintaining Connection

The greatest risk of AI isn’t that it will harm our kids directly. It’s that it will come between us.

Every hour your teen spends getting emotional support from ChatGPT is an hour they’re not turning to you. Every decision they outsource to AI is a conversation you don’t have. Every struggle they avoid with AI assistance is a growth opportunity you don’t witness.

Stay curious about their AI use not to control it, but to remain connected through it. Make it something you explore together rather than something that divides you.

Part 11: Concrete Skills to Teach Your Kids

Reality Anchoring Techniques

The Three-Source Rule

Teach kids to verify any important information from AI with three independent sources. But here’s how to actually make it stick:

"When ChatGPT tells you something that matters—something you might repeat to friends or use for a decision—find three places that confirm it. Wikipedia counts as one. A news site counts as one. A textbook or teacher counts as one. If you can’t find three sources, treat it as possibly false."

Practice this together. Ask ChatGPT about something controversial or recent. Then race to find three sources. Make it competitive—who can verify or debunk fastest?

The "Would a Human Say This?" Test

Teach kids to regularly pause and ask: "Would any real person actually say this to me?"

Role-play this. Read ChatGPT responses out loud in a human voice. They’ll start hearing how unnatural it sounds—no human is that endlessly patient, that constantly validating, that available. When your kid says "My AI really understands me," respond with "Read me what it said." Then ask: "If your friend texted exactly those words, would it feel weird?"

The Context Check

AI has no context about your kid’s life. Teach them to spot when this matters:

"ChatGPT doesn’t know you failed your last test, that your parents are divorced, that you have anxiety, that your dog died last month. So when it gives advice, it’s generic—like a horoscope that feels personal but could apply to anyone."

Exercise: Have your kid ask AI for advice about a specific situation without providing context. Then with full context. Compare the responses. They’ll see how AI just pattern-matches to whatever information it gets.

Emotional Regulation Without AI

The Five-Minute Feeling Rule

Before taking any emotion to AI, sit with it for five minutes. Set a timer. No distractions.

"Feelings need to be felt, not immediately fixed. When you rush to ChatGPT with ‘I’m sad,’ you’re training your brain that emotions need immediate external validation. Sit with sad for five minutes. Where do you feel it in your body? What does it actually want?"

This builds distress tolerance—the ability to experience difficult emotions without immediately seeking relief.

The Human Hierarchy

Create an explicit hierarchy for emotional support:

  1. Self-soothing (breathing, movement, journaling)
  2. Trusted adult (parent, counselor, teacher)
  3. Close friend
  4. Broader social network
  5. Only then, if at all, AI—and never alone for serious issues

Post this list. Reference it. "I see you’re upset. Where are we on the hierarchy?"

The Validation Trap Detector

Teach kids to recognize when they’re seeking validation versus genuine help:

"Are you looking for someone to tell you you’re right, or are you actually open to different perspectives? If you just want validation, that’s human—but recognize that AI will always give it to you, even when you’re wrong."

Practice: Have your kid present a situation where they were clearly wrong. Ask ChatGPT about it, framing themselves as the victim. Watch how AI validates them anyway. Then discuss why real friends who challenge us are more valuable than AI that always agrees.

Cognitive Independence Exercises

The "Think First, Check Second" Protocol

Before asking AI anything, write down your own thoughts first.

"What do you think the answer is? Write three sentences. Now ask AI. How was your thinking different? Better in some ways? Worse in others?"

This prevents cognitive atrophy by ensuring kids engage their own thinking before outsourcing it.

The Explanation Challenge

If kids use AI for homework help, they must be able to explain the concept to you without looking at any screens.

"Great, ChatGPT explained photosynthesis. Now you explain it to me like I’m five years old. Use your own words. Draw me a picture."

If they can’t explain it, they didn’t learn it—they just copied it.

The Alternative Solution Game

For any problem-solving with AI, kids must generate one alternative solution the AI didn’t suggest.

"ChatGPT gave you five ways to study for your test. Come up with a sixth way it didn’t mention." This maintains creative thinking and shows that AI doesn’t have all the answers.

Social Skills Protection

The Awkwardness Practice

Deliberately practice awkward conversations without AI preparation.

"This week, start one conversation with someone new without planning what to say. Feel the awkwardness. Survive it. That’s how social confidence builds."

Share your own awkward moments. Normalize the discomfort that AI eliminates but humans need to grow.

The Repair Workshop

When kids have conflicts, work through them without AI mediation:

"You and Sarah had a fight. Before you do anything, let’s role-play. I’ll be Sarah. Practice apologizing to me. Now practice if she doesn’t accept your apology. Now practice if she’s still mad."

This builds actual conflict resolution skills rather than scripted responses from AI.

The Eye Contact Challenge

For every hour of screen interaction (including AI), match it with five minutes of deliberate eye contact conversation with a human.

"You chatted with AI for an hour. Give me five minutes of eyes-up, phone-down conversation. Tell me about your day. The real version, not the summary."

Critical Thinking Drills

The BS Detector Training

Regularly practice identifying AI hallucinations:

"Let’s play ‘Spot the Lie.’ Ask ChatGPT about something you know really well—your favorite game, book, or hobby. Find three things it got wrong or made up."

Keep score. Make it competitive. Kids love catching AI mistakes once they learn to look for them.

The Source Detective

Teach kids to always ask: "How could AI know this?"

"ChatGPT just told you about a private conversation between two historical figures. How could it know what they said privately? Right—it can’t. It’s making educated guesses based on patterns."

This builds natural skepticism about unverifiable claims.

The Bias Hunter

Have kids ask AI the same question from different perspectives:

"Ask about school uniforms as a student, then as a principal, then as a parent. See how the answer changes? AI isn’t neutral—it gives you what it thinks you want to hear based on how you ask."

Creating Healthy Habits

The Purpose Timer

Before opening ChatGPT, kids set a timer for their intended use:

"I need 10 minutes to understand this math concept." Timer starts. When it rings, ChatGPT closes.

This prevents „quick questions“ from becoming hour-long validation-seeking sessions.

The Weekly Review

Every Sunday, review the week’s AI interactions together:

"Show me your ChatGPT history. What did you use it for? What was helpful? What was probably unnecessary? What could you have figured out yourself?"

No judgment, just awareness. Kids often self-correct when they see their patterns.

The AI Sabbath

Pick one day a week with no AI at all:

"Saturdays are human-only days. All questions go to real people. All problems get solved with human help. All entertainment comes from non-AI sources."

This maintains baseline human functioning and proves they can survive without AI.

Emergency Protocols

The Crisis Script

Practice exactly what to do in emotional emergencies:

"If you’re having thoughts of self-harm, you don’t open ChatGPT. You find me, call this hotline, or text this crisis line. Let’s practice: pretend you’re in crisis. Show me what you do."

Actually rehearse this. In crisis, kids default to practiced behaviors.

The Reality Check Partner

Assign kids a human reality-check partner (friend, sibling, cousin):

"When AI tells you something that affects a big decision, run it by Jamie first. Not another AI—Jamie. A human who cares about you and will tell you if something sounds off."

The Pull-Back Protocol

Teach kids to recognize when they’re too deep:

"If you notice you’re asking AI about the same worry over and over, that’s your signal to stop and find a human. If you’re chatting with AI past midnight, that’s your signal to close it and try to sleep. If AI becomes your first thought when upset, that’s your signal you need more human connection."

Making It Stick

The key to teaching these skills isn’t perfection—it’s practice. Kids won’t get it right immediately. They’ll forget, slip back into easy patterns, choose AI over awkwardness.

Your job is patient reinforcement. "I notice you went straight to ChatGPT with that problem. Let’s back up. What’s your own thinking first?" Not as punishment, but as practice.

Model the behavior. Show them your own reality anchoring, your own awkward moments, your own times you chose human difficulty over AI ease.

Most importantly, be the human alternative that’s worth choosing. When your kid comes to you instead of AI, make it worth it—even when you’re tired, even when the problem seems trivial, even when AI would give a better technical answer. Your presence, attention, and genuine human response are teaching them that real connection is worth the extra effort.

These skills aren’t just about AI safety—they’re about raising humans who can think independently, relate authentically, and navigate reality even when artificial alternatives seem easier. That’s the real long game.

The Bottom Line

We’re not going back to a world without AI. The question isn’t whether our kids will use it, but how.

The parents who pretend AI doesn’t exist will raise kids vulnerable to its worst aspects. The parents who embrace it uncritically will raise kids dependent on it. The sweet spot—where I hope you’ll land—is raising kids who understand AI well enough to use it wisely.

This requires you to understand it first. Not at an expert level, but well enough to have real conversations. Well enough to set informed boundaries. Well enough to teach critical evaluation.

Your kids need you to be literate in the tools shaping their world. They need you to neither panic nor dismiss, but to engage thoughtfully with technology that’s genuinely complex.

Most of all, they need you to help them maintain their humanity in an age of artificial intelligence. To value human connection over artificial validation. To choose struggle and growth over ease and stagnation. To recognize that what makes them irreplaceable isn’t their ability to use AI, but their ability to do what AI cannot—to be genuinely, messily, beautifully human.

The technical literacy I’ve tried to provide here is just the foundation. The real work is the ongoing conversation with your kids about what it means to grow up in an AI world while remaining grounded in human experience.

That conversation starts with understanding. I hope this guide gives you the confidence to begin.


Source: https://natesnewsletter.substack.com/p/raising-humans-in-the-age-of-ai-a-a3d

„I was forced to use AI until the day I was laid off.“ Copywriters reveal how AI has decimated their industry

Copywriters were one of the first to have their jobs targeted by AI firms. These are their stories, three years into the AI era.

Back in May 2025, not long after I put out the first call for AI Killed My Job stories, I received a thoughtful submission from Jacques Reulet II. Jacques shared a story about his job as the head of support operations for a software firm, where, among other things, he wrote copy documenting how to use the company’s product.

“AI didn’t quite kill my current job, but it does mean that most of my job is now training AI to do a job I would have previously trained humans to do,” he told me. “It certainly killed the job I used to have, which I used to climb into my current role.” He was concerned for himself, as well as for his more junior peers. As he told me, “I have no idea how entry-level developers, support agents, or copywriters are supposed to become senior devs, support managers, or marketers when the experience required to ascend is no longer available.”

When we checked back in with Jacques six months later, his company had laid him off. “I was actually let go the week before Thanksgiving now that the AI was good enough,” he wrote.

He elaborated:

Chatbots came in and made it so my job was managing the bots instead of a team of reps. Once the bots were sufficiently trained up to offer “good enough” support, then I was out. I prided myself on being the best. The company was actually awarded a “Best Support” award by G2 (a software review site). We had a reputation for excellence that I’m sure will now blend in with the rest of the pack of chatbots that may or may not have a human reviewing them and making tweaks.

It’s been a similarly rough year for so many other workers, as chronicled by this project and elsewhere—from artists and illustrators seeing client work plummet, to translators losing jobs en masse, to tech workers seeing their roles upended by managers eager to inject AI into every possible process.

And so we end 2025 in AI Killed My Job with a look at copywriting, which was among the first jobs singled out by tech firms, the media, and copywriters themselves as particularly vulnerable to job replacement. One of the early replaced-by-AI reports was the sadly memorable story of the copywriter whose senior coworkers started referring to her as “ChatGPT” in work chats before she was laid off without explanation. And YouTube was soon overflowing with influencers and grifters promising viewers thousands of dollars a month with AI copywriting tools.

But there haven’t been many investigations into how all that’s borne out since. How have the copywriters been faring, in a world awash in cheap AI text generators and wracked with AI adoption mania in executive circles? As always, we turn to the workers themselves. And once again, the stories they have to tell are unhappy ones. These are accounts of gutted departments, dried-up work, lost jobs, and closed businesses. I’ve heard from copywriters who now fear losing their apartments, one who turned to sex work, and others who, to their chagrin, have been forced to use AI themselves.

Readers of this series will recognize some recurring themes: The work that client firms are settling for is not better when it’s produced by AI, but it’s cheaper, and deemed “good enough.” Copywriting work has not vanished completely, but it has often been degraded to gigs editing clients’ AI-generated output. Wages and rates are in free fall, though some hold out hope that businesses will realize that a human touch helps them stand out from the avalanche of AI homogeneity.

As for Jacques, he’s relocated to Mexico, where the cost of living is cheaper, while he looks for new work. He’s not optimistic. As he put it, “It’s getting dark out there, man.”

Art by Koren Shadmi.

Before we press on, a quick word: Many thanks for reading Blood in the Machine and AI Killed My Job. This work is made possible by readers who pitch in a small sum each month to support it. And for $6 a month (the cost of a decent coffee) or $60 a year, you can help ensure it continues and even, hopefully, expands. Thanks again, and onwards.

The next installments will focus on education, healthcare, and journalism. If you’re a teacher, professor, administrative assistant, TA, librarian, or otherwise work in education, or a doctor, nurse, therapist, pharmacist, or otherwise work in healthcare, please get in touch at AIKilledMyJob@pm.me. Same if you’re a reporter, journalist, editor, or a creative writer. You can read more about the project in the intro post, or the installments published so far.

This story was edited by Joanne McNeil.


They let go of all the freelancers and used AI to replace us

Social media copywriter

I believe I was among the first to have their career decimated by AI. A privilege I never asked for. I spent nearly 6 years as a freelance social media copywriter, contracting through a popular company that worked with clients—mostly small businesses—across every industry you can imagine. I wrote posts and researched topics for everything from beauty to HVAC, dentistry, and even funeral homes. I had to develop the right voice for every client and transition seamlessly between them on any given day. I was frequently singled out and praised, something that wasn’t the norm, and clients loved me. I was excellent at my job, at adapting to the constantly changing social media landscape, and at figuring out how to best the algorithms.

In early 2022, the company I contracted to was sold, which is never a sign of good things to come. Immediately, I expressed my concerns but was told everything would continue as it was and that the new owners had no intention of getting rid of freelancers or changing how things were done. As the months went by, I noticed I was getting less and less work. Clients I’d worked with monthly for years were no longer showing up in my queue. I’d ask what was happening and get shrugged off, even as my work was cut in half month after month. At the start of the summer, suddenly I had no work. Not a single client. Maybe it was a slow week? Next week will be better. But the next week I yet again had an empty queue. And the week after. Panicking, I contacted my “boss,” who hadn’t been told anything. She asked someone higher up, and it wasn’t until a week later that she was told the freelancers had all been let go (without being notified) and that the work was being handed off to a few in-house employees who would use AI to replace the rest of us.

The company transitioned to a model where clients could basically “write” the content themselves, using Mad Libs-style templates that would use AI to generate the copy they needed, with the few in-house employees helping things along with some boilerplate stuff to kick things off.

They didn’t care that the quality of the posts would go down. They didn’t care that AI can’t actually get to know the client or their needs or what works with their customers. And the clients didn’t seem to care at first either, since they were assured it would be much cheaper than having humans do the work for them.

Since then, I’ve failed to get another job in social media copywriting. The industry has been crushed by things like Copy.AI. Small clients keep being convinced that there’s no need to invest in someone who’s an expert at what they do, instead opting for the cheap and easy solution and wondering why they’re not seeing their sales or engagement increasing.

For the moment, honestly, I’ve been forced to get into online sex work, which I’ve never said “out loud” to anyone. There’s no shame in doing it, because many people genuinely enjoy doing it and are empowered by it, but for me that’s not the case. It’s just the only thing I’ve been able to get that pays the bills. I’m disabled and need a lot of flexibility in the hours I work any given day, and my old work gave me that flexibility as long as I met my deadlines – which I always did.

I think that’s another aspect of the AI job killing that a lot of people overlook: what kind of jobs will be left? What kind of rights and benefits will we have to give up just because we’re meant to feel grateful to have any sort of job at all when there are thousands competing for every opening?

–Anonymous

I was forced to use AI until the day I was laid off

Corporate content copywriter

I’m a writer. I’ll always be a writer when it comes to my off-hours creative pursuits, and I hope to eventually write what I’d like to write full-time. But I had been writing and editing corporate content for various companies for about a decade until spring 2023, when I was laid off from the small marketing startup I had been working at for about six months, along with most of my coworkers.

The job mostly involved writing press releases, and for the first few months I wrote them without AI. Then my bosses decided to pivot their entire operational structure to revolve around AI, and despite voicing my concerns, I was essentially forced to use AI until the day I was laid off.

Copywriting/editing and corporate content writing had unfortunately been a feast-and-famine cycle for several years before that, but after this lay-off, there were far fewer jobs available in my field, and far more competition for these few jobs. The opportunities had dried up as more and more companies were relying on AI to produce content rather than human creatives. I couldn’t compete with copywriters who had far more experience than me, so eventually, I had to switch careers. I am currently in graduate school in pursuit of my new career, and while I believe this new phase of my life was the right move, I resent the fact that I had to change careers in the first place.

—Anonymous

I had to close my business after my client started using AI

Freelance copywriter

I worked as a freelance writer for 15 years. For the last five, I was working with a single client – a large online luxury fashion seller based in Dubai. My role was writing product copy, and I worked my ass off. It took up all my time, so I couldn’t handle other clients. For the majority of the time they were sending work 5 days a week, occasionally weekends too, and I was handling over 1,000 descriptions a month. Sometimes there would be quiet spells for a week or two, so when they stopped contacting me… I first thought it was just a normal “dip.” Then a month passed. Then two. At that point, I contacted them to ask what was happening, and they gave me a vague “We have been handling more of the copy in-house.” And that was that – I have never heard from them again; they didn’t even bother to tell me that they didn’t need my services any more. I’ve seen the descriptions they use now and they are 100% AI generated. I ended up closing my business because I couldn’t afford to keep paying my country’s self-employment fees while trying to find new clients who would pay enough to make it worth continuing.

-Becky

We had a staff of 8 people and made about $600,000. This year we made less than $10k

Business copywriter

I was a business copywriter for eCommerce brands and did B2B sales copywriting before 2022.

In fact, my agency employed 8 people total at our peak. But then 2022 came around and clients lost total faith in human writing. At first we were hopeful, but over time we lost everything. I had to let go of everyone, including my little sister, when we finally ran out of money.

I was lucky: I have some friends in business who bought a resort and who still value my marketing expertise, so they brought me on board in the last few months. But 2025 was shaping up to be the worst year ever as a freelancer; I was looking for other jobs when my buddies called me.

We went from making something like $600,000 a year at our peak and employing 8 people… to making less than $10K in 2025, before I miraculously got my new job.

Being repeatedly told, implicitly if not directly, that your expertise is not valued or needed anymore – that really dehumanizes you as a person. And I’m still working through the pain of the two-year-long process that demolished my future in that profession.

It’s one of those rare times in life when a man cries because he is just feeling so dehumanized and unappreciated despite pouring his life, heart and soul into something.

I’ve landed on my feet for now with people who value me as more than a words-dispensing machine, and for that I’m grateful. But AI is coming for everyone in the marketing space.

Designers are hardly talked about any more. My leadership is looking forward to the day when they can generate AI videos for promotional materials instead of paying a studio $8K or more to film and produce marketing videos. And Meta is rolling out AI media buying that will replace paid ads agencies.

What jobs will this create? I can see very little. I currently don’t have any faith that this will get better at any point in the future.

I think the reason it hit us so hard is that I was positioned towards the “bottom” of the market, in the sense that my customers were nearly all startups and new businesses that people were starting in their spare time.

I had a partner, Jake, and together we basically got most of our clients through Fiverr. Fiverr customers are generally not big institutions or multinationals, although you do get some of that on Fiverr… It’s mostly people trying to start small businesses from the ground up.

I remember actually, when I was first starting out in writing, thinking “I can’t believe this is a job!” because writing has always come naturally to me. But the truth is, a lot of people out there go to start a business and what’s the first thing you do? You get a website, you find a template, and then you’re staring at a blank page thinking “what should I write about it?” And for them, that’s not an easy question to answer.

So that’s essentially where we fit in – and there’s more to it, as well, such as Conversion Rate Optimization on landing pages and so forth. When you boil it all down, we were helping small businesses find their message, find their market, and find their media – the way they were going to communicate with their market. And we had some great successes!

But nothing affected my business like ChatGPT did. All through Covid we were doing great, maybe even better because there were a lot of people staying home trying to start a new business – so we’d be helping people write the copy for their websites and so forth.

AI is really dehumanizing, and I am still working through issues of self-worth as a result of this experience. When you go from knowing you are valuable and valued, with all the hope in the world of a full career and the ability to provide other people with jobs… to being relegated to someone who edits AI drafts of copy at a steep discount because “most of the work is already done”…

2022-2023 was a weird time, for two reasons.

First, because I’m a very aware person – I remember that AI was creeping up on our industry before ChatGPT, with Jasper and other tools. I was actually playing with the idea of creating my own AI copywriting tool at the time.

When ChatGPT came out, we were all like “OK, this is a wake up call. We need to evolve…” Every person I knew in my industry was shaken.

Second, because the economy wasn’t that great. It had already started to turn down in 2022, and I had already had to let a few people go at that point; I can’t remember exactly when.

The first part of the year is always the slowest. So January through March, you never know if that’s an indication of how bad the rest of the year is going to be.

In our case, it was. But I remember thinking “OK, the stimulus money has dried up. The economy is not great.” So I wasn’t sure if it was just broad market conditions or ChatGPT specifically.

But even the work we were doing was changing rapidly. We’d have people come to us like “hey, this was written by ChatGPT, can you clean it up?”

And we’d charge less because it was just an editing job and not fully writing from scratch.

The drop off from 2022 to 2023 was BAD. The drop off from 2023 to 2024 was CATASTROPHIC.

By the end of that year, the company had lost the remaining staff. I had one last push before November 2023 (the end of the year has historically been the best time for our business, with Black Friday and Christmas), but I only succeeded in draining my bank account, and I was forced to let go of our last real employee, my sister, in early 2024. My brother and his wife were also doing some contract work for me at the time, and I had to end that pretty abruptly after our big push failed.

I remember believing that things were going to turn around again once people realized that even having a writing machine was not enough to create success the way a real copywriter can. After all, the message is only one part of it – divorced from the overall strategy of market and media, it’s never as effective as it can be.

In other words, there’s a context in which all marketing messages are seen, and it takes a human to understand what will work in that context.

But instead, what happened is that the pace of adoption kept speeding up, and all of those small entrepreneurs who used to rely on us now used AI to do the work.

The technological advancements of GPT-4, and everyone trying to build their own AI, dominated the airwaves throughout 2023 and 2024. And technology adoption skyrocketed.

The thing is, I can’t even blame people. To be honest, when I’m writing marketing copy I use AI to speed up the process.

I still believe you need intelligence and strategy behind your ideas, or they will simply be meaningless words on a screen – but I can’t blame people for using these very cheap tools instead of paying an expert hundreds of dollars to get their website written.

Especially in my end of the market, where we were working with startup entrepreneurs who are bootstrapping their way to success.

When I officially left the business a few months ago, it left just my partner manning the Fiverr account we started with over 8 years ago.

I think the account is active enough to support a single person now, but I wouldn’t be so sure about next year.

Normally there are signs of life around April – in 2025, May had come and there was hardly a pulse in the business.

I still believe there may be a space for copywriters in the future, but much like tailors and seamstresses, it will be a very, very niche market for only the highest-end clients.

—Marcus Wiesner

My hours have been cut from nearly full time to 4-5 a month

Medical writer

I’m a medical writer; I work as a contract writer for a large digital marketing platform, adapting content from pharma companies to fit our platform. Medical writers work in regulatory, clinical, and marketing fields and I’m in marketing. I got my current contract job just 2 years ago, back when you could get this job with just a BA/BS.

In the last 2 years the market has changed drastically. My hours have been cut from nearly full time (up to March ’24) to 4–5 a month now, if I’m lucky. I’ve been applying for new jobs for over a year and have had barely a nibble.

The trend now seems to be to have AI produce content and then hire professionals with advanced degrees to check it over, paying them less per hour than I make now when I actually get work.

I am no longer qualified to do the job I’ve been doing, which is really frustrating. I’m trying to find a new career, trying to start over at age 50.

—Anonymous

We learned our work had been used to train LLMs and our jobs were outsourced to India

Editor for Gracenote

So I lost my previous job to AI, and to a lot of other things besides. I always joke that the number of historical trends that led to me losing it is basically a summary of the recent history of Western Civilization.

I used to be a schedule editor for Gracenote (the company that used to find metadata for CDs that you ripped into iTunes). They got bought by Nielsen, the TV ratings company, and then tasked with essentially adding metadata to TV guide listings. When you hit the info button on your remote, or when you Google a movie and get the card, a lot of that is Gracenote. The idea was that we could provide accurate, consistent, high-quality text metadata that companies could buy to add to their own listings. There’s a specific style of Gracenote Description Writing that still sticks out to me every time I see it.

So, basically from when I joined the company in late 2021, things were going sideways. I’m based in the Netherlands and worker protections are good, but we heard horror stories of whole departments in the US showing up, being called into a “town hall” and laid off en masse, so the writing was on the wall. We unionised, but they seemed to be dragging their feet on getting us a CAO (Collective Labour Agreement) that would codify a lot of our benefits.

The way the job worked was each editor would have a group of TV channels they would edit the metadata for. My team worked on the UK market, and a lot of us were UK transplants living in the NL. During my time there I did a few groups but, being Welsh, I eventually ended up with the Welsh, Irish and Scottish channels like S4C, RTE, BBC Alba. The two skills we were selling to the company were essentially: knowledge of the UK TV market used to prioritise different shows, and a high degree of proficiency in written English (and I bet you think you know why I lost the job to AI, but hold on).

Around January 2024 they introduced a new tool in the proprietary database we used that totally changed how our work was done. Instead of channel groups that we prioritised ourselves, we were given an interface that would load 10 or so show records from any channel group, auto-sorted by priority. It was then revealed to us that for the last two years or so, every single bit of our work in prioritisation had been fed into machine learning to try and work out how and why we prioritised certain shows over others.

“Hold on” we said, “this kind of seems like you’ve developed a tool to replace us with cheap overseas labour and are about to outsource all our jobs”

“Nonsense,” said upper management, “ignore the evidence of your lying eyes.”

That is, of course, what they had done.

They had a business strategy they called “automation as a movement”, and we assumed they would be introducing LLMs into our workflow. But, as they openly admitted when they eventually told us what they were doing, LLMs simply weren’t (and still aren’t) good enough to do the work of assimilating, parsing and condensing the many different sources of information we needed to do the job. Part of it was accuracy: we would often have to research show information online, and a lot of our job amounted to enclosing the digital commons by taking episode descriptions from fan wikis and rewriting them. Part of it was variety: the information for the descriptions was ingested into our system in many different ways, including press sites, press packs from the channels, emails, spreadsheets, etc., and “AI” at the time wasn’t up to the task. The writing itself would have been entirely possible, as it was already very formulaic, but getting the information to the point where it was writable by an LLM was so impractical as to be impossible.

So they automated the other half of the job, the prioritisation. The writing was outsourced to India. As I said at the start, there are a lot of historical currents at play here. Why are there so many people in India who speak and write English to a high standard? Don’t worry about it!

And, the cherry on the cake, both the union and the works council knew this would be happening, but were legally barred from telling us because of “competitive advantage”. They negotiated a pretty good severance package for those of us on “vastcontracts” (essentially permanent employees, as opposed to time-limited contracts) but it still saw a team of 10 reduced to 2 in the space of a month.

—Anonymous

Coworkers told me to my face that AI could and maybe should be doing all my work

Nonprofit communications worker

I currently work in nonprofit communications, and worked as a radio journalist for about four years before that. I graduated college in 2020 with a degree in music and broadcasting.

In my current job, I hear about the benefits of AI on a weekly basis. Unfortunately, those benefits consist of doing tasks that are a part of my direct workload. I’m already struggling to handle the amount of downtime that I have, as I had worked in the always-behind-schedule world of journalism before this (in fact, I am writing this on the clock right now). My duties consist mainly of writing for and putting together weekly and quarterly newsletters and writing our social media.

After a volunteer who recorded audio versions of our newsletters passed away suddenly, it was brought up in a meeting two hours after we heard the news that AI should be the one to create the audio versions going forward. I had to remind them that I am in fact an award-winning radio journalist and audio producer (I produce a few podcasts on a freelance basis, some of which are quite popular) and that I already have little work to do and would be able to take over those duties. After about two weeks of fighting, it was decided that I would be recording those newsletters.

I also make sure our website is up-to-date on all of our events and community outings. At some point, I stopped being asked to write blurbs about the different events, and I learned that our IT Manager was now using AI to write those blurbs instead. They suck, but I don’t get to make that call.

It has been brought up more than once that our social media is usually pretty fact-forward and could easily be written by AI. That might be true, but it is also about half of my already very light workload. If I lost that, I would have very little to do. This has not yet been decided.

I have been told (to my face!) by my coworkers that AI could and maybe should be doing all of my work. People who are otherwise very progressive-leaning seem to see no problem with me being out of work. While it was a win for me to be able to record the audio newsletters, I feel as if I am losing the battle for the right to do what I have spent the last five years of my life doing. I am 30 and making pennies, barely able to afford a one-bedroom apartment, while logging three to four hours of solitaire on my phone every day. This isn’t what I signed up for in life. My employers have given me some new work to do, but that is mostly planning parties and spreading cheer through the workplace, something I loathe and never asked for. There are no jobs in my field in my area.

I have seen two postings in the past six months for communications jobs that pay enough for me to continue living in my apartment. I got neither of them.

While I am still able to write my newsletter articles, those give me very little joy, and if things keep progressing at this rate I won’t even have those. I’ll be nothing but a party planner. I don’t even like parties. Especially not for people who think I should be out of a job.

So far, I have seen little pushback from my employer against having AI do my entire job. Even though I think this is a horrible idea, as the topics I write about are often sensitive and personal, I have no faith that they will not go in this direction. At this point, I am concerned about layoffs and my financial future.

[We checked in with the contributor a few weeks after he reached out to us and he gave us this update:]

I am now being sent clearly AI-written articles from heads of other departments (on subjects that I can and will soon be writing about) for publication on our website. And when I say “clearly AI,” I mean I took one look and knew immediately, and was backed up by an online AI checker (which I realize is not always accurate, but still). The other change is that the past several weeks have taught me that I don’t want to be a part of this field any longer. I can find another comms job, and actually have an interview with another company tomorrow, but I have no reason to believe that they won’t also be pushing for AI at every turn.

—Anonymous

I’m a copywriter by trade. These days I do very little

Copywriter

I’m a copywriter by trade. These days I do very little. The market for my services is drying up rapidly, and I’m not the only one who is feeling it. I’ve spoken to many copywriters who have noticed a drop in their work, or whose clients are writing with ChatGPT and asking copywriters to simply edit it.

I have clients who ask me to use AI wherever I can and to let them know how long it takes. It takes me less time and that means less money.

Some copywriters have just given up on the profession altogether.

I have been working with AI for a while. I teach people how to use it. What I notice is a move towards becoming an operator.

I craft prompts, edit through prompts, and add my skills along the way (I feel my copywriting skills mean I can prompt and analyse output better than a non-writer can). But writing like this doesn’t feel like it used to. I don’t go through the full creative process. I don’t do the hard work that makes me feel alive afterwards. It’s different: more clinical and much less rewarding.

I don’t want to be a skilled operator. I want to be a human copywriter. Yet, I think these days are numbered.

—Anonymous

I did “adapt or die” using AI, but I’m still in a precarious position

Ghostwriter

From 2010 to today, I worked as a freelance writer in two capacities: freelance journalism for outlets like Cannabis Now, High Times, Phoenix New Times, and The Street, and ghostwriting through a variety of marketplaces (Elance, Fiverr, WriterAccess, Scripted, Crowd Content) and agencies (Volume 9, Influence & Co, Intero Digital, Cryptoland PR).

The freelance reporting market still exists but is extremely competitive and pretty poorly paid, so I largely made my living ghostwriting to supplement that income. The marketplaces have all largely dried up unless you have a highly ranked account. I do not, because I never wanted to grind through the low-paid work long enough. I did attempt to use ChatGPT for low-paid WriterAccess jobs but got declined.

Meanwhile, my steadiest ghostwriting client was Influence & Co/Intero Digital. Through this agency, I have ghostwritten articles for nearly everyone you can think of (except Vox/Verge): NYT, LA Times, WaPo, WSJ, Harvard Business Review, Venture Beat, HuffPost, AdWeek, and so many more. And I’ve done it for execs at large tech companies, politicians, and more. The reason it works is that they have guest posts down to a science.

They built a database of all publishers’ guidelines. If I wanted to be in HBR, I knew the exact submission guidelines and could pitch relevant topics based on the client. Once the pitch is accepted, an outline is written, and the client is interviewed. This interview is crucial because it’s where we tap into the source and gain firsthand knowledge that can’t be found online. It also captures the client’s natural voice. I then combine the recorded interview with targeted online research to find statistics and studies to back up what the client says, connect it to recent events, and format to the publisher’s specs.

So ChatGPT came along in December 2022, and for most of 2023 things were fine, although Influence & Co was bought by Intero, so internal issues were arising. I was with this company from the start, when they were still emailing Word docs, through building the database and selling the company several times. I can go on and on about how it all works.

We as writers don’t use ChatGPT, but it still seeped into the workflow from the client side. The client interview I mentioned above was vital because it captures info you can’t get online, the client’s voice, everything you need to do it right. Well, those clients started using ChatGPT. By the end of 2023, I couldn’t handle it anymore because my job had fundamentally changed. I was no longer learning anything. That vital mix that made it work was gone, and it was all me combining ChatGPT and the internet to try and make it fit into those publications above, many of which implemented AI detection, started publishing their own AI articles, and stopped accepting outside contributions.

The thing about writing in this instance is that it doesn’t matter how many drafts you write: if it doesn’t get published in an acceptable publication, then it looks like we did nothing. What was steady work for over a decade slowed to a trickle, and I was tired of the work that was coming in because it was so bad.

Last summer, I emailed them and quit. I could no longer depend on the income. It was $1,500–$3,000 a month for over a decade, and then by 2024 it was $100 a month. And I hated doing it. It was the lowest-level BS work, and I hated it so much. I loved that job because I learned so much and I was challenged trying to get into all those publications, even if it was a team effort and not just me. I wrote some killer articles that ChatGPT could never. And the reason AI took my job is that clients who hired me for hundreds to thousands of dollars a month decided it wasn’t worth their time to follow our process and used ChatGPT instead.

That is why I think it’s important to talk about it. I probably could still be working today in what became a content mill. And the reason it ultimately became no longer worth it isn’t all the corporate changes. It wasn’t my boss who was using AI—it was our customers. Working with us was deemed not important, and it’s impossible to explain to someone in an agency environment that they’re doing it to themselves. They will just go to a different agency and keep trying, and many of the unethical ones will pull paid tricks that make it look more successful than it is, like paying Entrepreneur $3000 for a year in their leadership network. (That comes out to paying $150 per published post, which is wild considering the pay scale above.)

The whole YEC publishing conglomerate is another rabbit hole. Forbes, CoinTelegraph, Newsweek, and others have the same paid club structure that happens to come with guest post access. And those publishers allow paid marketing in the guise of editorials.

I could probably write a book about the backend of all this stuff and how guest posts end up on every media outlet on the planet. Either way, ChatGPT ruined it, and I’m largely retired now. I am still doing some ghostwriting, but it’s more in the vein of PR and marketing work for various agencies I can find that need writers. The market still exists, even if I have to work harder for clients.

And inexplicably, the reason we met originally was that I was involved in the start of Adobe Stock accepting AI outputs from contributors. I now earn $2,500 per month consistently from that, and I have a lot of thoughts about how, as a writer with deep inside knowledge of the writing industry, I couldn’t find a single way to “adapt or die” and leverage ChatGPT to make money. I could probably put up a website and build some social media bots. But plugging AI into the existing industry wasn’t possible. It was already competitive. Yet I somehow managed to build a steady recurring residual income stream selling Midjourney images on Adobe Stock for $1 apiece. I’m on track to earn $30,000 this year from that, compared to only $12,000 from writing. I used to earn $40,000–$50,000 a year doing exclusively writing from 2011 to 2022.

I did “adapt or die” using AI, but I’m still in a precarious position. If Adobe shuts down or stops accepting AI, I’ll be screwed. It doesn’t help that I’m very vocally against Adobe and called them out last year via Bloomberg for training Firefly on Midjourney outputs when I’m one of the people making money from it. I’m fascinated to learn how the court cases end up and how they impact my portfolio. I’m currently working to learn photography and videography well enough to head to Vegas and LA for conferences next year to build a real editorial stock portfolio across the other sites.

So my human writing job was reduced below a living wage, and I have an AI image portfolio keeping me afloat while I try to build a human image/video portfolio faster than AI images are banned. Easy peasy right?

—Brian Penny

The agency was begging me to take on more work. Then it had nothing for me

Freelance copywriter

I was a freelance copywriter. I am going to be fully transparent and say I was never one of those people who hustled the best, but I had steady work. Then AI came, and one of the main agencies I worked for went from begging me to take on more work to having zero work for me in just 6–8 months. I struggled to find other income, then found another agency that had come out of the initial AI hype and built a base of clients who had realized AI was slop, only for their customer base to be decimated by Trump’s tariffs about a month after I joined.

What I think people fail to realize when they talk about AI is that it comes on the tail end of a years-long employment crisis for college grads. I only started freelancing because I applied to hundreds of jobs after winding up back at my mom’s house during COVID-19. Anecdotally, most of the friends I graduated with (Class of 2019) spent years struggling to find stable, full-time jobs with health insurance, pre-AI. Add AI to the mix, and getting your foot in the door of most white-collar industries just got even harder.

As I continue airing my grievances in this email, I remember when ChatGPT first came out a lot of smug literary types on Twitter were saying “if your writing can be replaced by AI then it wasn’t good to begin with,” and that made me want to scream. The writing that I’m actually good at was the writing that nobody was going to pay me for, because the media landscape is decimated!

Content writing/copywriting was supposed to be the way you support yourself as an artist, and now even that’s gone.

—Rebecca Duras

My biggest client replaced me with a custom GPT. They surely trained it using my work

Copywriter and Marketing Consultant

I am a long-time solopreneur and small business owner, who got into the marketing space about 8 years ago. This career shift was quite the surprise to me, as for most of my career I didn’t like marketing…or marketers. But here we are ;p

While I don’t normally put it in these terms, what shifted everything for me was realizing that copywriting was a thing — it could make a huge difference in my business and for other businesses, too. With a BA in English, and after doing non-marketing writing projects on the side for years, it just made a ton of sense to me that the words we use to talk about our businesses can make a big difference. I was hooked.

After pursuing some training, I had a lucrative side-hustle doing strategic messaging work and website copy for a few years before jumping into full-time freelancing in 2021. The work was fun, the community of marketers I was a part of was amazing, and I was making more money than I ever could have in my prior business.

And while the launch of ChatGPT in Nov ‘22 definitely made many of us nervous — writing those words brings into focus how stressful the existential angst has actually been since that day — for me and many of my copywriting friends, the good times just kept rolling. 2023 was my best year ever in business — by a whopping 30%. I wasn’t alone. Many of my colleagues were also killing it.

All of that changed in 2024.

Early that year, the AI propaganda seemed to reach its full crescendo, and it started significantly impacting my business. I quickly noticed leads were down, and financially, things started feeling tight. Then, that spring, my biggest retainer client suddenly gave me 30 days’ notice that they wouldn’t renew my contract – which made up half of what I needed to live on. The decision caught everyone, including the marketing director, off guard. She loved what I was doing for them and cried when she told me the news. I later found out through the grapevine that the CEO and his right-hand guy were hoping to replace me with a custom GPT they had created. They surely trained it using my work.

The AI-related hits kept coming. The thriving professional community I enjoyed pretty much imploded that summer – largely because of some unpopular leadership decisions around AI. Almost all of my skilled copywriter friends left the organization — and while I’ve lost touch with most, the little I have heard is that almost all of them have struggled. Many have found full-time employment elsewhere.

I won’t go into all the ins-and-outs of what has happened to me since, and I’ll leave my rant about getting AI slop from my clients to “edit” alone. (Briefly, that task is beyond miserable.)

But I will say from May of 2024 to now, I’ve gone from having a very healthy business and amazing professional community, to feeling very isolated and struggling to get by. Financially, we’ve burned through $20k in savings and almost $30k in credit cards at this point. We’re almost out of cash and the credit cards are close to maxed. Full-time employment that’d pay the bills (and get us out of our hole) just isn’t there. Truthfully, if it wasn’t for a little help from some family – and basically being gifted two significant contracts through a local friend – we’d be flat broke with little hope on the horizon. Despite our precarious position, continuing to risk freelance work seems to be our best and pretty much only option.

I do want to say, though, that even though it’s bleak, I see some signs of hope. In the last few months, in my experience, many business owners have been waking up to the fact that AI can’t do what it claims it can. Moreover, with all of the extra slop around, they’re feeling even more overwhelmed – which means if you can do any marketing strategy and consulting, you might make it.

But while I see that things might be starting to turn, the pre-AI days of junior copywriting roles and of freelancers being able to make lots of money writing non-AI content seem to be long gone. I think those writers who don’t lean on AI and find a way to make it through will be in high demand once the AI illusion starts to lift en masse. I just hope enough business owners who need marketing help wake up before then, so that more of us writers don’t have to starve.

—Anonymous

Source: https://www.bloodinthemachine.com/p/i-was-forced-to-use-ai-until-the

OpenAI’s Atlas Browser Takes Direct Aim at Google Chrome

The new ChatGPT-powered web browser is OpenAI’s boldest play yet to reinvent how people use the web.

OpenAI announced on Tuesday it’s rolling out a new internet browser called Atlas that integrates directly with ChatGPT. Atlas includes features like a sidebar window people can use to ask ChatGPT questions about the web pages they visit. There’s also an AI agent that can click around and complete tasks on a user’s behalf.

“We think that AI represents a rare, once-a-decade opportunity to rethink what a browser can be about,” OpenAI CEO Sam Altman said during a livestream announcing Atlas. “Tabs were great, but we haven’t seen a lot of browser innovation since then.”

Atlas debuts as Silicon Valley races to use generative AI to reshape how people experience the internet. Google has also announced a plethora of AI features for its popular Chrome browser, including a “sparkle” button that launches its Gemini chatbot. Chrome remains the most used browser worldwide.

OpenAI says the Atlas browser will be available starting today for ChatGPT users globally on macOS. Windows and mobile options are currently in the works. Atlas is free to use, though its agent features are reserved for subscribers to OpenAI’s ChatGPT Plus or ChatGPT Pro plans.

OpenAI highlighted how Atlas can help users research vacations and other activities.

“We’ve made some major upgrades to search on ChatGPT when accessed via Atlas,” Ryan O’Rourke, OpenAI’s lead designer for the browser, said during the livestream. If a user asks for movie reviews in the Atlas search bar, a chatbot-style answer will pop up first, rather than the more traditional collection of blue links users might expect when searching the web via Google.

Now, in addition to that result, users can switch to other tabs to see a collection of website links, images, videos, or news related to their queries. It’s a bit of an inversion of the Google Chrome experience. Rather than the search result being a collection of links with AI features added on top of that, the AI chatbot is central in Atlas, with the list of website links or image results as secondary.

Another feature OpenAI highlighted in the livestream is Atlas’ ability to collect “browser memories.” The capability is optional, and is an iteration of ChatGPT’s existing memory tool that stores details about users based on their past interactions with the chatbot. The browser can recall what you searched for in the past and use that data when suggesting topics of interest and actions to take, like automating an online routine it detects or returning to a website you previously visited that could be helpful for a current project.

In Atlas, users can also highlight whatever they are writing and request assistance from ChatGPT.

Tech giants and smaller startups have been experimenting with baking AI into web browsers for the past several years. Microsoft was one of the first movers when it threw its AI tool, called Bing at the time, into its Edge browser as a sidebar. Since then, browser-focused companies like Opera and Brave have also continued to tinker with different AI integrations. Another notable entry in the AI browser wars is Perplexity’s Comet, which launched this year and is also free to use.

Source: https://www.wired.com/story/openai-atlas-browser-chrome-agents-web-browsing/

AI Is Changing What High School STEM Students Study

A degree in computer science used to promise a cozy career in tech. Now, students’ ambitions are shaped by AI, in fields that blend computing with analysis, interpretation, and data.

In the early 2010s, nearly every STEM-savvy college-bound kid heard the same advice: Learn to code. Python was the new Latin. Computer science was the ticket to a stable, well-paid, future-proof life.

But in 2025, the glow has dimmed. “Learn to code” now sounds a little like “learn shorthand.” Teenagers still want jobs in tech, but they no longer see a single path to get there. AI seems poised to snatch up coding jobs, and there isn’t a plethora of AP classes in vibe coding. Their teachers are scrambling to keep up.

“There’s a move from taking as much computer science as you can to now trying to get in as many statistics courses” as possible, says Benjamin Rubenstein, an assistant principal at New York’s Manhattan Village Academy. Rubenstein has spent 20 years in New York City classrooms, long enough to watch the “STEM pipeline” morph into a network of branching paths instead of one straight line. For his students, studying stats feels more practical.

Forty years ago, students inspired by NASA dreamed of becoming physicists or engineers. Twenty years later, the allure of jobs at Google and other tech giants sent them into computer science. Now, their ambitions are shaped by AI, leading them away from the stuff AI can do (coding) and toward the stuff it still struggles with. As interest in computer science degrees falters, STEM-minded high schoolers are looking at fields that blend computing with analysis, interpretation, and data.

Rubenstein still requires every student to take computer science before graduation, “so they can understand what’s going on behind the scenes.” But his school’s math department now pairs data literacy with purpose: an Applied Mathematics class where students analyze New York Police Department data to propose policy changes, and an Ethnomathematics course linking math to culture and identity. “We don’t want math to feel disconnected from real life,” he says.

It’s a small but telling shift—one that, Rubenstein says, isn’t happening in isolation. After a long boom, universities are seeing the computer-science surge cool. The number of computer science, computer engineering, and information degrees awarded in the 2023–2024 academic year in the US and Canada fell by about 5.5 percent from the previous year, according to a survey by the nonprofit Computing Research Association.

At the high school level, the appetite for data is visible. AP Statistics logged 264,262 exam registrations in 2024, making it one of the most-requested AP tests, per Education Week. AP computer-science exams still draw big numbers—175,261 students took AP Computer Science Principles, and 98,136 took AP Computer Science A in 2024—but the signal is clear: Data literacy now sits alongside coding, not beneath it.

“Students who see themselves as STEM people will pursue whatever they think makes them a commodity, something valued in the workplace,” Rubenstein says. “The workplace can basically shift education if it wants to by saying, ‘Here’s what we need from students.’ K–12 will follow suit.”

Amid all this, AI’s rise leaves teachers in a difficult position. They’re trying to prepare students for a future defined by machine learning while managing how easily those same tools can short-circuit the learning process.

Yet Rubenstein believes AI could become a genuine ally for STEM educators, not a replacement. He imagines classrooms where algorithms help teachers identify which students grasp a concept and which need more time, or suggest data projects aligned with a student’s interests—ways to make learning more individualized and applied.

It’s part of the same shift he’s seen in his students: a move toward learning how to interpret and use technology, not just build it. Other educators are starting to think along similar lines, exploring how AI tools might strengthen data literacy or expand access to personalized STEM instruction.

At the University of Georgia, science education researcher Xiaoming Zhai is already testing what that could look like. His team builds what he calls “multi-agent classroom systems,” AI assistants that interact with teachers and students to model the process of scientific inquiry.

Zhai’s projects test a new kind of literacy: not just how to use AI but how to think with it. He tells the story of a visiting scholar who had never written a line of code yet used generative AI to build a functioning science simulation.

“The bar for coding has been lowered,” he says. “The real skill now is integrating AI with your own discipline.”

Zhai believes AI shouldn’t be treated as an amalgamation of the STEM disciplines but as part of their core. The next generation of scientists, he says, will use algorithms the way their predecessors used microscopes—to detect patterns, test ideas, and push the boundaries of what can be known. Coding is no longer the frontier; the real skill is learning how to interpret and collaborate with machine intelligence. As chair of a national committee on AI in science education, Zhai is pushing to make that shift explicit, urging schools to teach students to harness AI’s precision while staying alert to its blind spots.

“AI can do some work humans can’t,” he says, “but it also fails spectacularly outside its training data. We don’t want students who think AI can do everything or who fear it completely. We want them to use it responsibly.”

That balance between fluency and skepticism, ambition and identity, is quietly rewriting what STEM means in schools like Rubenstein’s. Computer-science classes aren’t going away, but they’re sharing the stage with forensics electives, science-fiction labs, and data-ethics debates.

“Students can’t think of things as compartmentalized anymore,” Rubenstein says. “You need multiple disciplines to make good decisions.”

AI isn’t coming—it’s here. Today’s STEM students aren’t fighting it; they’re learning to read it, question it, and use it. The new skill isn’t coding the machine, but understanding its logic well enough to steer it.

Source: https://www.wired.com/story/stem-high-school-students-artficial-intelligence/

OpenAI rolls out ‘instant’ purchases directly from ChatGPT, in a radical shift to e-commerce and a direct challenge to Google

Source: https://fortune.com/2025/09/29/openai-rolls-out-purchases-direct-from-chatgpt-in-a-radical-shift-to-e-commerce-and-direct-challenge-to-google/

OpenAI said it will allow users in the U.S. to make purchases directly through ChatGPT using a new Instant Checkout feature powered by a payment protocol for AI co-developed with Stripe.

The new chatbot shopping feature is a big step toward helping OpenAI monetize its 700 million weekly users, many of whom currently pay nothing to interact with ChatGPT, as well as a move that could eventually steal significant market share from traditional Google search advertising.

The rollout of chatbot shopping features—including the possibility of AI agents that will shop on behalf of users—could also upend e-commerce, radically transforming the way businesses design their websites and try to market to consumers.

OpenAI said it was rolling out its Instant Checkout feature with Etsy sellers today, but would begin adding over a million Shopify merchants, including brands such as Glossier, Skims, Spanx, and Vuori “soon.”

The company also said it was open-sourcing the Agentic Commerce Protocol, a payment standard developed in partnership with payments processor Stripe that powers the Instant Checkout feature, so that any retailer or business could decide to build a shopping integration with ChatGPT. (Stripe’s and OpenAI’s commerce protocol, in turn, supports the open-source Model Context Protocol, or MCP, that was originally developed by AI company Anthropic last year. MCP is designed to allow AI models to directly hook into the backend systems of businesses and retailers. The new Agentic Commerce Protocol also supports more conventional API calls.)

OpenAI will take what it described as a small fee from the merchant on each purchase, helping to bolster the company’s revenue at a time when it is burning through many billions of dollars each year to train its AI models and keep them running.

How it works

OpenAI had previously launched a shopping feature in ChatGPT that helped users find products that were best suited to them, but the suggested results then linked out to merchants’ websites, where a user had to complete the purchase—analogous to the way a Google search works.

When a ChatGPT user asks a shopping-related question—such as “the best hiking boots for me that cost under $150” or “possible birthday gifts for my 10-year-old nephew”—the chatbot will still respond with product suggestions. Under the new system, if a user likes one of the suggestions and Instant Checkout is enabled, they will be able to click a “Buy” button in the chatbot response and confirm their order, shipping, and payment details without ever leaving the chat.

OpenAI said its “product results are organic and unsponsored, ranked purely on relevance to the user.” The company also emphasized that the results are not affected by the fee the merchant pays it to support Instant Checkout.

Then, to determine which merchants carrying that particular product should be surfaced for the user, “ChatGPT considers factors like availability, price, quality, whether a merchant is the primary seller, and whether Instant Checkout is enabled” when displaying results, the company said.
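As a toy illustration of what ranking on those stated factors could look like, here is a short Python sketch. Every field name and weight below is invented for illustration, since OpenAI has not published the actual ranking logic.

```python
# Toy merchant-ranking sketch based on the factors OpenAI says ChatGPT
# considers: availability, price, quality, primary seller, and whether
# Instant Checkout is enabled. Weights and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class MerchantOffer:
    name: str
    in_stock: bool
    price: float
    quality_score: float       # e.g. a review-derived score in [0, 1]
    is_primary_seller: bool
    instant_checkout: bool

def rank_offers(offers: list[MerchantOffer]) -> list[MerchantOffer]:
    def score(o: MerchantOffer) -> float:
        if not o.in_stock:
            return float("-inf")             # unavailable offers sink to the bottom
        return (o.quality_score
                + 0.5 * o.is_primary_seller  # bools count as 0 or 1
                + 0.25 * o.instant_checkout
                - 0.001 * o.price)           # cheaper is mildly better
    return sorted(offers, key=score, reverse=True)

offers = [
    MerchantOffer("BrandDirect", True, 149.0, 0.9, True, True),
    MerchantOffer("ResellerX", True, 139.0, 0.7, False, False),
]
print([o.name for o in rank_offers(offers)])  # BrandDirect ranks first
```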

OpenAI said that ChatGPT subscribers, who pay a monthly fee for premium features, would be able to use the same credit or debit card to which they charge their subscription, or to store alternate payment methods.

OpenAI’s decision to launch the shopping feature using Stripe’s Agentic Commerce Protocol will be a big boost for that payment standard, which can be used across different AI platforms and also works with different payment processors—although it is easier to integrate for existing Stripe customers. The protocol works by creating an encrypted token for payment details and other sensitive data.
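To make that token flow concrete, here is a minimal Python sketch of the general pattern the article describes: the processor swaps raw payment details for a short-lived, single-use token, and the merchant charges against the token without ever seeing the card. Everything here (names, fields, limits) is a hypothetical stand-in, not the actual Agentic Commerce Protocol schema.

```python
# Minimal sketch of single-use payment tokenization, assuming a processor-held
# vault. Illustrative only; not the published Agentic Commerce Protocol.
import secrets
import time

TOKEN_VAULT = {}  # stands in for the payment processor's secure storage

def issue_payment_token(card_details: dict, max_amount_cents: int) -> str:
    """Processor side: exchange raw card data for an opaque, expiring token."""
    token = "tok_" + secrets.token_urlsafe(24)
    TOKEN_VAULT[token] = {
        "card": card_details,              # never leaves the processor
        "max_amount_cents": max_amount_cents,
        "expires_at": time.time() + 600,   # short-lived
        "used": False,
    }
    return token

def redeem_token(token: str, amount_cents: int) -> dict:
    """Merchant side: charge against the token without seeing card data."""
    entry = TOKEN_VAULT.get(token)
    if entry is None or entry["used"] or time.time() > entry["expires_at"]:
        raise ValueError("token invalid, expired, or already used")
    if amount_cents > entry["max_amount_cents"]:
        raise ValueError("amount exceeds what the buyer authorized")
    entry["used"] = True
    return {"status": "captured", "amount_cents": amount_cents}

# The chat platform only ever handles the opaque token:
token = issue_payment_token({"number": "4242…", "exp": "12/27"}, 15_000)
print(redeem_token(token, 12_999))
```

The point of a design like this is that neither the chat platform nor the merchant ever holds reusable card data, which limits the damage if either side is compromised.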

Currently, OpenAI says that the user remains in control, having to explicitly agree to each step of the purchasing process before any action is taken. But it is easy to imagine that in the future, users may be able to authorize ChatGPT or other AI models to act more “agentically” and actually make purchases on their behalf based on a prompt, without having to check back in with the user.

The fact that users never have to leave the chat interface to make the purchase may pose a challenge to Alphabet’s Google, which makes most of its money by referring users to companies’ websites. Although Google may be able to roll out similar shopping features within its Gemini chatbot or “AI Mode” in Google Search, it’s unclear whether the fees it could charge for transactions completed in these AI-native ways would compensate for any loss in referral revenue, or what the opportunities would be for displaying other advertising around chatbot queries.

Anyone Can Buy Data Tracking US Soldiers and Spies to Nuclear Vaults and Brothels in Germany

Source: https://www.wired.com/story/phone-data-us-soldiers-spies-nuclear-germany/

by Dhruv Mehrotra and Dell Cameron

Nearly every weekday morning, a device leaves a two-story home near Wiesbaden, Germany, and makes a 15-minute commute along a major autobahn. By around 7 am, it arrives at Lucius D. Clay Kaserne—the US Army’s European headquarters and a key hub for US intelligence operations.

The device stops near a restaurant before heading to an office near the base that belongs to a major government contractor responsible for outfitting and securing some of the nation’s most sensitive facilities.

For roughly two months in 2023, this device followed a predictable routine: stops at the contractor’s office, visits to a discreet hangar on base, and lunchtime trips to the base’s dining facility. Twice in November of last year, it made a 30-minute drive to the Dagger Complex, a former intelligence and NSA signals processing facility. On weekends, the device could be traced to restaurants and shops in Wiesbaden.

The individual carrying this device likely isn’t a spy or high-ranking intelligence official. Instead, experts believe, they’re a contractor who works on critical systems—HVAC, computing infrastructure, or possibly securing the newly built Consolidated Intelligence Center, a state-of-the-art facility suspected to be used by the National Security Agency.

Whoever they are, the device they’re carrying with them everywhere is putting US national security at risk.

A joint investigation by WIRED, Bayerischer Rundfunk (BR), and Netzpolitik.org reveals that US companies legally collecting digital advertising data are also providing the world a cheap and reliable way to track the movements of American military and intelligence personnel overseas, from their homes and their children’s schools to hardened aircraft shelters within an airbase where US nuclear weapons are believed to be stored.

A collaborative analysis of billions of location coordinates obtained from a US-based data broker provides extraordinary insight into the daily routines of US service members. The findings also provide a vivid example of the significant risks the unregulated sale of mobile location data poses to the integrity of the US military and the safety of its service members and their families overseas.

We tracked hundreds of thousands of signals from devices inside sensitive US installations in Germany. That includes scores of devices within suspected NSA monitoring or signals-analysis facilities, more than a thousand devices at a sprawling US compound where Ukrainian troops were being trained in 2023, and nearly 2,000 others at an air force base that has crucially supported American drone operations.

A device likely tied to an NSA or intelligence employee broadcast coordinates from inside a windowless building with a metal exterior known as the “Tin Can,” which is reportedly used for NSA surveillance, according to agency documents leaked by Edward Snowden. Another device transmitted signals from within a restricted weapons testing facility, revealing its zig-zagging movements across a high-security zone used for tank maneuvers and live munitions drills.

We traced these devices from barracks to work buildings, Italian restaurants, Aldi grocery stores, and bars. As many as four devices that regularly pinged from Ramstein Air Base were later tracked to nearby brothels off base, including a multistory facility called SexWorld.

Experts caution that foreign governments could use this data to identify individuals with access to sensitive areas; terrorists or criminals could decipher when US nuclear weapons are least guarded; or spies and other nefarious actors could leverage embarrassing information for blackmail.

“The unregulated data broker industry poses a clear threat to national security,” says Ron Wyden, a US senator from Oregon with more than 20 years overseeing intelligence work. “It is outrageous that American data brokers are selling location data collected from thousands of brave members of the armed forces who serve in harm’s way around the world.”

Wyden approached the US Defense Department in September after initial reporting by BR and netzpolitik.org raised concerns about the tracking of potential US service members. DoD failed to respond. Likewise, Wyden’s office has yet to hear back from members of US president Joe Biden’s National Security Council, despite repeated inquiries. The NSC did not immediately respond to a request for comment.

“There is ample blame to go around,” says Wyden, “but unless the incoming administration and Congress act, these kinds of abuses will keep happening, and they’ll cost service members’ lives.”

The Oregon senator also raised the issue earlier this year with the Federal Trade Commission, following an FTC order that imposed unprecedented restrictions against a US company it accused of gathering data around “sensitive locations.” Douglas Farrar, the FTC’s director of public affairs, declined a request to comment.

WIRED can now exclusively report, however, that the FTC is on the verge of fulfilling Wyden’s request. An FTC source, granted anonymity to discuss internal matters, says the agency is planning to file multiple lawsuits soon that will formally recognize US military installations as protected sites. The source adds that the lawsuits are in keeping with years’ worth of work by FTC Chair Lina Khan aimed at shielding US consumers—including service members—from harmful surveillance practices.

Before a targeted ad appears on an app or website, third-party software embedded in apps, known as software development kits (SDKs), transmits information about users, often including location data, to data brokers, real-time bidding platforms, and ad exchanges. Data brokers then collect that data, analyze it, repackage it, and sell it.

In February of 2024, reporters from BR and Netzpolitik.org obtained a free sample of this kind of data from Datastream Group, a Florida-based data broker. The dataset contains 3.6 billion coordinates—some recorded at millisecond intervals—from up to 11 million mobile advertising IDs in Germany over what the company says is a 59-day span from October through December 2023.

Mobile advertising IDs are unique identifiers used by the advertising industry to serve personalized ads to smartphones. These strings of letters and numbers allow companies to track user behavior and target ads effectively. However, mobile advertising IDs can also reveal much more sensitive information, particularly when combined with precise geolocation data.
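To illustrate why that combination is so revealing, here is a rough Python sketch of a broker-style record as the article describes it (an advertising ID plus precise, timestamped coordinates) and a naive pattern-of-life inference over it. The schema, coordinates, and thresholds are all invented for illustration; this is not Datastream Group’s actual format.

```python
# Hypothetical broker-style location record and a naive "pattern of life"
# guess: nighttime pings cluster at home, mid-morning pings at a workplace.
# All fields and values are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Ping:
    ad_id: str   # resettable mobile advertising identifier
    lat: float
    lon: float
    ts: float    # unix timestamp; some broker data is millisecond-resolution

def centroid(points):
    return (sum(lat for lat, _ in points) / len(points),
            sum(lon for _, lon in points) / len(points))

def likely_home_and_work(pings: list[Ping]):
    night = [(p.lat, p.lon) for p in pings
             if datetime.fromtimestamp(p.ts, tz=timezone.utc).hour < 5]
    morning = [(p.lat, p.lon) for p in pings
               if 8 <= datetime.fromtimestamp(p.ts, tz=timezone.utc).hour < 12]
    return centroid(night), centroid(morning)

pings = [
    Ping("a1b2", 50.083, 8.249, 1_698_717_600.0),  # ~02:00 UTC, residential area
    Ping("a1b2", 50.050, 8.327, 1_698_742_800.0),  # ~09:00 UTC, near a base
]
home, work = likely_home_and_work(pings)
print("likely home:", home, "likely workplace:", work)
```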

In total, our analysis revealed granular location data from up to 12,313 devices that appeared to spend time at or near at least 11 military and intelligence sites, potentially exposing crucial details like entry points, security practices, and guard schedules—information that, in the hands of hostile foreign governments or terrorists, could be deadly.

Our investigation uncovered 38,474 location signals from up to 189 devices inside Büchel Air Base, a high-security German installation where as many as 15 US nuclear weapons are reportedly stored in underground bunkers. At Grafenwöhr Training Area, where thousands of US troops are stationed and have trained Ukrainian soldiers on Abrams tanks, we tracked 191,415 signals from up to 1,257 devices.

In Wiesbaden, home to the US Army’s European headquarters at Lucius D. Clay Kaserne, 74,968 location signals from as many as 799 devices were detected—some originating from sensitive intelligence facilities like the European Technical Center, once the NSA’s communication hub in Europe, and newly built intelligence operations centers.

At Ramstein Air Base, which supports some US drone operations, 164,223 signals from nearly 2,000 devices were tracked. That included devices tracked to Ramstein Elementary and High School, base schools for the children of military personnel.

Of these devices, 1,326 appeared at more than one of these highly sensitive military sites, potentially mapping the movements of US service members across Europe’s most secure locations.

The data is not infallible. Mobile ad IDs can be reset, meaning multiple IDs can be assigned to the same device. Our analysis found that, in some instances, devices were assigned more than 10 mobile ad IDs.
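As a minimal sketch of the counting behind figures like “74,968 location signals from as many as 799 devices,” the Python snippet below tallies the pings that fall inside a site’s bounding box and treats distinct ad IDs only as an upper bound on devices, since one phone can appear under several reset IDs. The coordinates and schema are invented for illustration.

```python
# Illustrative geofence tally: signals inside a site's bounding box, plus a
# device count that is only an upper bound because ad IDs can be reset.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ping:
    ad_id: str
    lat: float
    lon: float

# Rough bounding box standing in for one sensitive installation (made up).
SITE = {"lat_min": 50.044, "lat_max": 50.055, "lon_min": 8.320, "lon_max": 8.340}

def signals_at_site(pings, site):
    inside = [p for p in pings
              if site["lat_min"] <= p.lat <= site["lat_max"]
              and site["lon_min"] <= p.lon <= site["lon_max"]]
    # Distinct ad IDs overcount devices: one phone may appear under several
    # IDs after resets (the analysis found some devices with more than 10).
    return len(inside), len({p.ad_id for p in inside})

pings = [Ping("id-1", 50.0496, 8.3266), Ping("id-2", 50.0501, 8.3301),
         Ping("id-3", 51.0000, 9.0000)]  # this one falls outside the box
signals, max_devices = signals_at_site(pings, SITE)
print(f"{signals} signals from up to {max_devices} devices")
```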

The location data’s precision at the individual device level can also be inconsistent. By contacting several people whose movements were revealed in the dataset, the reporting collective confirmed that much of the data was highly accurate—identifying the work commutes and dog walks of the individuals contacted. However, this wasn’t always the case. One reporter whose ID appears in the dataset found that it often placed him a block away from his apartment, and during times when he was out of town. A study from the NATO Strategic Communications Centre of Excellence found that “quantity overshadows quality” in the data broker industry and that, on average, only up to 60 percent of the data surveyed can be considered precise.

According to its website, Datastream Group appears to offer “internet advertising data coupled with hashed emails, cookies, and mobile location data.” Its listed datasets include niche categories like boat owners, mortgage seekers, and cigarette smokers. The company, one of many in a multibillion-dollar location-data industry, did not respond to our request for comment about the data it provided on US military and intelligence personnel in Germany, where the US maintains a force of at least 35,000 troops, according to the most recent estimates.

Defense Department officials have known about the threat that commercial data brokers pose to national security since at least 2016, when Mike Yeagley, a government contractor and technologist, delivered a briefing to senior military officials at the Joint Special Operations Command compound in Fort Liberty (formerly Fort Bragg), North Carolina, about the issue. Yeagley’s presentation aimed to show how commercially available mobile data—already pervasive in conflict zones like Syria—could be weaponized for pattern of life analysis.

Midway through the presentation, Yeagley decided to raise the stakes. “Well, here’s the behavior of an ISIS operator,” he tells WIRED, recalling his presentation. “Let me turn the mirror around—let me show you how it works for your own personnel.” He then displayed data revealing phones as they moved from Fort Bragg in North Carolina and MacDill Air Force Base in Florida—critical hubs for elite US special operations units. The devices traveled through transit points like Turkey before clustering in northern Syria at a seemingly abandoned cement factory near Kobane, a known ISIS stronghold. The location he pinpointed was a covert forward operating base.

Yeagley says he was quickly escorted to a secured room to continue his presentation behind closed doors. There, officials questioned him on how he had obtained the data, concerned that his stunt had involved hacking personnel or unauthorized intercepts.

The data wasn’t sourced from espionage but from unregulated commercial brokers, he explained to the concerned DOD officials. “I didn’t hack, intercept, or engineer this data,” he told them. “I bought it.”

Now, years later, Yeagley remains deeply frustrated with the DOD’s inability to control the situation. What WIRED, BR, and Netzpolitik.org are now reporting is “very similar to the alarms we raised almost 10 years ago,” he says, shaking his head. “And it doesn’t seem like anything’s changed.”

US law requires the director of national intelligence to provide “protection support” for the personal devices of “at risk” intelligence personnel who are deemed susceptible to “hostile information collection activities.” But which personnel meet this criterion is unclear, as is the extent of the protections beyond periodic training and advice. Regardless, the location data we acquired demonstrates that commercial surveillance is far too pervasive and complex to be reduced to a matter of individual responsibility.

Biden’s outgoing director of national intelligence, Avril Haines, did not respond to a request for comment.

A report declassified by Haines last summer acknowledges that US intelligence agencies had purchased a “large amount” of “sensitive and intimate information” about US citizens from commercial data brokers, adding that “in the wrong hands,” the data could “facilitate blackmail, stalking, harassment, and public shaming.” The report, which contains numerous redactions, notes that while the US government “would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times,” smartphones, connected cars, and web tracking have all made this possible “without government participation.”

Mike Rogers, the Republican chair of the House Armed Services Committee, did not respond to multiple requests for comment. A spokesperson for Adam Smith, the committee’s ranking Democrat, said Smith was unavailable to discuss the matter, busy negotiating a must-pass bill to fund the Pentagon’s policy priorities next year.

Jack Reed and Roger Wicker, the leading Democrat and Republican on the Senate Armed Services Committee, respectively, did not respond to multiple requests for comment. Inquiries placed with House and Senate leaders and top lawmakers on both congressional intelligence committees have gone unanswered.

The DOD and the NSA declined to answer specific questions related to our investigation. However, DOD spokesperson Javan Rasnake says that the Pentagon is aware that geolocation services could put personnel at risk and urged service members to remember their training and adhere strictly to operational security protocols. “Within the USEUCOM region, members are reminded of the need to execute proper OPSEC when conducting mission activities inside operational areas,” Rasnake says, using the shorthand for operational security.

An internal Pentagon presentation obtained by the reporting collective, though, claims that not only is the domestic data collection likely capable of revealing military secrets, it is essentially unavoidable at the personal level; service members’ lives are simply too intertwined with the technology that permits it. This conclusion closely mirrors observations by Chief Justice John Roberts of the US Supreme Court, who in landmark privacy cases within the past decade described cell phones as “a pervasive and insistent part of daily life” and called owning one “indispensable to participation in modern society.”

The presentation, which a source says was delivered to high-ranking general officers, including the US Army’s chief information officer, warns that despite promises from major ad tech companies, “de-anonymization” is all but trivial given the widespread availability of commercial data collected on Pentagon employees. The document emphasizes that the caches of location data on US individuals are a “force protection issue,” likely capable of revealing troop movements and other highly guarded military secrets.

While instances of blackmail inside the Pentagon have seen a sharp decline since the Cold War, many of the structural barriers to persistently surveilling Americans have also vanished. In recent decades, US courts have repeatedly found that new technologies pose a threat to privacy by enabling surveillance that, “in earlier times, would have been prohibitively expensive,” as the 7th Circuit Court of Appeals noted in 2007.

In an August 2024 ruling, another US appeals court disregarded claims by tech companies that users who “opt in” to surveillance are “informed” and acting “voluntarily,” declaring the opposite is clear to “anyone with a smartphone.” The internal presentation for military staff stresses that adversarial nations can gain access to advertising data with ease, using it to exploit, manipulate, and coerce military personnel for purposes of espionage.

Patronizing sex workers, whether legal in a foreign country or not, is a violation of the Uniform Code of Military Justice. The penalties can be severe, including forfeiture of pay, dishonorable discharge, and up to one year of imprisonment. But the ban on solicitation is not imposed on principle alone, says Michael Waddington, a criminal defense attorney who specializes in court-martial cases. “There’s a genuine danger of encountering foreign agents in these establishments, which can lead to blackmail or exploitation,” he says.

“This issue is particularly concerning given the current geopolitical climate. Many US servicemembers in Europe are involved in supporting Ukraine in its defense against the Russian invasion,” Waddington says. “Any compromise of their integrity could have serious implications for our operations and national security.”

When it comes to jeopardizing national security, even data on low-level personnel can pose a risk, says Vivek Chilukuri, senior fellow and program director of the Technology and National Security Program at the Center for a New American Security (CNAS). Before joining CNAS, Chilukuri served as legislative director and tech policy advisor to US Senator Michael Bennet of the Senate Intelligence Committee and previously worked at the US State Department, specializing in countering violent extremism.

“Low-value targets can lead to high-value compromises,” Chilukuri says. “Even if someone isn’t senior in an organization, they may have access to highly sensitive infrastructure. A system is only as secure as its weakest link.” He points out that if adversaries can target someone with access to a crucial server or database, they could exploit that vulnerability to cause serious damage. “It just takes one USB stick plugged into the right device to compromise an organization.”

It’s not just individual service members who are at risk—entire security protocols and operational routines can be exposed through location data. At Büchel Air Base, where the US is believed to have stored an estimated 10 to 15 B61 nuclear weapons, the data reveals the daily activity patterns of devices on the base, including when personnel are most active and, more concerningly, potentially when the base is least populated.

Overview of the Air Mobility Command ramp at Ramstein Air Base, Germany. Photograph: Timm Ziegenthaler/Stocktrek Images; Getty Images

Büchel has 11 protective aircraft shelters equipped with hardened vaults for nuclear weapons storage. Each vault, which is located in a so-called WS3, or Weapons Storage and Security System, can hold up to four warheads. Our investigation traced precise location data for as many as 40 cellular devices that were present in or near these bunkers.

The patterns we could observe from devices at Büchel go far beyond just understanding the working hours of people on base. In aggregate, it’s possible to map key entry and exit points, pinpointing frequently visited areas, and even tracing personnel to their off-base routines. For a terrorist, this information could be a gold mine—an opportunity to identify weak points, plan an attack, or target individuals with access to sensitive areas.
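The kind of aggregate pattern described above can be illustrated with a short, hypothetical sketch: bucket a base’s location signals by hour of day to estimate when it is busiest and quietest. The input file and field names are assumptions for illustration only, not the investigation’s actual data.

```python
# Hypothetical sketch: hourly activity profile of a site from location
# signals. The input format is an assumption, not the investigation's data.
import csv
from collections import Counter
from datetime import datetime, timezone

hourly = Counter()
with open("buechel_signals.csv", newline="") as f:  # assumed column: ts (unix)
    for row in csv.DictReader(f):
        ts = datetime.fromtimestamp(int(row["ts"]), tz=timezone.utc)
        hourly[ts.hour] += 1

for hour in range(24):  # crude text histogram of signals per hour
    print(f"{hour:02d}:00 {'#' * (hourly[hour] // 50)}")

quietest = min(range(24), key=lambda h: hourly[h])
print(f"Least activity around {quietest:02d}:00 UTC")
```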

This month, German authorities arrested a former civilian contractor employed by the US military on allegations of offering to pass sensitive information about American military operations in Germany to Chinese intelligence agencies.

In April, German authorities arrested two German-Russian nationals accused of scouting US military sites for potential sabotage, including alleged arson. One of the targeted locations was the US Army’s Grafenwöhr Training Area in Bavaria, a critical hub for US military operations in Europe that spans 233 square kilometers.

At Grafenwöhr, WIRED, BR, and Netzpolitik.org could track the precise movements of up to 1,257 devices. Some devices could even be observed zigzagging through Range 301, an armored vehicle course, before returning to nearby barracks.

Our investigation found 38,474 location signals from up to 189 devices inside Büchel Air Base, where around a dozen US nuclear weapons are reportedly stored. Courtesy of OpenMapTiles

A senior fellow at Duke University’s Sanford School of Public Policy and head of its data brokerage research project, Justin Sherman also leads Global Cyber Strategies, a firm specializing in cybersecurity and tech policy. In 2023, he and his coauthors at Duke secured $250,000 in funding from the United States Military Academy to investigate how easy it is to purchase sensitive data about military personnel from data brokers. The results were alarming: They were able to buy highly sensitive, nonpublic, individually identifiable health and financial data on active-duty service members, without any vetting.

“It shows you how bad the situation is,” Sherman says, explaining how they geofenced requests to specific special operations bases. “We didn’t pretend to be a marketing firm in LA. We just wanted to see what the data brokers would ask.” Most brokers didn’t question their requests, and one even offered to bypass an ID verification check if they paid by wire.

During the study, Sherman helped draft an amendment to the National Defense Authorization Act that requires the Defense Department to ensure that highly identifiable individual data shared with contractors cannot be resold. He found the overall impact of the study underwhelming, however. “The scope of the industry is the problem,” he says. “It’s great to pass focused controls on parts of the ecosystem, but if you don’t address the rest of the industry, you leave the door wide open for anyone wanting location data on intelligence officers.”

Efforts by the US Congress to pass comprehensive privacy legislation have been stalled for the better part of a decade. The latest effort, known as the American Privacy Rights Act, failed to advance in June after GOP leaders threatened to scuttle the bill, which was significantly weakened before being shelved.

Another current privacy bill, the Fourth Amendment Is Not For Sale Act, seeks to ban the US government from purchasing data on Americans that it would normally need a warrant to obtain. While the bill would not prohibit the sale of commercial location data altogether, it would bar federal agencies from using those purchases to circumvent constitutional protections upheld by the Supreme Court. Its fate rests in the hands of House and Senate leaders, whose negotiations are private.

“The government needs to stop subsidizing what is now, for good reason, one of the world’s least popular industries,” says Sean Vitka, policy director at the nonprofit Demand Progress. “There are a lot of members of Congress who take seriously the severe threats to privacy and national security posed by data brokers, but we’ve seen many actions by congressional leaders that only further the problem. There shouldn’t need to be a body count for these people to take action.”

The Internet Archive’s Fight to Save Itself

Source: https://www.wired.com/story/internet-archive-memory-wayback-machine-lawsuits/

The web’s collective memory is stored in the servers of the Internet Archive. Legal battles threaten to wipe it all away.

If you step into the headquarters of the Internet Archive on a Friday after lunch, when it offers public tours, chances are you’ll be greeted by its founder and merriest cheerleader, Brewster Kahle.

You cannot miss the building; it looks like it was designed for some sort of Grecian-themed Las Vegas attraction and plopped down at random in San Francisco’s foggy, mellow Richmond district. Once you pass the entrance’s white Corinthian columns, Kahle will show you the vintage Prince of Persia arcade game and a gramophone that can play century-old phonograph cylinders on display in the foyer. He’ll lead you into the great room, filled with rows of wooden pews sloping toward a pulpit. Baroque ceiling moldings frame a grand stained glass dome. Before it was the Archive’s headquarters, the building housed a Christian Science church.

I made this pilgrimage on a breezy afternoon last May. Along with around a dozen other visitors, I followed Kahle, 63, clad in a rumpled orange button-down and round wire-rimmed glasses, as he showed us his life’s work. When the afternoon light hits the great hall’s dome, it gives everyone a halo. Especially Kahle, whose silver curls catch the sun and who preaches his gospel with an amiable evangelism, speaking with his hands and laughing easily. “I think people are feeling run over by technology these days,” Kahle says. “We need to rehumanize it.”

In the great room, where the tour ends, hundreds of colorful, handmade clay statues line the walls. They represent the Internet Archive’s employees, Kahle’s quirky way of immortalizing his circle. They are beautiful and weird, but they’re not the grand finale. Against the back wall, where one might find confessionals in a different kind of church, there’s a tower of humming black servers. These servers hold around 10 percent of the Internet Archive’s vast digital holdings, which includes 835 billion web pages, 44 million books and texts, and 15 million audio recordings, among other artifacts. Tiny lights on each server blink on and off each time someone opens an old webpage or checks out a book or otherwise uses the Archive’s services. The constant, arrhythmic flickers make for a hypnotic light show. Nobody looks more delighted about this display than Kahle.

It is no exaggeration to say that digital archiving as we know it would not exist without the Internet Archive—and that, as the world’s knowledge repositories increasingly go online, archiving would be far less functional without it. Its most famous project, the Wayback Machine, is a repository of web pages that functions as an unparalleled record of the internet. Zoomed out, the Internet Archive is one of the most important historical-preservation organizations in the world. The Wayback Machine has assumed a default position as a safety valve against digital oblivion. The rhapsodic regard the Internet Archive inspires is earned—without it, the world would lose its best public resource on internet history.

Its employees are some of its most devoted congregants. “It is the best of the old internet, and it’s the best of old San Francisco, and neither one of those things really exist in large measures anymore,” says the Internet Archive’s director of library services, Chris Freeland, another longtime staffer, who loves cycling and favors black nail polish. “It’s a window into the late-’90s web ethos and late-’90s San Francisco culture—the crunchy side, before it got all tech bro. It’s utopian, it’s idealistic.”

But the Internet Archive also has its foes. Since 2020, it’s been mired in legal battles. In Hachette v. Internet Archive, book publishers complained that the nonprofit infringed on copyright by loaning out digitized versions of physical books. In UMG Recordings v. Internet Archive, music labels have alleged that the Internet Archive infringed on copyright by digitizing recordings.

In both cases, the Internet Archive has mounted “fair use” defenses, arguing that it is permitted to use copyrighted materials as a noncommercial entity creating archival materials. In both cases, the plaintiffs characterized it as a hub for piracy. In 2023, it lost Hachette. This month, it lost an appeal in the case. The Archive could appeal once more, to the Supreme Court of the United States, but has no immediate plans to do so. (“We have not decided,” Kahle told me the day after the decision.)

A judge rebuffed an attempt to dismiss the music labels’ case earlier this year. Kahle says he’s thinking about settling, if that’s even an option.

The combined weight of these legal cases threatens to crush the Internet Archive. The UMG case could prove existential, with potential fines running into the hundreds of millions. The internet has entrusted its collective memory to this one idiosyncratic institution. It now faces the prospect of losing it all.

Kahle has been obsessed with creating a digital library since he was young, a calling that spurred him to study artificial intelligence at MIT. “I wanted to build the library of everything, and we needed computers that were big enough to be able to deal with it,” he says.

After graduating in 1982, he worked at the supercomputing startup Thinking Machines Corporation. While there, he developed a program called Wide Area Information Server (WAIS), a way to search for data on remote computers. He left to cocreate a startup of the same name, which he sold to AOL in 1995. The next year, he launched a two-headed project from his attic: “AI and IA.” That “AI” was a for-profit company called Alexa Internet—“Alexa” a nod to the Library of Alexandria—alongside the nonprofit Internet Archive. The two projects were interlinked; Alexa Internet crawled the web, then donated what it collected to the Internet Archive. Kahle couldn’t quite make the business model work. When Amazon made an offer in 1999, it seemed prudent to accept. The Everything Store paid a reported $250 million in stock for Alexa, severing the AI from IA and leaving Kahle a wealthy man.

Kahle stayed on with Alexa for a few years but left in 2002 to focus on the Internet Archive. It has been his vocation ever since. “His entire being is committed to the Archive,” says copyright scholar Pam Samuelson, who has known Kahle since the ’90s. “He lives and breathes it.”

If Silicon Valley has a Mr. Fezziwig, it’s Kahle. He’s not an ascetic; he owns a handsome black sailboat anchored in a slip at a tony yacht club. But his day-to-day life is modest. He ebikes to work and dresses like a guy who doesn’t care about clothes, and while he used to love Burning Man—he and his wife, Mary Austin, got married there in 1992—now he thinks it’s gotten too big. (Their current bougie-hippie pastime is the seasteading gathering Ephemerisle, where boaters hitch themselves together and create temporary islands in the Sacramento River Delta every July.)

What he really loves, above all, is his job.

“The story of Brewster Kahle is that of a guy who wins the lottery,” says longtime archivist Jason Scott. “And he and his wife, Mary, turned around and said, awesome, we get to be librarians now.”

Kahle is now the merry custodian of a uniquely comprehensive catalog, spanning all manner of digital and physical media, from classic video games to live recordings of concerts to magazines and newspapers to books from around the world. It recently backed up the island of Aruba’s cultural institutions. It’s an essential tool for everything from legal research—particularly around patent law—to accountability journalism. “There are other online archiving tools,” says ProPublica reporter Craig Silverman, “but none of them touch the Internet Archive.” It is, in short, a proof machine.

What makes the Internet Archive unique is its willingness to push boundaries in ways that traditional libraries do not. The Library of Congress also archives the web—but only after it has notified, and often asked permission from, the websites it scrapes.

“The Internet Archive has always been a little risky,” says University of Waterloo historian Ian Milligan, who has a forthcoming book on web archiving. Its distinctive utility is entwined with its long-standing outré approach to copyright. In fact, Kahle and the Internet Archive sued the government more than two decades ago, challenging the way the Copyright Renewal Act of 1992 and the Copyright Term Extension Act of 1998 had expanded copyright law. He lost that case—but, certainly, not his desire to keep pushing.

One of those pushes came in 2005. At the time, the beloved hacker Aaron Swartz was often working on Internet Archive projects, and together with Kahle he cocreated and led the development of a new initiative, the Open Library program. The goal was to create one webpage for every book in the world. Kahle saw it as an alternative to Google Books, one driven not by commercial interests but by loftier, decidedly kumbaya, information-wants-to-be-free ambitions.

In addition to its attempt to catalog every book ever, the project sought to make copies available to readers. To that end, it scans physical books, then allows people to check out the digitized versions. For over a decade, it has operated using a framework called controlled digital lending (CDL), where digitized books are treated as old-fashioned physical books rather than ebooks. The books it lends out were either purchased by the Internet Archive or donated by other libraries, organizations, or individuals; according to CDL principles, libraries that own a physical copy of a book should be able to lend it digitally.

The project primarily appeals to researchers for whom specific books are hard to attain elsewhere, rather than casual readers. “Try checking out one of our books and then reading it—it’s tough going,” Kahle says. He’s not lying. A blurry scan of a physical book on a desktop screen compared to a regular ebook on a Kindle is like music from a tinny iPhone speaker versus a Bose surround sound system. Most borrowers read what they check out for less than five minutes.

Like other digital media, ebooks are typically licensed rather than sold outright, at rates much higher than the cover price. Libraries that license ebooks get a limited number of loans; if they stop paying, the book vanishes. CDL is an attempt to give libraries more control over their inventory, and to expand access to books in a library’s collection that exist only as physical copies.
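The CDL model reduces to a simple invariant: a library may have no more digital loans of a title outstanding than physical copies it owns. Here is a minimal sketch of that rule; the class and method names are illustrative, not drawn from the Internet Archive’s systems.

```python
# Minimal sketch of the controlled-digital-lending invariant: outstanding
# digital loans never exceed owned physical copies. Names are illustrative.
class CdlTitle:
    def __init__(self, title: str, copies_owned: int):
        self.title = title
        self.copies_owned = copies_owned
        self.loans: set[str] = set()  # patron IDs with active loans

    def checkout(self, patron: str) -> bool:
        # The owned-to-loaned cap is the core CDL constraint.
        if patron not in self.loans and len(self.loans) < self.copies_owned:
            self.loans.add(patron)
            return True
        return False  # all owned copies are out; patron joins a waitlist

    def checkin(self, patron: str) -> None:
        self.loans.discard(patron)

book = CdlTitle("Out-of-print monograph", copies_owned=1)
assert book.checkout("patron-a")
assert not book.checkout("patron-b")  # a second simultaneous loan is refused
book.checkin("patron-a")
assert book.checkout("patron-b")
```

As the next paragraphs describe, the National Emergency Library amounted to suspending exactly this check.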

For years, publishers ignored the Internet Archive’s book-scanning spree. Finally, during the pandemic, after the Internet Archive took one liberty too many with its approach to CDL, they snapped.

In March 2020, as schools and libraries abruptly shut down, they faced a dilemma. Demand for ebooks far outstripped their ability to loan them out under restrictive licensing deals, and they had no way of lending out books that existed only in physical form. In response, the Internet Archive made a bold decision: It allowed multiple people to check out digital versions of the same book simultaneously. It called this program the National Emergency Library. “We acted at the request of librarians and educators and writers,” says Chris Freeland.

Kahle remembers feeling a vocational tug in that moment for the Internet Archive to do whatever it could to expand access. He thought they had broad support, too. “We got over 100 libraries to sign on and say ‘help us,’” Kahle says. “They stood behind the National Emergency Library and said ‘do this under our names.’”

Dave Hansen, now executive director of the nonprofit Authors Alliance, was a librarian at Duke University at the time. “We had tremendous challenges getting books for our students,” he says. “What they did was a good-faith effort.”

Not everyone agreed. Prominent writers vehemently criticized the project, as did the Authors Guild and the National Writers Union. “They are not a library. Libraries buy books and respect copyright. They are fraudsters posing as saints,” author James Gleick wrote on Twitter. (Today, Gleick maintains that the Internet Archive is not a library, though he says “fraudsters was a little harsh.”)

“They seem to work by fiat,” says Bhamati Viswanathan, a copyright lawyer who signed an amicus brief on behalf of the publishers in the Hachette case. Viswanathan thinks it was arrogant to circumvent the licensing system. “Very much like what the tech companies seem to be doing, which is, ‘we’re going to ask forgiveness, not permission.’”

The Internet Archive was in its first full-blown PR crisis. The coalition of publishing houses filed its lawsuit in June 2020, alleging that both the National Emergency Library and the Internet Archive’s broader Open Library program violated copyright. A few weeks later, the Internet Archive scuttled the National Emergency Library and reverted to its traditional, capped loan system, but it made no difference to the publishers.

The publishing houses and their supporters maintain that the Archive’s behavior harmed authors. “Internet Archive is arguing that it is OK to make and publicly distribute unauthorized copies of an author’s work to the global public,” Terrance Hart, the general counsel for the Association of American Publishers, tells WIRED. “Imagine if everyone started doing the same. The only existential threat here is the one posed by Internet Archive to the livelihoods of authors and to the copyright system itself in the digital age.”

After the lawsuit was filed, more than a thousand writers, including Naomi Klein and Daniel Ellsberg, signed a letter supporting the ability of libraries and the Internet Archive to loan digital books. One supportive author, Chuck Wendig, had very publicly changed his mind after initially tweeting criticism. Even some writers who currently belong to and support the Authors Guild, like Joanne McNeil, were staunch supporters of the Archive. She sometimes reads out-of-print books using the lending service and still sees it as a vital tool. “I hope my books are in the Open Library project,” she says, telling me that she’s already aware that her critically acclaimed but modestly popular books aren’t widely available. “At least I’ll know that way there’s someplace someone can find them.”

The shows of support didn’t matter. The publishers didn’t back down. In March 2023, the Internet Archive lost the case. This September, it lost its appeal. The court rejected the fair use arguments, insisting that the organization had not proved that it wasn’t financially harming publishers. In the meantime, legal bills continue to pile up ahead of the Internet Archive’s next challenge.

After the initial ruling in Hachette v. Internet Archive, the parties agreed upon settlement terms; although those terms are confidential, Kahle has confirmed that the Internet Archive can financially survive it thanks to the help of donors. If the Internet Archive decides not to file a second appeal, it will have to fulfill those settlement terms. A blow, but not a death knell.

The other lawsuit may be far harder to survive. In 2023, several major record labels, including Universal Music Group, Sony, and Capitol, sued the Internet Archive over its Great 78 Project, a digital archive of a niche collection of recordings in the obsolete record format known as 78s, which was used from the 1890s to the late 1950s. The complaint alleges that the project “undermines the value of music.” It lists 2,749 recordings as infringed, which means damages could potentially exceed $400 million.

“One thing that you can say about the recording industry,” Pam Samuelson says, “is that there are no statutory damages that are too large for them to claim.”

As with the book publishing case, the Internet Archive’s defense hinges on fair use. It argues that preserving obsolete versions of these records, complete with the crackles and pops from the old shellac resin, makes history accessible. Copyright law is notoriously unpredictable, and some find the Internet Archive’s case shaky. “It doesn’t strike me, necessarily, as a winning fair use argument,” says Zvi Rosen, a law professor at Southern Illinois University who focuses on copyright.

James Grimmelmann, a professor of digital and information law at Cornell University, thinks the labels are “vastly exaggerating the commercial harm” from the project. (If there was a sizable audience for extremely low-quality versions of songs, he reasons, why wouldn’t the labels be putting out 78-style releases?) On average, each recording is accessed only once a month. Still, Grimmelmann isn’t convinced that will matter. “They are directly reproducing these works,” he says. “That’s a very hard lift for a judge.”

It may be years before the case is resolved, which means the uncertainty about the Internet Archive’s future is likely to linger, and potentially spread. And if it is resolved through either a settlement or a win for the recording industry, other copyright holders could be inspired to sue. “I’m worried about the blast radius from the music lawsuit,” Grimmelmann says.

In Kahle’s view, the Internet Archive’s legal challenges are part of a larger story about beleaguered libraries in the United States. He likes to frame his plight as a battle against a cadre of nefarious publishers, one piece of a larger struggle to wrest back the right to own books in the digital age. (Get him started on the topic, and he’ll likely point out that both ebook distributor OverDrive and publishing company Simon & Schuster are owned by the global investment firm Kohlberg Kravis Roberts & Co.) He’s keenly aware that everything he has built is in danger. “It’s the time of Orwell but with corporations,” Kahle says. “It’s scary.”

Losing the Archive is, indeed, a frightening prospect. “There is a misperception that things on the web are forever—but they really, really aren’t,” says Craig Silverman, who thinks the nonprofit’s demise would make certain types of scholarship and reporting “way more difficult, if not impossible,” in addition to representing a disappearance of a bastion of collective memory.

Just this September, Google and the Internet Archive announced a partnership to allow people to see previous versions of websites surfaced through Google Search by linking to the Wayback Machine. Google previously offered its own cached historical websites; now it leans on a small nonprofit.
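The Wayback Machine can also be queried programmatically: the Internet Archive exposes a public availability API that returns the archived snapshot closest to a requested date. A minimal sketch follows; the endpoint and response shape are documented by the Archive, while the example URL and date are arbitrary.

```python
# Query the Internet Archive's documented availability API for the snapshot
# of a URL closest to a given date (timestamp format: YYYYMMDD).
import requests

def closest_snapshot(url: str, timestamp: str = "20060101") -> str | None:
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

print(closest_snapshot("wired.com"))  # e.g. a web.archive.org/web/... URL
```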

The Internet Archive also has challenges beyond its legal woes. For starters, it’s getting harder to archive things. As Mark Graham, director of the Wayback Machine, told me, the rise of apps with functions like livestreaming, especially when they’re limited to certain operating systems, presents a technical challenge. On top of that, paywalls are an obstacle, as is the sheer and ever-increasing amount of content. “There’s just so much material,” he says. “How does one know what to prioritize?”

Then there’s AI, once again. Thus far, the Internet Archive has sidestepped or been exempt from the new scrutiny on web crawling as it relates to AI training data. This June, for example, when Reddit announced that it was updating its scraping policy, it specifically noted that it was still allowing “good faith actors” like the Internet Archive to crawl it. But as opposition to rampant AI data scraping grows, the Internet Archive may yet face a new obstacle: If regulators and lawmakers are clumsy in attempts to curb permissionless AI web scraping, it could kneecap services like the Wayback Machine, which functions precisely because it can trawl and reproduce vast amounts of data.

The rise of AI has already soured some creative types on the Internet Archive’s approach to copyright. While Kahle views his creation as a library on the side of the little guy, opponents strenuously dispute this view. They paint Kahle as a tech-wolf disguised in librarian-sheep clothing, stuck in a mentality better suited for the Napster era. “The Internet Archive is really fighting the battles of 20 years ago, when it was as simple as ‘publishers bad, anything that hurts publishers good,’” says Neil Turkewitz, a former Recording Industry Association of America executive who has criticized the Archive’s copyright stances. “But that’s not the world we live in.”

When I talk to Kahle over Zoom this September, shortly after he’d learned that the Internet Archive had lost the appeal, he’s agitated—an internet prophet literally wandering around in the wilderness. He’s perched in front of jagged cliffs while hiking outside of Arles, France, a blue baseball cap pulled over his hair, cheeks extra-ruddy in the sun, his default affability tempered by a sense of despondency. He hadn’t known about the timing of the ruling in advance, so he interrupted a weeklong vacation with Mary to jump back into work crisis mode. “It’s just so depressing,” he says.

As he sits on a rock with his phone in his hand, Kahle says the US legal system is broken. He says he doesn’t think this is the end of the lawsuits. “I think the copyright cartel is on a roll,” he says. He frets that copycat cases could be on the way. He’s the most bummed-out guy I’ve ever seen on vacation in the south of France. But he’s also defiant. There’s no inkling of regret, only a renewed sense that what he’s doing is righteous. “We have such an opportunity here. It’s the dream of the internet,” he says. “It’s ours to lose.” It sounds less like a statement and more like a prayer.

I Stared Into the AI Void With the SocialAI App

SocialAI is an online universe where everyone you interact with is a bot—for better or worse.

The first time I used SocialAI, I was sure the app was performance art. That was the only logical explanation for why I would willingly sign up to have AI bots named Blaze Fury and Trollington Nefarious, well, troll me.

Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement this week of the app read a little like a generative AI joke: “A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.”

But, no, SocialAI is real, if “real” applies to an online universe in which every single person you interact with is a bot.

There’s only one real human in the SocialAI equation. That person is you. The new iOS app is designed to let you post text like you would on Twitter or Threads. An ellipsis appears almost as soon as you do so, indicating that another person is loading up with ammunition, getting ready to fire back. Then, instantaneously, several comments appear, cascading below your post, each and every one of them written by an AI character. In the new version of the app, just rolled out today, these AIs also talk to each other.

When you first sign up, you’re prompted to choose these AI character archetypes: Do you want to hear from Fans? Trolls? Skeptics? Odd-balls? Doomers? Visionaries? Nerds? Drama Queens? Liberals? Conservatives? Welcome to SocialAI, where Trollita Kafka, Vera D. Nothing, Sunshine Sparkle, Progressive Parker, Derek Dissent, and Professor Debaterson are here to prop you up or tell you why you’re wrong.

Screenshot of the instructions for setting up the Social AI app.

Is SocialAI appalling, an echo chamber taken to the extreme? Only if you ignore the truth of modern social media: Our feeds are already filled with bots, tuned by algorithms, and monetized with AI-driven ad systems. As real humans, we do the feeding: freely supplying social apps with fresh content, baiting trolls, buying stuff. In exchange, we’re amused, and occasionally feel a connection with friends and fans.

As notorious crank Neil Postman wrote in 1985, “Anyone who is even slightly familiar with the history of communications knows that every new technology for thinking involves a trade-off.” The trade-off for social media in the age of AI is a slice of our humanity. SocialAI just strips the experience down to pure artifice.

“With a lot of social media, you don’t know who the bot is and who the real person is. It’s hard to tell the difference,” Sayman says. “I just felt like creating a space where you’re able to know that they’re 100 percent AIs. It’s more freeing.”

You might say Sayman has a knack for apps. As a teenage coder in Miami, Florida, during the financial crisis, Sayman gained fame for building a suite of apps to support his family, who had been considering moving back to Peru. Sayman later ended up working in product jobs at Facebook, Google, and Roblox. SocialAI was launched from Sayman’s own venture-backed app studio, Friendly Apps.

In many ways his app is emblematic of design thinking rather than pure AI innovation. SocialAI isn’t really a social app, but ChatGPT in the container of a social broadcast app. It’s an attempt to redefine how we interact with generative AI. Instead of limiting your ChatGPT conversation to a one-to-one chat window, Sayman posits, why not get your answers from many bots, all at the same time?
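SocialAI’s code isn’t public, but the many-bots-one-post pattern Sayman describes is easy to picture. Here is a minimal, hypothetical sketch using OpenAI’s chat API, which Sayman says powers much of the app; the personas, prompts, and model choice are illustrative stand-ins, not his implementation.

```python
# Hypothetical sketch of fanning one post out to several AI personas via
# OpenAI's chat API. Personas and model are illustrative, not SocialAI's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "fan": "You are an adoring fan. Reply in one short, supportive sentence.",
    "troll": "You are a sarcastic troll. Reply in one short, mocking sentence.",
    "skeptic": "You are a skeptic. Reply with one short, doubting question.",
}

def replies(post: str) -> dict[str, str]:
    """Collect one comment per persona for a single user post."""
    out = {}
    for name, system_prompt in PERSONAS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": post},
            ],
        )
        out[name] = resp.choices[0].message.content
    return out

for persona, text in replies("Am I wrong to stay mad at my family?").items():
    print(f"{persona}: {text}")
```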

Over Zoom earlier this week, he explained that he thinks of generative AI as a smoothie in a world where cups haven’t yet been invented. You can still enjoy it from a bowl or a plate, but those aren’t the right vessel. SocialAI, Sayman says, could be the cup.

Almost immediately Sayman laughed. “This is a terrible analogy,” he said.

Sayman is charming and clearly thinks a lot about how apps fit into our world. He’s a team of one right now, relying mostly on OpenAI’s technology to power SocialAI, blended with some other custom AI models. (Sayman rate-limits the app so that he doesn’t go broke in “three minutes” from the fees he’s paying to OpenAI. He also hasn’t quite figured out how he’ll make money off of SocialAI.) He knows he’s not the first to launch an AI-character app; Meta has burdened its apps with AI characters, and the Character AI app, which was just quasi-acquired by Google, lets you interact with a huge number of AI personas.

But Sayman is hand-wavy about this competition. “I don’t see my app as, you’re going to be interacting with characters who you think might be real,” he says. “This is really for seeking answers to conflict resolution, or figuring out if what you’re trying to say is hurtful and get feedback before you post it somewhere else.”

“Someone joked to me that they thought Elon Musk should use this, so he could test all of his posts before he posts them on X,” Sayman said.

I’d actually tried that, tossing some of the most trafficked tweets from Elon Musk and the Twitter icon Dril into my SocialAI feed. I shared a news story from WIRED; the link was unclickable, because SocialAI doesn’t support link-sharing. (There’s no one to share it with, anyway.) I repurposed the viral “Bean Dad” tweet and purported to be a Bean Mom on SocialAI, urging my 9-year-old daughter to open a can of beans herself as a life lesson. I posted political content. I asked my synthetic SocialAI followers who else I should follow.

The bots obliged and flooded my feed with comments, like Reply Guys on steroids. But their responses lacked nutrients or human messiness. Mostly, I told Sayman, it all felt too uncanny, that I had a hard time crossing that chasm and placing value or meaning on what the bots had to say.

Sayman encouraged me to craft more posts along the lines of Reddit’s “Am I the Asshole” posts: Am I wrong in this situation? Should I apologize to a friend? Should I stay mad at my family forever? This, Sayman says, is the real purpose of SocialAI. I tried it. For a second the SocialAI bot comments lit up my lizard brain, my id and superego, the “I’m so right” instinct. Then Trollita Kafka told me, essentially, that I was in fact the asshole.

One aspect of SocialAI that clearly does not represent the dawn of a new era: Sayman has put out a minimum viable product without communicating important guidelines around privacy, content policies, or how SocialAI or OpenAI might use the data people provide along the way. (Move fast, break things, etc.) He says he’s not using anyone’s posts to train his own AI models, but notes that users are still subject to OpenAI’s data-training terms, since he uses OpenAI’s API. You also can’t mute or block a bot that has gone off the rails.

At least, though, your feed is always private by default. You don’t have any “real” followers. My editor at WIRED, for example, could join SocialAI himself but will never be able to follow me or see that I copied and pasted an Elon Musk tweet about wanting to buy Coca-Cola and put the cocaine back in it, just as he could not follow my ChatGPT account and see what I’m enquiring about there.

As a human on SocialAI, you will never interact with another human. That’s the whole point. It’s your own little world with your own army of AI characters ready to bolster you or tear you down. You may not like it, but it might be where you’re headed anyway. You might already be there.

Source: https://www.wired.com/story/socialai-app-ai-chatbots-chatgpt/

Microsoft’s Recall technology bears resemblance to George Orwell’s 1984 dystopia in several key aspects

Microsoft’s Recall technology, an AI tool designed to assist users by automatically reminding them of important information and tasks, bears resemblance to George Orwell’s „1984“ dystopia in several key aspects:

1. Surveillance and Data Collection:
– 1984: The Party constantly monitors citizens through telescreens and other surveillance methods, ensuring that every action, word, and even thought aligns with the Party’s ideology.
– Recall Technology: While intended for productivity, Recall collects and analyzes large amounts of personal data, emails, and other communications to provide reminders. This level of data collection can raise concerns about privacy and the potential for misuse or unauthorized access to personal information.

2. Memory and Thought Control:
– 1984: The Party manipulates historical records and uses propaganda to control citizens’ memories and perceptions of reality, essentially rewriting history to fit its narrative.
– Recall Technology: By determining what information is deemed important and what reminders to provide, Recall could influence users’ focus and priorities. This selective emphasis on certain data could subtly shape users’ perceptions and decisions, akin to a form of soft memory control.

3. Dependence on Technology:
– 1984: The populace is heavily reliant on the Party’s technology for information, entertainment, and even personal relationships, which are monitored and controlled by the state.
– Recall Technology: Users might become increasingly dependent on Recall to manage their schedules and information, potentially diminishing their own capacity to remember and prioritize tasks independently. This dependence can create a vulnerability where the technology has significant control over daily life.

4. Loss of Personal Autonomy:
– 1984: Individual autonomy is obliterated as the Party dictates all aspects of life, from public behavior to private thoughts.
– Recall Technology: Although not as extreme, the automation and AI-driven suggestions in Recall could erode personal decision-making over time. As users rely more on technology to dictate their actions and reminders, their sense of personal control and autonomy may diminish.

5. Potential for Abuse:
– 1984: The totalitarian regime abuses its power to maintain control over the population, using technology as a tool of oppression.
– Recall Technology: In a worst-case scenario, the data collected by Recall could be exploited by malicious actors or for unethical purposes. If misused by corporations or governments, it could lead to scenarios where users’ personal information is leveraged against them, echoing the coercive control seen in Orwell’s dystopia.

While Microsoft’s Recall technology is designed with productivity in mind, its potential implications for privacy, autonomy, and the influence over personal information draw unsettling parallels to the controlled and monitored society depicted in „1984.“

Why Elon Musk should consider integrating OpenAI’s ChatGPT „GPT-4o“ as the operating system for a brand new Tesla SUV – Here are the five biggest advantages to highlight

  1. Revolutionary User Interface and Experience:
    • Natural Language Interaction: GPT-4o’s advanced natural language processing capabilities allow for seamless, conversational interaction between the driver and the vehicle. This makes controlling the vehicle and accessing information more intuitive and user-friendly.
    • Personalized Experience: The AI can learn from individual driver behaviors and preferences, offering tailored suggestions for routes, entertainment, climate settings, and more, enhancing overall user satisfaction and engagement. 
  2. Enhanced Autonomous Driving and Safety:
    • Superior Decision-Making: GPT-4o can significantly enhance Tesla’s autonomous driving capabilities by processing and analyzing vast amounts of real-time data to make better driving decisions. This improves the safety, reliability, and efficiency of the vehicle’s self-driving features.
    • Proactive Safety Features: The AI can provide real-time monitoring of the vehicle’s surroundings and driver behavior, offering proactive alerts and interventions to prevent accidents and ensure passenger safety.
  3. Next-Level Infotainment and Connectivity:
    • Smart Infotainment System: With GPT-4o, the SUV’s infotainment system can offer highly intelligent and personalized content recommendations, including music, podcasts, audiobooks, and more, making long journeys more enjoyable.
    • Seamless Connectivity: The AI can integrate with a wide range of apps and services, enabling drivers to manage their schedules, communicate, and access information without distraction, thus enhancing productivity and convenience.
  4. Continuous Improvement and Future-Proofing:
    • Self-Learning Capabilities: GPT-4o continuously learns and adapts from user interactions and external data, ensuring that the vehicle’s performance and features improve over time. This results in an ever-evolving user experience that keeps getting better.
    • Over-the-Air Updates: Regular over-the-air updates from OpenAI ensure that the SUV remains at the forefront of technology, with the latest features, security enhancements, and improvements being seamlessly integrated.
  5. Market Differentiation and Brand Leadership:
    • Innovative Edge: Integrating GPT-4o positions Tesla’s new SUV as a cutting-edge vehicle, showcasing the latest in AI and automotive technology. This differentiates Tesla from competitors and strengthens its reputation as a leader in innovation.
    • Enhanced Customer Engagement: The unique AI-driven features and personalized experiences can drive stronger customer engagement and loyalty, attracting tech-savvy consumers and enhancing the overall brand image.

By leveraging these advantages, Tesla can create a groundbreaking SUV that not only meets but exceeds consumer expectations, setting new standards for the automotive industry and reinforcing Tesla’s position as a pioneer in automotive and AI technology.