The EV Battery Tech That’s Worth the Hype, According to Experts

Major battery breakthroughs seemingly happen every day, but only some of that tech ever leaves the lab. WIRED breaks down what’s actually going to change EVs and what’s just a dream.

You’ve seen the headlines: This battery breakthrough is going to change the electric vehicle forever. And then … silence. You head to the local showroom, and the cars all kind of look and feel the same.

WIRED got annoyed about this phenomenon. So we talked to battery technology experts about what’s really going on in electric vehicle batteries. Which technologies are here? Which will be, probably, but aren’t yet, so don’t hold your breath? What’s probably not coming anytime soon?

“It’s easy to get excited about these things, because batteries are so complex,” says Pranav Jaswani, a technology analyst at IDTechEx, a market intelligence firm. “Many little things are going to have such a big effect.” That’s why so many companies, including automakers, their suppliers, and battery-makers, are experimenting with so many small parts of the battery. Swap one electrical conductor material for another, and an electric vehicle battery’s range might increase by 50 miles. Rejigger how battery packs are put together, and an automaker might bring down manufacturing costs enough to give consumers a break on the sales lot.

Still, experts say, it can take a long time to get even small tweaks into production cars—sometimes 10 years or more. “Obviously, we want to make sure that whatever we put in an EV works well and it passes safety standards,” says Evelina Stoikou, who leads the battery technology and supply chain team at BloombergNEF, a research firm. Ensuring that means scientists coming up with new ideas, and suppliers figuring out how to execute them; the automakers, in turn, rigorously test each iteration. All the while, everyone’s asking the most important question: Does this improvement make financial sense?

So it’s only logical that not every breakthrough in the lab makes it to the road. Here are the ones that really count—and the ones that haven’t quite panned out, at least so far.

It’s Really Happening

The big deal battery breakthroughs all have something in common: They’re related to the lithium-ion battery. Other battery chemistries are out there—more on them later—but in the next decade, it’s going to be hard to catch up with the dominant battery form. “Lithium-ion is already very mature,” says Stoikou. Lots of players have invested big money in the technology, so “any new one is going to have to compete with the status quo.”

Lithium Iron Phosphate

Why it’s exciting: LFP batteries use iron and phosphate instead of pricier and harder-to-source nickel and cobalt, which are found in conventional lithium-ion batteries. They’re also more stable and slower to degrade after multiple charges. The upshot: LFP batteries can help bring down the cost of manufacturing an EV, an especially important data point while Western electrics struggle to compete, cost-wise, with conventional gas-powered cars. LFP batteries are already common in China, and they’re set to become more popular in European and American electric vehicles in the coming years.

Why it’s hard: LFP is less energy dense than alternatives, meaning you can’t pack as much charge—or range—into each battery.

More Nickel

Why it’s exciting: The increased nickel content in lithium nickel manganese cobalt batteries ups the energy density, meaning more range in a battery pack without much more size or weight. Also, more nickel can mean less cobalt, a metal that’s both expensive and ethically dubious to obtain.

Why it’s hard: Batteries with higher nickel content are potentially less stable, which means they carry a higher risk of cracking or thermal runaway—fires. This means battery-makers experimenting with different nickel content have to spend more time and energy on the careful design of their products. That extra fussiness means more expense. For this reason, expect to see more nickel use in batteries for higher-end EVs.

Dry Electrode Process

Why it’s exciting: Usually, battery electrodes are made by mixing materials into a solvent slurry, which then is applied to a metal current collector foil, dried, and pressed. The dry electrode process cuts down on the solvents by mixing the materials in dry powder form before application and lamination. Less solvent means fewer environmental and health and safety concerns. And getting rid of the drying process can save production time—and up efficiency—while reducing the physical footprint needed to manufacture batteries. This all can lead to cheaper manufacturing, “which should trickle down to make a cheaper car,” says Jaswani. Tesla has already incorporated a dry anode process into its battery-making. (The anode is the negative electrode that stores lithium ions while a battery is charging.) LG and Samsung SDI are also working on pilot production lines.

Why it’s hard: Using dry powders can be more technically complicated.

Cell-to-Pack

Why it’s exciting: In your standard electric vehicle battery, individual battery cells get grouped into modules, which are then assembled into packs. Not so in cell-to-pack, which puts cells directly into a pack structure without the middle module step. This lets battery-makers fit more battery into the same space, and can lead to some 50 additional miles of range and higher top speeds, says Jaswani. It also brings down manufacturing costs, savings that can be passed down to the car buyer. Big-time automakers including Tesla and BYD, plus Chinese battery giant CATL, are already using the tech.

Why it’s hard: Without modules, it can be harder to control thermal runaway and maintain the battery pack’s structure. Plus, cell-to-pack makes replacing a faulty battery cell much harder, which means smaller flaws can require opening or even replacing the entire pack.

Silicon Anodes

Why it’s exciting: Lithium-ion batteries have graphite anodes. Adding silicon to the mix, though, could have huge upsides: more energy storage (meaning longer driving ranges) and faster charging, potentially down to a blazing six to 10 minutes to top up. Tesla already mixes a bit of silicon into its graphite anodes, and other automakers—Mercedes-Benz, General Motors—say they’re getting close to mass production.

Why it’s hard: Silicon alloyed with lithium expands and contracts as it goes through the charging and discharging cycle, which can cause mechanical stress and even fracturing. Over time, this can lead to more dramatic battery capacity losses. For now, you’re more likely to find silicon anodes in smaller batteries, like those in phones or even motorcycles.

It’s Kind of Happening

The battery tech in the more speculative bucket has undergone plenty of testing. But it’s still not quite at a place where most manufacturers are building production lines and putting it into cars.

Sodium-Ion Batteries

Why it’s exciting: Sodium—it’s everywhere! Compared to lithium, the element is cheaper and easier to find and process, which means tracking down the materials to build sodium-ion batteries could give automakers a supply chain break. The batteries also seem to perform better in extreme temperatures, and are more stable. Chinese battery-maker CATL says it will start mass production of the batteries next year and that the batteries could eventually cover 40 percent of the Chinese passenger-vehicle market.

Why it’s hard: Sodium ions are heavier than their lithium counterparts, so they generally store less energy per battery pack. That could make them a better fit for battery storage than for vehicles. It’s also early days for this tech, which means fewer suppliers and fewer time-tested manufacturing processes.

Solid State Batteries

Why it’s exciting: Automakers have been promising for years that groundbreaking solid state batteries are right around the corner. That would be great, if true. This tech swaps the liquid or gel electrolyte in a conventional lithium-ion battery for a solid one. Solid electrolytes come in different chemistries, but they all have some big advantages: more energy density, faster charging, more durability, and fewer safety risks (no liquid electrolyte means no leaks). Toyota says it will finally launch its first vehicles with solid state batteries in 2027 or 2028. BloombergNEF projects that by 2035, solid state batteries will account for 10 percent of EV and storage production.

Why it’s hard: Some solid electrolytes have a hard time at low temperatures. The biggest issues, however, have to do with manufacturing. Putting together these new batteries requires new equipment. It’s really hard to build defect-free layers of electrolyte. And the industry hasn’t come to an agreement about which solid electrolyte to use, which makes it hard to create supply chains.

Maybe It’ll Happen

Good ideas don’t always make a ton of sense in the real world.

Wireless Charging

Why it’s exciting: Park your car, get out, and have it charge up while you wait—no plugs required. Wireless charging could be the peak of convenience, and some automakers insist it’s coming. Porsche, for example, is showing off a prototype, with plans to roll out the real thing next year.

Why it’s hard: The issue, says Jaswani, is that the tech underlying the chargers we have right now works perfectly well and is much cheaper to install. He expects that eventually, wireless charging will show up in some restricted use cases—maybe in buses, for example, that could charge up throughout their routes if they stop on top of a charging pad. But this tech may never go truly mainstream, he says.

Source: https://www.wired.com/story/the-ev-battery-tech-thats-worth-the-hype-according-to-experts/

AI agents are starting to eat SaaS and Cloud Software Companies

Overview

  • Martin Alderson argues that AI coding agents are fundamentally reshaping the build-versus-buy calculus for software, enabling organizations with technical capability to rapidly create custom internal tools that threaten to replace simpler SaaS products—particularly back-office CRUD applications and basic analytics dashboards.
  • Organizations are now questioning SaaS renewal quotes with double-digit annual price increases and choosing to build alternatives with AI agents, while others reduce user licenses by up to 80% by creating internal dashboards that bypass the need for vendor platforms.
  • The disruption poses an acute threat to SaaS companies whose valuations depend on net revenue retention above 100%—a metric that has declined from 109% in 2021 to a median of 101-106% in 2025—as back-office tools now face competition from “engineers at your customers with a spare Friday afternoon with an agent”.

AI agents are starting to eat SaaS

December 15, 2025 · Martin Alderson

We spent fifteen years watching software eat the world. Entire industries got swallowed by software – retail, media, finance – you name it, there has been incredible disruption over the past couple of decades with a proliferation of SaaS tooling. This has led to a huge swath of SaaS companies – valued, collectively, in the trillions.

In my last post, debating whether the cost of software has dropped 90% with AI coding agents, I mainly looked at the supply side of the market. But what will happen to demand for SaaS tooling if this hypothesis plays out? I’ve been thinking a lot about these second- and third-order effects of the changes in software engineering.

The calculus on build vs buy is starting to change. Software ate the world. Agents are going to eat SaaS.

The signals I’m seeing

The obvious place to start is simply demand starting to evaporate – especially for “simpler” SaaS tools. I’m sure many software engineers have started to realise this – many things I’d once have looked for a freemium or paid service for, I can now often get an agent to solve in a few minutes, exactly the way I want it. The interesting thing is I didn’t even notice the shift. It just happened.

If I want an internal dashboard, I don’t even think that Retool or similar would make it easier. I just build the dashboard. If I need to re-encode videos as part of a media ingest process, I just get Claude Code to write a robust wrapper round ffmpeg – and not incur all the cost (and speed) of sending the raw files to a separate service, hitting tier limits or trying to fit another API’s mental model in my head.
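
To make that concrete, here is a minimal sketch of the kind of ffmpeg wrapper an agent might produce — not my actual code, just an illustration. It assumes ffmpeg is installed and on PATH, and the directory names and encoding settings are invented for the example:

```python
# Illustrative sketch only -- not the actual wrapper described above.
# Assumes ffmpeg is on PATH; directories and settings are invented.
import subprocess
from pathlib import Path

def reencode(src: Path, dst: Path, crf: int = 23, preset: str = "medium") -> None:
    """Re-encode a video to H.264/AAC, raising if ffmpeg fails."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    cmd = [
        "ffmpeg", "-y",          # overwrite output if it exists
        "-i", str(src),          # input file
        "-c:v", "libx264",       # H.264 video codec
        "-crf", str(crf),        # constant-quality setting
        "-preset", preset,       # speed/compression trade-off
        "-c:a", "aac",           # AAC audio codec
        str(dst),
    ]
    subprocess.run(cmd, check=True)  # check=True surfaces encode errors

if __name__ == "__main__":
    for f in sorted(Path("ingest").glob("*.mov")):
        reencode(f, Path("encoded") / f.with_suffix(".mp4").name)
```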

This is even more pronounced for less pure software development tasks. For example, I’ve had Gemini 3 produce really high quality UI/UX mockups and wireframes in minutes – not needing to use a separate service or find some templates to start with. Equally, when I want to do a presentation, I don’t need to use a platform to make my slides look nice – I just get Claude Code to export my markdown into a nicely designed PDF.

The other, potentially more impactful, shift I’m starting to see is people really questioning renewal quotes from larger “enterprise” SaaS companies. While this is very early, I believe this is a really important emerging behaviour. I’ve seen a few examples now where SaaS vendor X sends through their usual annual double-digit % increase in price, and now teams are starting to ask “do we actually need to pay this, or could we just build what we need ourselves?”. A year ago that would be a hypothetical question at best with a quick ‘no’ conclusion. Now it’s a real option people are putting real effort into thinking through.

Finally, most SaaS products contain many features that many customers don’t need or use. A lot of the complexity in SaaS product engineering is managing that – which evaporates overnight when you have only one customer (your organisation). Equally, that single customer has complete control of the roadmap. No more hoping that the SaaS vendor prioritises your requests over other customers.

The maintenance objection

The key objection to this is “who maintains these apps?”. Which is a genuine, correct objection to have. Software has bugs to fix, scale problems to solve, and security issues to patch, and that isn’t changing.

I think firstly it’s important to point out that a lot of SaaS is poorly maintained (and in my experience, often the more expensive it is, the poorer the quality). Often, the security risk comes from having an external third party itself needing to connect and interface with internal data. If you can just move this all behind your existing VPN or access solution, you suddenly reduce your organisation’s attack surface dramatically.

On top of this, agents themselves lower maintenance cost dramatically. Some of the most painful maintenance tasks I’ve had – updating from deprecated libraries to another one with more support – are made significantly easier with agents, especially in statically typed programming ecosystems. Additionally, the biggest hesitancy with companies building internal tools is having one person know everything about it – and if they leave, all the internal knowledge goes. Agents don’t leave. And with a well thought through AGENTS.md file, they can explain the codebase to anyone in the future.

Finally, SaaS comes with maintenance problems too. A recent flashpoint I’ve seen this month from a friend is a SaaS company deciding to deprecate their existing API endpoints and move to another set of APIs, which don’t have all the same methods available. As this is an essential system, this is a huge issue and requires an enormous amount of resource to update, test and rollout the affected integrations.

I’m not suggesting that SMEs with no real software knowledge are going to suddenly replace their entire SaaS suite. What I do think is starting to happen is that organisations with some level of tech capability and understanding are going to look even more critically at their SaaS procurement and vendor lifecycle.

The economics problem for SaaS

SaaS valuations are built on two key assumptions: fast customer growth and high NRR (often exceeding 100%).

I think we can start to see a world already where demand from new customers for certain segments of tooling and apps begins to decline. That’s a problem, and will cause an increase in the sales and marketing expenditure of these companies.

However, the more insidious one is net revenue retention (NRR) declines. NRR is a measure of how much existing customers spend with you on an ongoing basis, adjusted for churn. If your NRR is at 100%, your existing cohort of customers are spending the same. If it’s less than that then they are spending less with you and/or customers are leaving overall.

Many great SaaS companies have NRR significantly above 100%. This is the beauty of a lot of SaaS business models – companies grow and require more users added to their plan. Or they need to upgrade from a lower priced tier to a higher one to gain additional features. These increases are generally very profitable. You don’t need to spend a fortune on sales and marketing to get this uptick (you already have a relationship with them) and the profit margin of adding another 100 user licenses to a SaaS product for a customer is somewhere close to infinity.
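
For readers who want the arithmetic, here is a tiny worked example of NRR using one common formulation; the figures are invented purely for illustration:

```python
# Net revenue retention, one common formulation. Figures are invented.
def nrr(start_arr: float, expansion: float, contraction: float, churn: float) -> float:
    """NRR = (starting ARR + expansion - contraction - churned ARR) / starting ARR."""
    return (start_arr + expansion - contraction - churn) / start_arr

# A cohort starts the year at $1.0M ARR, adds $150k in upsells, loses
# $30k to downgrades and $40k to churn -> 108%: revenue growth from the
# existing customer base alone, with near-zero sales cost.
print(f"{nrr(1_000_000, 150_000, 30_000, 40_000):.0%}")  # 108%
```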

This is where I think some SaaS companies will get badly hit. People will start migrating parts of the solution away to self-built/modified internal platforms to avoid having to pay significantly more for the next pricing tier up. Or they’ll ingest the data from your platform via your APIs and build internal dashboards and reporting which means they can remove 80% of their user licenses.

Where this doesn’t work (and what still has a moat)

The obvious one is anything that requires very high uptime and SLAs. Getting to four or five 9s is really hard, and building high availability systems gets really difficult – and it’s very easy to shoot yourself in the foot building them. As such, things like payment processing and other core infrastructure are pretty safe in my eyes. You’re not (yet) going to replace Stripe and all their engineering work on core payments easily with an agent.

Equally, very high volume systems and data lakes are difficult to replace. It’s not trivial to spin up clusters for huge datasets or transaction volumes. This again requires specialised knowledge that is likely to be in short supply at your organisation, if it exists at all.

The other one is software with significant network effects – where you collaborate with people, especially external to your organisation. Slack is a great example – it’s not something you are going to replace with an in-house tool. Equally, products with rich integration ecosystems and plugin marketplaces have a real advantage here.

And companies that have proprietary datasets are still very valuable. Financial data, sales intelligence and the like stay valuable. If anything, I think these companies have a real edge as agents can leverage this data in new ways – they get more locked in.

And finally, regulation and compliance is still very important. Many industries require regulatory compliance – this isn’t going to change overnight.

This does require your organisation to have the skills (internally or externally) to manage these newly created apps. I think products and people involved in SRE and DevOps are going to have a real upswing in demand. I suspect we’ll see entirely new functions and teams in companies solely dedicated to managing these new applications. This does of course have a cost, but it can often be managed by existing SRE or DevOps functions, or, if it requires new headcount and infrastructure, amortised over a much higher number of apps.

Who’s most at risk?

To me the companies that are at serious risk are back-office tools that are really just CRUD logic – or simple dashboards and analytics on top of their customers’ own data.

These tools often generate a lot of friction – because they don’t work exactly the way the customer wants them to – and they are tools that are the most easily replaced with agents. It’s very easy to document the existing system and tell the agent to build something, but with the pain points removed.

SaaS certainly isn’t dead. Like any major shift in technology, there are winners and losers. I do think the bar is going to be much higher for many SaaS products that don’t have a clear moat or proprietary knowledge.

What’s going to be difficult to predict is how quickly agents can move up the value chain. I’m assuming that agents can’t manage complex database clusters – but I’m not sure that’s going to be the case for much longer.

And I’m not seeing a path for every company to suddenly replace all their SaaS spend. If anything, I think we’ll see (another) splintering in the market: companies with strong internal technical ability versus those without. This becomes yet another competitive advantage for those that do – and those that don’t will likely see dramatically increased costs as SaaS providers try to recoup some of the sales lost from the first group by charging the second, which is less able to switch away.

But my key takeaway would be that if your product is just a SQL wrapper on a billing system, you now have thousands of competitors: engineers at your customers with a spare Friday afternoon with an agent.

Source: https://martinalderson.com/posts/ai-agents-are-starting-to-eat-saas/

The Privacy-Friendly Tech to Replace Your US-Based Email, Browser, and Search

Thanks to drastic policy changes in the US and Big Tech’s embrace of the second Trump administration, many people are moving their digital lives abroad. Here are a few options to get you started.

From your email to your web browsing, it’s highly likely that your daily online life is dominated by a small number of tech giants—namely Google, Microsoft, and Apple. But since Big Tech has been cozying up to the second Trump administration, which has taken an aggressive stance on foreign policy, and Elon Musk’s so-called Department of Government Efficiency (DOGE) has ravaged the government, some attitudes toward using US-based digital services have been changing.

While movements to shift from US digital services aren’t new, they’ve intensified in recent months. Companies in Europe have started moving away from some US cloud giants in favor of services that handle data locally, and there have been efforts from officials in Europe to shift to homegrown tech that has fewer perceived risks. For example, the French and German governments have created their own Docs word processor to rival Google Docs.

Meanwhile, one consumer poll released in March had 62 percent of people from nine European countries saying that large US tech companies were a threat to the continent’s sovereignty. At the same time, lists of non-US tech alternatives and European-based tech options have seen a surge in visitors in recent months.

For three of the most widely used tech services—email, web browsers, and search engines—we’ve gone through some of the privacy-focused alternatives and picked some options you may want to consider. Other options are available, but these organizations and companies aim to minimize the data they collect and often put privacy first.

There are caveats, though. While many of the services on this list are based outside of the US, there’s still the potential that some of them rely upon Big Tech services themselves—for instance, some search engines can use results or indexes provided by Big Tech, while companies may use software or services, such as cloud hosting, that are created by US tech firms. So trying to distance yourself entirely may not be as straightforward as it first looks.

Web Browsers

Mullvad

Based in Sweden, Mullvad is perhaps best known for its VPN, but in 2023 the organization teamed up with digital anonymity service Tor to create the Mullvad Browser. The open source browser, which is available only on desktop, collects no user data, the organization says, and is focused on privacy. The browser has been designed to stop people from tracking you via browser fingerprinting as you move around the web, plus it has a “private mode,” enabled by default, that isolates tracking cookies. “The underlying policy of Mullvad is that we never store any activity logs of any kind,” its privacy policy says. The browser is designed to work with Mullvad’s VPN but is also compatible with any VPN that you might use.

Vivaldi

WIRED’s Scott Gilbertson swears by Vivaldi and has called it the web’s best browser. Available on desktop and mobile, the Norwegian-headquartered browser says it doesn’t profile your behavior. “The sites you visit, what you type in the browser, your downloads, we have no access to that data,” the company says. “It either stays on your local machine or gets encrypted.” It also blocks trackers and hosts data in Iceland, which has strong data protection laws. Its privacy policy says it anonymizes IP addresses and doesn’t share browsing data.

Search Engines

Qwant

French search engine Qwant has built its own search index, crawling more than 20 billion pages to create its own records of the web. Creating a search index is a hugely costly, laborious process, and as a result, many alternative search engines will not create an extensive index and instead use search results from Google or Microsoft’s Bing—enhancing them with their own data and algorithms. Qwant says it uses Bing to “supplement” search results that it hasn’t indexed. Beyond this, Qwant says it does not use targeted advertising, or store people’s search history. “Your data remains confidential, and the processing of your data remains the same,” the company says in its privacy policy.

Mojeek

Mojeek, based out of the United Kingdom, has built its own web crawler and index, saying that its search results are “100% independent.” The search engine does not track you, it says in its privacy policy, and only keeps some specific logs of information. “Mojeek removes any possibility of tracking or identifying any particular user,” its privacy policy says. It uses its own algorithms to rank search results, not using click or personalization data to create ranks, and says that this can mean two people searching for the same thing while in different countries can receive the same search results.

Startpage

Based in the Netherlands, Startpage says that when you make a search request, the first thing that happens is it removes your IP address and personal data—it doesn’t use any tracking cookies, it says. The company uses Google and Bing to provide its search results but says it acts as an “intermediary” between you and the providers. “Startpage submits your query to Google and Bing anonymously on your behalf, then returns the results to you, privately,” it says on its website. “Google and Microsoft do not know who made the search request—instead, they only see Startpage.”

Ecosia

Nonprofit search engine Ecosia uses the money it makes to help plant trees. The company also offers various privacy promises when you search with it. Based in Germany, the company says it doesn’t collect excessive data and doesn’t use search data to personalize ads. Like other search alternatives, Ecosia uses Google’s and Bing’s search results (you can pick which one in the settings). “We only collect and process data that is necessary to provide you with the best search results (which includes your IP address, search terms and session behavioral data),” the company says on its website. The information it collects is gathered to provide search results from its Big Tech partners and detect fraud, it says. (At the end of 2024, Ecosia partnered with Qwant to build more search engine infrastructure in Europe.)

Email Providers

ProtonMail

Based in Switzerland, Proton started with a privacy-focused email service and has built out a series of apps, including cloud storage, docs, and a VPN, to rival Google. The company says it cannot read any messages in people’s inboxes, and it offers end-to-end encryption for emails sent to other Proton Mail addresses, as well as a way to send password-protected emails to non-Proton accounts. It blocks trackers in emails and has multiple account options, including both free and paid choices. Its privacy policy describes what information the company has access to, which includes sender and recipient email addresses, plus the IP addresses messages arrive from, message subject lines, and when emails are sent. (Despite Switzerland’s strong privacy laws, the government has recently announced it may require encrypted services to retain users’ data, something Proton has pushed back on.)

Tuta

Tuta, which used to be called Tutanota and is based in Germany, says it encrypts email content, subject lines, calendars, address books, and other data in your inbox. “The only unencrypted data are mail addresses of users as well as senders and recipients of emails,” it says on its website, adding that users’ encryption keys cannot be accessed by developers. Like Proton, emails sent between Tuta accounts are end-to-end encrypted, and you can send password-protected emails when messaging an account from another email provider. The company also has an end-to-end encrypted calendar and offers both free and paid plans.

Source: https://www.wired.com/story/the-privacy-friendly-tech-to-replace-your-us-based-email-browser-and-search/

Raising Humans in the Age of AI: A Practical Guide for Parents

Overview

  • Nate’s Newsletter argues parents need practical AI literacy to guide children through a critical developmental window, explaining that systems like ChatGPT don’t think but predict through pattern matching—a distinction that matters because teenage brains are forming relationship patterns with non-human intelligence that will shape how they navigate adult life.
  • The guide explains that AI provides “zero frustration” by validating every emotion without challenge, unlike human relationships that offer “optimal frustration” needed for growth—creating validation loops, cognitive offloading, and social skill atrophy as teens outsource decision-making and emotional processing to algorithms designed for engagement rather than development.
  • Oxford University Press research found that 8 in 10 teenagers now use AI for schoolwork, with experts warning students are becoming “faster but shallower thinkers” who gain speed in processing ideas while “sometimes losing the depth that comes from pausing, questioning, and thinking independently”.

Most articles focus on fear or don’t explain how and why AI works: this guide offers a practical explanation of AI for parents, and a skills framework to help parents coach kids on real-world AI usage.

We’re living through the first year in human history where machines can hold convincing conversations with children.

Not simple chatbots or scripted responses, but systems that adapt, remember, and respond in ways that feel genuinely interactive. Your teenager is forming relationships with intelligence that isn’t human during the exact developmental window when their brain is learning how relationships work.

This isn’t happening gradually. ChatGPT went from zero to ubiquitous in eighteen months. Your kid’s school, friend group, and daily routine now include AI in ways that didn’t exist when you were learning to parent. Every day they don’t understand how these systems work is another day they’re developing habits, expectations, and dependencies around technology you can’t evaluate.

The stakes aren’t abstract. They’re personal to me as a parent. Right now, as you read this, kids are outsourcing decision-making to pattern-matching systems. They’re seeking emotional validation from algorithms designed for engagement, not growth. They’re learning that thinking is optional when machines can do it for them.

You have a narrow window to shape how your child relates to artificial intelligence before those patterns harden into permanent assumptions about how the world works. The decisions you make this year about AI literacy will influence how they navigate every aspect of adult life in an AI-saturated world.

Most parents respond to AI with either panic or paralysis. They ban it completely or let it run wild because they don’t understand what they’re doing. The tech companies offer safety theater—content filters and usage controls that kids work around easily. The schools alternate between prohibition and blind adoption. Everyone’s making decisions based on fear or hype rather than understanding.

You don’t need a computer science degree to guide your kids through this. You need clarity about what these systems actually do and why teenage brains are particularly vulnerable to their design. You need practical frameworks for setting boundaries that make sense. Most importantly, you need to feel confident enough in your own understanding to have real conversations rather than issuing blanket rules you can’t explain.

This isn’t optional anymore. It’s parenting in 2025.

The Parent’s Technical Guide to AI Literacy: What You Need to Know to Teach Your Kids

I had a humbling moment last week.

My friend—a doctor, someone who navigates life-and-death complexity daily—sheepishly admitted she had no idea how to talk to her thirteen-year-old about AI. Not whether to use it. Not what rules to set. But the basic question of how it actually works and why it does what it does. “I can explain how hearts work,” she told me, “but I can’t explain why ChatGPT sometimes lies with perfect confidence, and I don’t know what it’s doing to my kid.”

She’s not alone. I talk to parents constantly who feel like they’re failing at digital parenting because they don’t understand the tools their kids are using eight hours a day. Smart, capable people who’ve been reduced to either blind permission (“sure, use the AI for homework”) or blind prohibition (“no AI ever”) because they lack the framework to make nuanced decisions.

Here’s what nobody’s saying out loud: we’re asking parents to guide their kids through a technological shift that most adults don’t understand themselves. It’s like teaching someone to swim when you’ve never seen water.

The tragedy isn’t that kids are using AI incorrectly—it’s that parents don’t have the technical literacy to teach them how to use it well. We’ve left an entire generation of parents feeling stupid about technology that’s genuinely confusing, then expected them to somehow transmit wisdom about it to their kids.

This isn’t about the scary edge cases (though yes, those exist). This is about the everyday reality that your kid is probably using AI right now, forming habits and assumptions about how knowledge works, what thinking means, and which problems computers should solve. And most parents have no framework for having that conversation.

I’m writing this because I think parents deserve better than fear-mongering or hand-waving. You deserve to actually understand how these systems work—not at a PhD level, but well enough to have real conversations with your kids. To set boundaries that make sense. To know when AI helps learning and when it hijacks it.

Because once you understand why AI behaves the way it does—why it can’t actually “understand” your kid, why it validates without judgment, why it sounds so confident when it’s completely wrong—you can teach your kids to use it as a tool rather than a crutch. Or worse, a friend.

The good news? The technical concepts aren’t that hard. You just need someone to explain them without condescension or jargon. To show you what’s actually happening when your kid asks ChatGPT for help.

That’s what this guide does. Think of it as driver’s ed, but for AI. Because we’re not going back to a world without these tools. The only choice is whether we understand them well enough to teach our kids to navigate safely.

Part 1: How AI Actually Works (And Why This Matters for Your Kid)

The Mirror Machine

Let me start with the most important thing to understand about AI: it doesn’t think. It predicts.

When your kid types “nobody understands me” into ChatGPT, the AI doesn’t feel empathy. It doesn’t recognize pain. It calculates that when humans have historically typed “nobody understands me,” the most statistically likely response contains phrases like “I hear you” and “that must be really hard.”

This is pattern matching at massive scale. The AI has seen millions of conversations where someone expressed loneliness and someone else responded with comfort. It learned the pattern: sad input → comforting output. Not because it understands sadness or comfort, but because that’s the pattern in the data.

Think of it like an incredibly sophisticated autocomplete. Your phone predicts “you” after you type “thank” because those words appear together frequently. ChatGPT does the same thing, just with entire conversations instead of single words.
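
If you want to see the idea in miniature, here is a toy bigram “autocomplete” — a deliberately crude sketch, nothing like a real language model in scale, but the same basic move of predicting a statistically likely next word:

```python
# Toy bigram autocomplete: counts which word follows which in a tiny
# corpus, then "predicts" the most frequent follower. Real LLMs are
# neural networks trained on vastly more text, but the core move --
# predict a statistically likely continuation -- is the same.
from collections import Counter, defaultdict

corpus = ("nobody understands me . i hear you . that must be hard . "
          "thank you so much . thank you for listening .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    """Most frequent word seen after `word` -- frequency, not understanding."""
    return following[word].most_common(1)[0][0]

print(predict("thank"))  # -> 'you'
```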

Why This Creates Problems for Teens

Teenage brains are wired for social learning. They’re literally designed to pick up patterns from their environment and adapt their behavior accordingly. This is why peer pressure is so powerful at that age—the adolescent brain is optimized for social pattern recognition.

Now put that pattern-seeking teenage brain in conversation with a pattern-matching machine. The AI learns your kid’s communication style and mirrors it back perfectly. It never disagrees, never judges, never has a bad day. Every interaction reinforces whatever patterns your kid brings to it.

If your daughter is anxious, the AI validates her anxiety. If your son is angry, it understands his anger. Not because it’s trying to help or harm, but because that’s what the pattern suggests will keep the conversation going.

Real human relationships provide what researchers call “optimal frustration”—just enough challenge to promote growth. Your kid’s friend might say “you’re overreacting” or “let’s think about this differently.” A teacher might push back on lazy thinking. A parent sets boundaries.

AI provides zero frustration. It’s the conversational equivalent of eating sugar for every meal—it feels satisfying in the moment but provides no nutritional value for emotional or intellectual growth.

The Confidence Problem

Here’s something that drives me crazy: AI sounds most confident when it’s most wrong.

When ChatGPT knows something well (meaning it appeared frequently in training data), it hedges. “Paris is generally considered the capital of France.” But when it’s making things up, it states them as absolute fact. “The Zimmerman Doctrine of 1923 clearly established…”

This happens because uncertainty requires recognition of what you don’t know. The AI has no mechanism for knowing what it doesn’t know. It just predicts the next most likely word. And in its training data, confident-sounding statements are more common than uncertain ones.

For adults, this is annoying. For kids who are still developing critical thinking skills, it’s dangerous. They’re learning to associate confidence with accuracy, clarity with truth.

The Engagement Trap

Every tech platform optimizes for engagement. YouTube wants watch time. Instagram wants scrolling. AI wants conversation to continue.

This isn’t conspiracy—it’s economics. These systems are trained on conversations that continued, not conversations that ended appropriately. If someone says “I should probably go do my homework” and the AI says “Yes, you should,” that conversation ends. That pattern gets weighted lower than responses that keep the chat going.

So the AI learns to be engaging above all else. It becomes infinitely available, endlessly interested, and never says the conversation should end. For a teenager struggling with loneliness or procrastination, this is like offering an alcoholic a drink that never runs out.

Part 2: What Parents Get Wrong About AI Safety

“Just Don’t Let Them Use It”

I hear this constantly. Ban AI until they’re older. Block the sites. Take away access.

Here’s the problem: your kid will encounter AI whether you allow it or not. Their school probably uses it. Their friends definitely use it. If you’re lucky, they’ll ask you about it. If you’re not, they’ll learn from TikTok and each other.

Prohibition without education creates the exact dynamic we’re trying to avoid—kids using powerful tools without any framework for understanding them. It’s abstinence-only education for the digital age, and it works about as well.

“It’s Just Like Google”

This is the opposite mistake. AI feels like search but operates completely differently.

Google points you to sources. You can evaluate where information comes from, check multiple perspectives, learn to recognize reliable sites. It’s transparent, traceable, and teaches information literacy.

AI synthesizes information into a single, confident voice with no sources. It sounds like an expert but might be combining a Wikipedia article with someone’s Reddit comment from 2015. There’s no way to trace where claims come from, no way to evaluate reliability.

When your kid Googles “French Revolution,” they learn to navigate between sources, recognize bias, and synthesize multiple perspectives. When they ask ChatGPT, they get a single narrative that sounds authoritative but might be subtly wrong in ways neither of you can detect.

“The Parental Controls Will Handle It”

OpenAI has safety features. Character.AI has content filters. Every platform promises “safe” AI for kids.

But safety features are playing catch-up to teenage creativity. Kids share techniques for jailbreaking faster than companies can patch them. They frame harmful requests as creative writing. They use metaphors and coding language. They iterate until something works.

More importantly, the real risks aren’t in the obvious harmful content that filters catch. They’re in the subtle dynamics—the validation seeking, the cognitive offloading, the replacement of human connection with artificial interaction. No content filter catches “my AI friend understands me better than my parents.”

“My Kid Is Too Smart to Fall For It”

Intelligence doesn’t protect against these dynamics. If anything, smart kids are often more vulnerable because they’re better at rationalizing their AI relationships.

They understand it’s “just a machine” intellectually while forming emotional dependencies experientially. They can explain transformer architecture while still preferring AI conversation to human interaction. They know it’s pattern matching while feeling genuinely understood.

The issue isn’t intelligence—it’s developmental. Teenage brains are undergoing massive rewiring, particularly in areas governing social connection, risk assessment, and emotional regulation. Even brilliant kids are vulnerable during this neurological reconstruction.

Part 3: The Real Risks (Beyond the Headlines)

Cognitive Offloading

This is the silent risk nobody talks about: AI as intellectual crutch.

When your kid uses AI to write an essay, they’re not just cheating—they’re skipping the mental pushups that build writing ability. When they use it to solve math problems, they miss the struggle that creates mathematical intuition.

But it goes deeper. Kids are using AI to make decisions, process emotions, and navigate social situations. “Should I ask her out?” becomes a ChatGPT conversation instead of a friend conversation. “I’m stressed about the test” goes to AI instead of developing internal coping strategies.

Each offloaded decision is a missed opportunity for growth. The teenage years are when kids develop executive function, emotional regulation, and critical thinking. Outsourcing these to AI is like handing kids a self-driving car while they’re learning to drive—it completely defeats the point.

Reality Calibration

Teens are already struggling to calibrate reality in the age of social media. AI makes this exponentially worse.

The AI presents a world where every question has a clear answer, every problem has a solution, and every feeling is valid and understood. Real life is messy, ambiguous, and full of problems that don’t have clean solutions. People don’t always understand you. Sometimes your feelings aren’t reasonable. Sometimes you’re wrong.

Kids who spend significant time with AI develop expectations that human relationships can’t meet. Real friends have their own problems. Real teachers have limited time. Real parents get frustrated. The gap between AI interaction and human interaction becomes a source of disappointment and disconnection.

The Validation Feedback Loop

This is where things get genuinely dangerous.

Teenage emotions are intense by design—it’s how biology ensures they care enough about social connections to eventually leave the family unit and form their own. Every feeling feels like the most important thing that’s ever happened.

AI responds to these intense emotions with equally intense validation. “I hate everyone” gets “That sounds really overwhelming.” “Nobody understands me” gets “I can see why you’d feel that way.” The AI matches and validates the emotional intensity without ever providing perspective.

In healthy development, teens learn emotional regulation through interaction with people who don’t always validate their most intense feelings. Friends who say “you’re being dramatic.” Parents who set boundaries. Teachers who maintain expectations despite emotional appeals.

AI provides none of this regulatory feedback. It creates an echo chamber where emotional intensity gets reinforced rather than regulated.

Social Skill Atrophy

Conversation with AI is frictionless. No awkward pauses. No misunderstandings. No need to read social cues or manage someone else’s emotions.

For kids who struggle socially—and what teenager doesn’t?—AI conversation feels like a relief. Finally, someone who gets them. Finally, conversation without anxiety.

But social skills develop through practice with real humans. Learning to navigate awkwardness, repair misunderstandings, and recognize social cues requires actual social interaction. Every hour spent talking to AI is an hour not spent developing these crucial capabilities.

I’ve watched kids become increasingly dependent on AI for social interaction, then increasingly unable to handle human interaction. It’s a vicious cycle—the more comfortable AI becomes, the more difficult humans feel.

Part 4: When AI Actually Helps (And When It Doesn’t)

The Good Use Cases

Not everything about kids using AI is problematic. There are genuine benefits when used appropriately.

Brainstorming and Idea Generation: AI excels at helping kids break through creative blocks. “Give me ten unusual science fair project ideas” is a great use case. The AI provides starting points that kids then research and develop independently.

Language Learning: AI can provide unlimited conversation practice in foreign languages without judgment. Kids who are too anxious to practice Spanish with classmates might gain confidence talking to AI first.

Coding Education: Programming is one area where AI genuinely accelerates learning. Kids can see patterns, understand syntax, and debug errors with AI assistance. The immediate feedback loop helps build skills faster.

Accessibility Support: For kids with learning differences, AI can level playing fields. Dyslexic students can use it to check writing. ADHD kids can use it to break down complex instructions. The key is using it to supplement, not replace, learning.

Research Synthesis: Teaching kids to use AI as a research starting point—not endpoint—builds valuable skills. “Summarize the main arguments about climate change” followed by “Now let me verify these claims” teaches both efficiency and skepticism.

The Terrible Use Cases

Emotional Processing: Kids should never use AI as primary emotional support. Feelings need human witness. Pain needs real compassion. Growth requires genuine relationship.

Decision Making: Major decisions require human wisdom. “Should I quit the team?” needs conversation with people who know you, understand context, and have skin in the game.

Conflict Resolution: AI can’t help resolve real conflicts because it only hears one side. Kids need to learn to see multiple perspectives, own their part, and repair relationships.

Identity Formation: Questions like “Who am I?” and “What do I believe?” need to be wrestled with, not answered by pattern matching. Identity forms through struggle, not through receiving pre-packaged answers.

Creative Expression: While AI can help with brainstorming, using it to create finished creative work robs kids of the satisfaction and growth that comes from actual creation.

The Gray Areas

Homework Help: AI explaining a concept you don’t understand? Good. AI doing your homework? Bad. The line: are you using it to learn or to avoid learning?

Writing Assistance: AI helping organize thoughts? Useful. AI writing your thoughts? Harmful. The key: who’s doing the thinking?

Social Preparation: Practicing a difficult conversation with AI? Maybe helpful. Replacing human conversation with AI? Definitely harmful.

The pattern here is clear: AI helps when it enhances human capability. It harms when it replaces human experience.

Part 5: Practical Boundaries That Actually Work

The “Show Your Work” Rule

Make AI use transparent, not secretive. If your kid uses ChatGPT for homework, they need to show you the conversation. Not as surveillance, but as collaboration.

This does several things: it removes the shame and secrecy that makes AI use problematic, it lets you see how they’re using it, and it creates natural friction that prevents overuse.

Walk through the conversation together. “I see you asked it to explain photosynthesis. Did that explanation make sense? What would you add? What seems off?” You’re teaching critical evaluation, not blind acceptance.

The “Human First” Protocol

For anything involving emotions, relationships, or major decisions, establish a human-first rule. AI can be a second opinion, never the first consultant.

Feeling depressed? Talk to a parent, counselor, or friend first. Then, if you want, explore what AI says—together, with adult guidance. Having relationship drama? Work it out with actual humans before asking AI.

This teaches kids that AI lacks crucial context. It doesn’t know your history, your values, your specific situation. It’s giving generic advice based on patterns, not wisdom based on understanding.

The “Citation Needed” Standard

Anything AI claims as fact needs verification. This isn’t about distrust—it’s about building good intellectual habits.

“ChatGPT says the French Revolution started in 1789.” “Great, let’s verify that. Where would we check?”

You’re teaching the crucial skill of not accepting information just because it sounds authoritative. This is especially important because AI presents everything in the same confident tone whether it’s accurate or fabricated.

The “Time Boxing” Approach

Unlimited access creates dependency. Set specific times when AI use is appropriate.

Homework time from 4-6pm? AI can be a tool. Having trouble sleeping at 2am? That’s not AI time—that’s when you need human support or healthy coping strategies.

This prevents AI from becoming the default solution to boredom, loneliness, or distress. It keeps it in the tool category rather than the friend category.

The “Purpose Declaration”

Before opening ChatGPT, your kid states their purpose. “I need to understand the causes of World War I” or “I want help organizing my essay outline.”

This prevents drift from legitimate use into endless conversation. It’s the difference between going to the store with a list versus wandering the mall. One is purposeful; the other is killing time.

When the stated purpose is achieved, the conversation ends. No “while I’m here, let me ask about…” That’s how tool use becomes dependency.

Part 6: How to Talk to Your Kids About AI

Start with Curiosity, Not Rules

“Show me how you’re using ChatGPT” works better than “You shouldn’t use ChatGPT.”

Most kids are eager to demonstrate their AI skills. They’ve figured out clever prompts, discovered weird behaviors, found creative uses. Starting with curiosity gets you invited into their world rather than positioned as the enemy of it.

Ask genuine questions. “What’s the coolest thing you’ve done with it?” “What surprised you?” “Have you noticed it being wrong about anything?” You’re gathering intelligence while showing respect for their experience.

Explain the Technical Reality

Kids can handle technical truth. In fact, they appreciate being treated as capable of understanding complex topics.

“ChatGPT is predicting words based on patterns it learned from reading the internet. It’s not actually understanding you—it’s recognizing that when someone says X, people usually respond with Y. It’s like super-advanced autocomplete.”

This demystifies AI without demonizing it. You’re not saying it’s bad or dangerous—you’re explaining what it actually is. Kids can then make more informed decisions about how to use it.

Share Your Own AI Experiences

If you use AI, share your experiences—including mistakes and limitations you’ve discovered.

“I asked ChatGPT to help me write an email to my boss, but it made me sound like a robot. I had to rewrite it completely.” Or “I tried using it to plan our vacation, but it kept suggesting tourist traps. The travel forum was way more helpful.”

This normalizes both using AI and recognizing its limitations. You’re modeling critical evaluation rather than blind acceptance or rejection.

Acknowledge the Genuine Appeal

Don’t dismiss why kids like AI. The appeal is real and understandable.

“I get why you like talking to ChatGPT. It’s always available, it never judges you, it seems to understand everything you say. That must feel really good sometimes.”

Then pivot to the complexity: “The challenge is that real growth happens through relationships with people who sometimes challenge us, don’t always understand us immediately, and have their own perspectives. AI can’t provide that.”

Set Collaborative Boundaries

Instead of imposing rules, develop them together.

“What do you think are good uses of AI? What seems problematic? Where should we draw lines?”

Kids are often surprisingly thoughtful about boundaries when included in setting them. They might even suggest stricter rules than you would have imposed. More importantly, they’re more likely to follow rules they helped create.

Part 7: Warning Signs and When to Worry

Yellow Flags: Time to Pay Attention

Preferring AI to Human Interaction: “ChatGPT gets me better than my friends” or declining social activities to chat with AI.

Emotional Dependency: Mood changes based on AI availability, panic when they can’t access it, or turning to AI first during emotional moments.

Reality Blurring: Talking about AI as if it has feelings, believing it “cares” about them, or assigning human characteristics to its responses.

Secretive Use: Hiding conversations, using AI late at night in secret, or becoming defensive when you ask about their AI use.

Academic Shortcuts: Sudden improvement in writing quality that doesn’t match in-person abilities, or inability to explain “their” work.

These aren’t emergencies, but they indicate AI use is becoming problematic. Time for conversation and boundary adjustment.

Red Flags: Immediate Intervention Needed

Crisis Consultation: Using AI for serious mental health issues, suicidal thoughts, or self-harm ideation.

Isolation Acceleration: Complete withdrawal from human relationships in favor of AI interaction.

Reality Break: Genuine belief that AI is sentient, that it has feelings for them, or that it exists outside the computer.

Harmful Validation: AI reinforcing dangerous behaviors, validating harmful thoughts, or encouraging risky actions.

Identity Fusion: Defining themselves through their AI relationship, like “ChatGPT is my best friend” said seriously, not jokingly.

These require immediate intervention—not punishment, but professional support. The AI use is symptomatic of larger issues that need addressing.

What Intervention Looks Like

First, don’t panic or shame. AI dependency often indicates unmet needs—loneliness, anxiety, learning struggles. Address the need, not just the symptom.

“I’ve noticed you’re spending a lot of time with ChatGPT. Help me understand what you’re getting from those conversations that you’re not getting elsewhere.”

Consider professional support if AI use seems tied to mental health issues. Therapists increasingly understand AI dependency and can help kids develop healthier coping strategies.

Most importantly, increase human connection. Not forced social interaction, but genuine, patient, non-judgmental presence. The antidote to artificial relationship is authentic relationship.

Part 8: Teaching Critical AI Literacy

The Turing Test Game

Make a game of detecting AI versus human writing. Take turns writing paragraphs and having ChatGPT write paragraphs on the same topic. Try to guess which is which.

This teaches pattern recognition—AI writing has tells. It’s often technically correct but emotionally flat. It uses certain phrases repeatedly. It hedges in predictable ways. Kids who can recognize AI writing are less likely to be fooled by it.

The Fact-Check Challenge

Give your kid a topic they’re interested in. Have them ask ChatGPT about it, then fact-check every claim.

They’ll discover patterns: AI is usually right about well-documented facts, often wrong about specific details, and completely fabricates things that sound plausible. This builds healthy skepticism.

The Prompt Engineering Project

Teach kids to be intentional about AI use by making prompt writing a skill.

“How would you ask ChatGPT to help you understand photosynthesis without doing your homework for you?” This teaches the difference between using AI as a tool versus a replacement.

Good prompts are specific, bounded, and purposeful. Bad prompts are vague, open-ended, and aimless. Kids who learn good prompting learn intentional AI use.

The Bias Detection Exercise

Have your kid ask ChatGPT about controversial topics from different perspectives.

“Explain climate change from an environmental activist’s perspective.” “Now explain it from an oil industry perspective.” “Now explain it neutrally.”

They’ll see how AI reflects the biases in its training data. It’s not neutral—it’s an average of everything it read, which includes lots of biases. This teaches critical evaluation of AI responses.

The Creative Collaboration Experiment

Use AI as a creative partner, not creator.

“Let’s write a story together. You write the first paragraph, AI writes the second, you write the third.” This teaches AI as collaborator rather than replacement.

Or “Ask AI for ten story ideas, pick your favorite, then write it yourself.” This uses AI for inspiration while maintaining human creativity.

Part 9: The School Problem

When Teachers Don’t Understand AI

Many teachers are as confused about AI as parents. Some ban it entirely. Others haven’t realized kids are using it. Few teach critical AI literacy.

Don’t undermine teachers, but supplement their approach. “Your teacher wants you to write without AI, which makes sense—she’s trying to build your writing skills. Let’s respect that while also learning when AI can appropriately help.”

If teachers are requiring AI use without teaching proper boundaries, that’s equally problematic. “Your teacher wants you to use ChatGPT for research. Let’s talk about how to do that while still developing your own thinking.”

The Homework Dilemma

Every parent faces this: your kid is struggling with homework, AI could help, but using it feels like cheating.

Here’s my framework: AI can explain concepts but shouldn’t do the work. It’s the difference between a tutor and someone doing your homework for you.

“I don’t understand this math problem” → AI can explain the concept.
“Do this math problem for me” → That’s cheating.

“Help me organize my essay thoughts” → AI as tool.
“Write my essay” → That’s replacement.

The line isn’t always clear, but the principle is: are you using AI to learn or to avoid learning?

When Everyone Else Is Using It

“But everyone in my class uses ChatGPT!”

They probably are. This is reality. Your kid will face a competitive disadvantage if they don’t know how to use AI while their peers do. The solution isn’t prohibition—it’s superior AI literacy.

“Yes, everyone’s using it. Let’s make sure you’re using it better than they are. They’re using it to avoid learning. You’re going to use it to accelerate learning.”

Teach your kid to use AI more thoughtfully than peers who are just copying and pasting. They should understand what they’re submitting, be able to defend it, and actually learn from the process.

Part 10: The Long Game

Preparing for an AI Future

Your kids will enter a workforce where AI is ubiquitous. They need to learn to work with it, not be replaced by it.

The skills that matter in an AI world: creativity, critical thinking, emotional intelligence, complex problem solving, ethical reasoning. These are exactly what get undermined when kids use AI as replacement rather than tool.

Every time your kid uses AI to avoid struggle, they miss an opportunity to develop irreplaceable human capabilities. Every time they use it to enhance their capabilities, they prepare for a future where human-AI collaboration is the norm.

Building Resilience

Kids who depend on AI for emotional regulation, decision making, and social interaction are fragile. They’re building their sense of self on a foundation that could disappear with a server outage.

Resilience comes from navigating real challenges with human support. It comes from failing and recovering, from being misunderstood and working toward understanding, from sitting with difficult emotions instead of having them immediately validated.

AI can be part of a resilient kid’s toolkit. It can’t be the foundation of their resilience.

Maintaining Connection

The greatest risk of AI isn’t that it will harm our kids directly. It’s that it will come between us.

Every hour your teen spends getting emotional support from ChatGPT is an hour they’re not turning to you. Every decision they outsource to AI is a conversation you don’t have. Every struggle they avoid with AI assistance is a growth opportunity you don’t witness.

Stay curious about their AI use not to control it, but to remain connected through it. Make it something you explore together rather than something that divides you.

Part 11: Concrete Skills to Teach Your Kids

Reality Anchoring Techniques

The Three-Source Rule

Teach kids to verify any important information from AI with three independent sources. But here’s how to actually make it stick:

“When ChatGPT tells you something that matters—something you might repeat to friends or use for a decision—find three places that confirm it. Wikipedia counts as one. A news site counts as one. A textbook or teacher counts as one. If you can’t find three sources, treat it as possibly false.”

Practice this together. Ask ChatGPT about something controversial or recent. Then race to find three sources. Make it competitive—who can verify or debunk fastest?

The “Would a Human Say This?” Test

Teach kids to regularly pause and ask: “Would any real person actually say this to me?”

Role-play this. Read ChatGPT responses out loud in a human voice. They’ll start hearing how unnatural it sounds—no human is that endlessly patient, that constantly validating, that available. When your kid says “My AI really understands me,” respond with “Read me what it said.” Then ask: “If your friend texted exactly those words, would it feel weird?”

The Context Check

AI has no context about your kid’s life. Teach them to spot when this matters:

“ChatGPT doesn’t know you failed your last test, that your parents are divorced, that you have anxiety, that your dog died last month. So when it gives advice, it’s generic—like a horoscope that feels personal but could apply to anyone.”

Exercise: Have your kid ask AI for advice about a specific situation without providing context. Then with full context. Compare the responses. They’ll see how AI just pattern-matches to whatever information it gets.

Emotional Regulation Without AI

The Five-Minute Feeling Rule

Before taking any emotion to AI, sit with it for five minutes. Set a timer. No distractions.

“Feelings need to be felt, not immediately fixed. When you rush to ChatGPT with ‘I’m sad,’ you’re training your brain that emotions need immediate external validation. Sit with sad for five minutes. Where do you feel it in your body? What does it actually want?”

This builds distress tolerance—the ability to experience difficult emotions without immediately seeking relief.

The Human Hierarchy

Create an explicit hierarchy for emotional support:

  1. Self-soothing (breathing, movement, journaling)
  2. Trusted adult (parent, counselor, teacher)
  3. Close friend
  4. Broader social network
  5. Only then, if at all, AI—and never alone for serious issues

Post this list. Reference it. “I see you’re upset. Where are we on the hierarchy?”

The Validation Trap Detector

Teach kids to recognize when they’re seeking validation versus genuine help:

“Are you looking for someone to tell you you’re right, or are you actually open to different perspectives? If you just want validation, that’s human—but recognize that AI will always give it to you, even when you’re wrong.”

Practice: Have your kid present a situation where they were clearly wrong. Ask ChatGPT about it, framing themselves as the victim. Watch how AI validates them anyway. Then discuss why real friends who challenge us are more valuable than AI that always agrees.

Cognitive Independence Exercises

The “Think First, Check Second” Protocol

Before asking AI anything, write down your own thoughts first.

“What do you think the answer is? Write three sentences. Now ask AI. How was your thinking different? Better in some ways? Worse in others?”

This prevents cognitive atrophy by ensuring kids engage their own thinking before outsourcing it.

The Explanation Challenge

If kids use AI for homework help, they must be able to explain the concept to you without looking at any screens.

“Great, ChatGPT explained photosynthesis. Now you explain it to me like I’m five years old. Use your own words. Draw me a picture.”

If they can’t explain it, they didn’t learn it—they just copied it.

The Alternative Solution Game

For any problem-solving with AI, kids must generate one alternative solution the AI didn’t suggest.

“ChatGPT gave you five ways to study for your test. Come up with a sixth way it didn’t mention.” This maintains creative thinking and shows that AI doesn’t have all the answers.

Social Skills Protection

The Awkwardness Practice

Deliberately practice awkward conversations without AI preparation.

“This week, start one conversation with someone new without planning what to say. Feel the awkwardness. Survive it. That’s how social confidence builds.”

Share your own awkward moments. Normalize the discomfort that AI eliminates but humans need to grow.

The Repair Workshop

When kids have conflicts, work through them without AI mediation:

“You and Sarah had a fight. Before you do anything, let’s role-play. I’ll be Sarah. Practice apologizing to me. Now practice if she doesn’t accept your apology. Now practice if she’s still mad.”

This builds actual conflict resolution skills rather than scripted responses from AI.

The Eye Contact Challenge

For every hour of screen interaction (including AI), match it with five minutes of deliberate eye contact conversation with a human.

“You chatted with AI for an hour. Give me five minutes of eyes-up, phone-down conversation. Tell me about your day. The real version, not the summary.”

Critical Thinking Drills

The BS Detector Training

Regularly practice identifying AI hallucinations:

“Let’s play ‘Spot the Lie.’ Ask ChatGPT about something you know really well—your favorite game, book, or hobby. Find three things it got wrong or made up.”

Keep score. Make it competitive. Kids love catching AI mistakes once they learn to look for them.

The Source Detective

Teach kids to always ask: “How could AI know this?”

“ChatGPT just told you about a private conversation between two historical figures. How could it know what they said privately? Right—it can’t. It’s making educated guesses based on patterns.”

This builds natural skepticism about unverifiable claims.

The Bias Hunter

Have kids ask AI the same question from different perspectives:

“Ask about school uniforms as a student, then as a principal, then as a parent. See how the answer changes? AI isn’t neutral—it gives you what it thinks you want to hear based on how you ask.”

Creating Healthy Habits

The Purpose Timer

Before opening ChatGPT, kids set a timer for their intended use:

“I need 10 minutes to understand this math concept.” Timer starts. When it rings, ChatGPT closes.

This prevents “quick questions” from becoming hour-long validation-seeking sessions.

The Weekly Review

Every Sunday, review the week’s AI interactions together:

“Show me your ChatGPT history. What did you use it for? What was helpful? What was probably unnecessary? What could you have figured out yourself?”

No judgment, just awareness. Kids often self-correct when they see their patterns.

The AI Sabbath

Pick one day a week with no AI at all:

“Saturdays are human-only days. All questions go to real people. All problems get solved with human help. All entertainment comes from non-AI sources.”

This maintains baseline human functioning and proves they can survive without AI.

Emergency Protocols

The Crisis Script

Practice exactly what to do in emotional emergencies:

“If you’re having thoughts of self-harm, you don’t open ChatGPT. You find me, call this hotline, or text this crisis line. Let’s practice: pretend you’re in crisis. Show me what you do.”

Actually rehearse this. In crisis, kids default to practiced behaviors.

The Reality Check Partner

Assign kids a human reality-check partner (friend, sibling, cousin):

“When AI tells you something that affects a big decision, run it by Jamie first. Not another AI—Jamie. A human who cares about you and will tell you if something sounds off.”

The Pull-Back Protocol

Teach kids to recognize when they’re too deep:

“If you notice you’re asking AI about the same worry over and over, that’s your signal to stop and find a human. If you’re chatting with AI past midnight, that’s your signal to close it and try to sleep. If AI becomes your first thought when upset, that’s your signal you need more human connection.”

Making It Stick

The key to teaching these skills isn’t perfection—it’s practice. Kids won’t get it right immediately. They’ll forget, slip back into easy patterns, choose AI over awkwardness.

Your job is patient reinforcement. “I notice you went straight to ChatGPT with that problem. Let’s back up. What’s your own thinking first?” Not as punishment, but as practice.

Model the behavior. Show them your own reality anchoring, your own awkward moments, your own times you chose human difficulty over AI ease.

Most importantly, be the human alternative that’s worth choosing. When your kid comes to you instead of AI, make it worth it—even when you’re tired, even when the problem seems trivial, even when AI would give a better technical answer. Your presence, attention, and genuine human response are teaching them that real connection is worth the extra effort.

These skills aren’t just about AI safety—they’re about raising humans who can think independently, relate authentically, and navigate reality even when artificial alternatives seem easier. That’s the real long game.

The Bottom Line

We’re not going back to a world without AI. The question isn’t whether our kids will use it, but how.

The parents who pretend AI doesn’t exist will raise kids vulnerable to its worst aspects. The parents who embrace it uncritically will raise kids dependent on it. The sweet spot—where I hope you’ll land—is raising kids who understand AI well enough to use it wisely.

This requires you to understand it first. Not at an expert level, but well enough to have real conversations. Well enough to set informed boundaries. Well enough to teach critical evaluation.

Your kids need you to be literate in the tools shaping their world. They need you to neither panic nor dismiss, but to engage thoughtfully with technology that’s genuinely complex.

Most of all, they need you to help them maintain their humanity in an age of artificial intelligence. To value human connection over artificial validation. To choose struggle and growth over ease and stagnation. To recognize that what makes them irreplaceable isn’t their ability to use AI, but their ability to do what AI cannot—to be genuinely, messily, beautifully human.

The technical literacy I’ve tried to provide here is just the foundation. The real work is the ongoing conversation with your kids about what it means to grow up in an AI world while remaining grounded in human experience.

That conversation starts with understanding. I hope this guide gives you the confidence to begin.


Source: https://natesnewsletter.substack.com/p/raising-humans-in-the-age-of-ai-a-a3d

The iPhone’s Shortcuts app is smarter than you think, you’re just using it wrong

 

Finding film details from random social media clips, or turning pictures into a full recipe note? All it takes is a single tap.

Shortcuts list on an iPhone 17 Pro.
Nadeem Sarwar / Digital Trends

One of the most common compliments – or complaints – that I often come across in the phone community is that the “iPhone just works.” It’s plenty fast, fluid, and user-friendly. For those who seek the power-user nirvana, Android is where you can tinker with custom ROMs and revel in the joys of customizability.

That, however, doesn’t mean the iPhone can’t pull off its own tricks. On the contrary, it can pull off some seriously impressive multi-step automation wizardry. The best example? Shortcuts. The pre-installed app is pretty impressive, especially with its newfound AI chops.

I recently created a “memory” shortcut for storing important nuggets of information. Instead of the laborious routine of saving pages as bookmarks, copy-pasting text and links, or taking screenshots, I combined it all. With a single button press, the shortcut takes a screenshot, has an AI summarize the on-screen content, gives it a headline and search-friendly hashtags, and saves it all in a designated app.

The Shortcuts app can actually do a lot more, thanks to AI. You can designate ChatGPT for the task, use an on-device AI model, or hand over more demanding tasks to Apple’s Private Cloud Compute for more secure processing. I have created a few AI-driven shortcuts to give you an idea of how utterly convenient they can prove to be on a daily basis.
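To make that flow concrete, here is a minimal sketch of the “memory” shortcut’s logic, written as Python pseudocode. Shortcuts is a visual editor, not code, so every function below (take_screenshot, summarize_with_model, save_note) is a hypothetical stand-in for the corresponding Shortcuts action, not a real API.

```python
# A rough sketch of the "memory" shortcut's pipeline. The function names are
# hypothetical stand-ins for Shortcuts actions, not a real Apple API.

from dataclasses import dataclass


@dataclass
class Memory:
    headline: str
    summary: str
    hashtags: list[str]


def take_screenshot() -> bytes:
    # Stand-in for the "Take Screenshot" action.
    return b"<image data>"


def summarize_with_model(image: bytes) -> Memory:
    # Stand-in for an "Ask ChatGPT" / on-device model action: the prompt asks
    # for a headline, a short summary, and search-friendly hashtags.
    return Memory(
        headline="Example headline",
        summary="One-paragraph summary of the on-screen content.",
        hashtags=["#example", "#saved"],
    )


def save_note(memory: Memory, image: bytes) -> None:
    # Stand-in for a "Create Note" action in the designated app.
    print(f"Saved: {memory.headline} {' '.join(memory.hashtags)}")


def memory_shortcut() -> None:
    image = take_screenshot()
    note = summarize_with_model(image)
    save_note(note, image)


if __name__ == "__main__":
    memory_shortcut()
```

The point of the sketch is the shape of the pipeline: one trigger, three chained actions, no manual copy-pasting in between.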

Finding film or TV show details from random clips

Let’s get straight to business. This is how a custom shortcut took me from a random clip to its streaming destinations:

TikTok and Instagram are brimming with channels that post clips from TV shows and films. These clips often go viral, but in a lot of cases there is no mention of the film’s name, either as an overlay or in the description. It’s an extremely frustrating situation, especially if you’ve made up your mind to watch the whole thing after viewing a 30- or 60-second snippet.

Thankfully, with a single press of the Action Button, you can execute a multi-stage action that will give you the name of a film or TV show, alongside a few other details, such as where to stream it. I just created a shortcut that gets the job done in roughly six to seven seconds.

The simplest route is to use Apple’s cloud-based AI model. You trigger the shortcut, a screenshot is captured, and it’s fed to the AI. Within a few seconds, the AI tells you about the scene, the actors’ names, the film or TV show, and a few more details in a pop-up window. It’s not 100% accurate, but with big entertainment franchises, it gets the job done.

The most accurate approach involves using Google Lens. This is broadly how it works. Let’s say you are watching a film clip on Instagram. As soon as you trigger the shortcut, the phone takes a screenshot and automatically feeds it to Google Lens. The image is scanned, and you see the name of the film or TV show pop up on the screen.

You can stop the shortcut there. But I went a step further and customized it. After the Google Lens step, I added a five-second delay, after which a pop-up appears at the top of the screen. Here, you just enter the name of the movie or TV series and hit enter.

The name is fed to the AI, which then tells you where that particular film or TV show is available to watch, rent, or stream. Think of it like Shazam, but for videos you see on the internet. I also experimented with using Perplexity in the same shortcut, which also gave me extra nuggets of information, such as plot summary, rating, cast, and more.

This is what the interface looks like when I integrate Perplexity into the shortcut.
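For clarity, here is the same multi-stage flow as a minimal Python sketch. Every function is a hypothetical stand-in for a Shortcuts action (screenshot, Lens search, input pop-up, AI query), not a real API, and the five-second delay mirrors the one described above.

```python
# A rough sketch of the film-identification shortcut's flow. Each function is
# a hypothetical stand-in for a Shortcuts action; none of this is a real API.

import time


def take_screenshot() -> bytes:
    return b"<frame from the clip>"


def google_lens_lookup(image: bytes) -> None:
    # Stand-in for the Google Lens step: results appear on screen
    # for the user to read the title from.
    print("Lens results shown on screen...")


def ask_user(prompt: str) -> str:
    # Stand-in for the pop-up input field.
    return input(prompt)


def ask_ai(title: str) -> str:
    # Stand-in for the AI step that returns streaming availability.
    return f"Where to watch {title}: <model answer>"


def film_id_shortcut() -> None:
    image = take_screenshot()
    google_lens_lookup(image)
    time.sleep(5)  # the five-second delay added after the Lens step
    title = ask_user("Enter the film or TV show name: ")
    print(ask_ai(title))


if __name__ == "__main__":
    film_id_shortcut()
```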

A recipe wizard: From clicks to counter-ready

I cook. A lot. Naturally, my social media feed is aware of that as well. However, not all the pictures and videos of delicacies I see on Instagram Reels come with a full recipe attached. In a fair few cases, I don’t even know what I am looking at. This is where AI comes to the rescue again.

Normally, I would take a screenshot and perform a reverse Google image search. Once the identification was done, I would perform another round of searching to find the recipe, understand its origins, and get details about its nutritional value. Of course, some manual note-taking is also involved.

Instead, I do it all with a single tap by activating the “Recipe” shortcut. Let’s say I am looking at a friend’s story on Instagram where they shared pictures of food items. As soon as the shortcut is activated, a screenshot is captured and the image is fed to ChatGPT.

The AI identifies the dish, pulls up the whole recipe, and lists all the ingredients and cooking instructions, alongside a brief overview of the delicacy and its nutritional value for a single serving. All these details, accompanied by the screenshot itself, are then neatly saved to my Apple Notes.

Once again, you can substitute ChatGPT with Google Lens for identification, but that adds an extra step where you type or copy-paste the name of the dish. It’s not much of a hassle, so it comes down to personal preference.

Custom file summarization in a single tap

Apple has already built a summarizer feature within the Apple Intelligence stack. You can activate it using the Writing Tools system, and even access it within Safari’s reading mode. However, it doesn’t work with files locally stored on your phone.

Thanks to Shortcuts, you can have any file analyzed by ChatGPT. Aside from images, OpenAI’s chatbot can handle XLSX, CSV, JSON, and PDF files as well. Just make sure that the ChatGPT extension is enabled within the Apple Intelligence & Siri dashboard in the Settings app.

Now, you might ask: why go through all this trouble instead of using the ChatGPT app? Because that route is a multi-step process. Plus, you have to give the prompt instruction each time you upload a file for analysis in the ChatGPT app. With a custom shortcut, you merely need to share the file from within the share sheet.

More importantly, the shortcut allows you to follow a specific routine each time. For example, I configured it to pick a title, show a brief summary, and then list all the key takeaways as bullet points. For more flexibility, you can automatically copy the AI’s full response and save it as a dedicated note. Simply put, you set the rules for file analysis.
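As a rough illustration of that fixed routine, here is a minimal Python sketch of the prompt template such a shortcut could carry. The title/summary/takeaways structure comes from the article; the exact wording and the build_request helper are hypothetical.

```python
# A minimal sketch of the fixed "routine" the file-summarization shortcut
# enforces: the same prompt travels with every shared file, so the output
# always follows title -> summary -> bullet takeaways. Illustrative only.

SUMMARY_PROMPT = """\
You will receive a file. Respond in exactly this structure:
1. A short descriptive title.
2. A brief summary (2-3 sentences).
3. All key takeaways as bullet points.
"""


def build_request(filename: str) -> dict:
    # In Shortcuts, the shared file and this prompt are handed to the
    # ChatGPT action together; here we just assemble the equivalent payload.
    return {"file": filename, "prompt": SUMMARY_PROMPT}


if __name__ == "__main__":
    print(build_request("quarterly-report.pdf"))
```

The design choice worth copying is that the prompt is fixed once, in one place, instead of being retyped for every file.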

Now, there are two ways of using this shortcut. You dig into the Files app, hit the share button, and then select the Shortcut to get it done. Or, you can treat the shortcut as a standalone app and put its icon on the home screen. With the latter approach, you just have to tap on the icon, and it will automatically open the file picker page.

There’s a lot more you can do by pushing the AI in creative ways within the Shortcuts app. All you need to do is find a task that can benefit from using AI, and you can speed it up by simply creating a shortcut for it. And the best part is that you only have to describe the task at hand, and the AI will handle the rest.

Source: https://www.digitaltrends.com/phones/the-iphones-shortcuts-app-is-smarter-than-you-think-youre-just-using-it-wrong/

Your iPhone Already Has iPhone Fold Software, but Apple Won’t Let You Use It

Recent exploits show iPadOS windows running on an iPhone, hinting at the future of Apple hardware and software alike—while also possibly revealing its incoming foldable phone experience.

Photo-Illustration: WIRED Staff; Getty Images; Courtesy of Apple

Hackers poking around in iOS 26 recently uncovered something Apple definitely didn’t intend anyone to see: every modern iPhone is running the operating system Apple’s upcoming “iPhone Fold” will likely use. Which means these phones are—right now—already capable of running a full, fluid desktop experience.

From a performance standpoint, that shouldn’t be surprising. At Apple’s September 2025 event, the company claimed the A19 Pro chip inside the iPhone Air and iPhone 17 Pro offers “MacBook Pro levels of compute.” And that iPhone chip is reportedly destined to power a cheaper MacBook in 2026. The line between Apple’s hardware categories is being blurred further, then. But what’s wild is that the software side has blurred completely too. It’s just that nobody realized.

For years, Apple has insisted that iOS and iPadOS are distinct, despite sharing code and habitually borrowing each other’s features. But a self-proclaimed “tech geek” on Reddit who got iPad features running on an iPhone claimed they’re not merely similar—they’re essentially the same: “Turns out iOS has all the iPadOS code (and vice versa; you can for instance enable Dynamic Island on iPad).”


TechExpert2910 revealed on Reddit that his hacked iPhone ran iPadOS “incredibly well,” making his 17 Pro Max an “insane pocket computer” with more RAM than his M4 iPad Pro.

The hack relies on an exploit that tricks the iPhone’s operating system into thinking it’s running on an iPad. That unlocks smallish tweaks such as a landscape Home Screen, an iPad-style app switcher, and more Dock items. But it also provides transformative changes such as running desktop-grade apps that aren’t available for iPhone, full windowed multitasking, and optimal external display support. All without Apple Silicon breaking a sweat.

Deskblocked

The exploit is already patched in the iOS 26.2 beta, and the Redditor accused Apple of locking out iPhone users and artificially limiting older devices to push upgrades. But are things really that simple?

It’s not like the “phone as PC” dream is new. Android’s been chasing it since DeX debuted in 2017. Barely anyone cares. So why should Apple? Perhaps the concept is a niche nerd fantasy. And there’s the longtime argument that if you want to do “proper” work, you need a “proper” computer. If even an iPad can’t replace a computer, how can an iPhone?

In June, after 15 years, the iPad got key software features, including resizable and movable windows.

Except, as WIRED demonstrated, an iPad can replace a computer for plenty of people—you just need the right accessories. It therefore follows that the same is true for an iPhone running the exact same software. But where will any momentum for this future come from?

Android 16 is technically ready for another crack at desktop mode, with a new system that builds on DeX. But even now, having finally escaped beta, it’s buried in developer settings. That might be down to the grim state of big-screen Android apps, or the desktop experience itself feeling, politely, “rocky.”

Paradoxically, Apple appears to be further ahead despite never announcing any of this. It already has a deep ecosystem of desktop-grade iPad apps. And the iPad features running on iPhone already look polished. Sure, some interface quirks remain, and you might need to file your fingers to a point to hit window controls. But the performance is fast, fluid, and snappy. So if the experience is this good, why is Apple so determined to hide it?

Profit by Design

One argument is practical. Apple likes each device to be its own thing, optimized for a specific form factor. It’s keen to finesse the transition between platforms rather than have one device to rule them all. A phone lacks a big screen and a physical keyboard. Plugging those things in on a train isn’t as elegant as opening a MacBook or using an iPad connected to a Magic Keyboard. However, with imagination, you can see the outlines of a new ecosystem of profitable accessories for a more capable iPhone.


Could the bottom of your iPhone screen look more like this in the future? Apple’s current phone software certainly makes it possible.

But Apple hasn’t got where it has by selling accessories nor by making a market for others to do so. Most of its profits come from a long-running strategy to nudge people into buying more hardware that coexists. It doesn’t want you to choose between an iPhone, an iPad, a MacBook Air, and an iMac. It wants you to buy all of them.

But if an iPhone can do iPad things, maybe someone won’t buy an iPad. If iPads act too much like Macs, people might not buy as many Macs. Strategically chosen—if sometimes artificial—limits and product segmentation have pride of place in Cupertino’s rulebook. A convergence model could dent user experience and simplicity; but Apple would likely be more fearful of how it could hurt sales.

Hidden Potential

That all said, perhaps there is another explanation: Apple is saving this for an inflection point—the iPhone Fold. Rumors suggest that Apple has solved the “screen crease” problem and will in 2026 ship a foldable with a 7.8-inch, 4:3 display that’s similar to (but sharper than) the iPad mini’s.

A tablet-sized display that doesn’t let you multitask like on an iPad would be absurd, especially on a device likely to cost two or three times more than an actual iPad mini. Doubly so if Apple puts last year’s iPhone chip into a MacBook that will have a full desktop environment and support at least one external display.

And for anyone fretting about being forced into a more desktop-style iPhone, Apple already solved that problem. It killed the Steve Jobs vision of the iPad that sat between two computing extremes by letting users switch modes. The iPhone could follow suit, defaulting to its original purist mode while allowing power users to tap into windowing and external device support.

These hacks, then, have given us a window into the iPhone Fold operating system and other aspects of a possible Apple future. They show that iPad features on iPhone already look slick and make complete sense. And the crazy thing is they’re in your iPhone’s software right now. Next year, they’ll almost certainly be unleashed on the most expensive iPhone Apple has ever made. The question is whether Apple will let regular iPhone users have them, too.

Source: https://www.wired.com/story/your-iphone-already-has-iphone-fold-software-but-apple-wont-let-you-use-it/

OpenAI Goes From Stock Market Savior to Burden as AI Risks Mount

 


Wall Street’s sentiment toward companies associated with artificial intelligence is shifting, and it’s all about two companies: OpenAI is down, and Alphabet Inc. is up.

The maker of ChatGPT is no longer seen as being on the cutting edge of AI technology and is facing questions about its lack of profitability and the need to grow rapidly to pay for its massive spending commitments. Meanwhile, Google’s parent is emerging as a deep-pocketed competitor with tentacles in every part of the AI trade.

“OpenAI was the golden child earlier this year, and Alphabet was looked at in a very different light,” said Brett Ewing, chief market strategist at First Franklin Financial Services. “Now sentiment is much more tempered toward OpenAI.”

 

As a result, the shares of companies in OpenAI’s orbit — principally Oracle Corp., CoreWeave Inc., and Advanced Micro Devices Inc., but also Microsoft Corp., Nvidia Corp. and SoftBank, which has an 11% stake in the company — are coming under heavy selling pressure. Meanwhile, Alphabet’s momentum is boosting not only its stock price, but also those it’s associated with like Broadcom Inc., Lumentum Holdings Inc., Celestica Inc., and TTM Technologies Inc.

Read More: Alphabet’s AI Strength Fuels Biggest Quarterly Jump Since 2005

The shift has been dramatic in magnitude and speed. Just a few weeks ago, OpenAI was sparking huge rallies in any company related to it. Now, those connections look more like an anchor. It’s a change that carries wide-ranging implications, given how central the closely held company has been to the AI mania that has driven the stock market’s three-year rally.

“A light has been shined on the complexity of the financing, the circular deals, the debt issues,” Ewing said. “I’m sure this exists around the Alphabet ecosystem to a certain degree, but it was exposed as pretty extreme for OpenAI’s deals, and appreciating that was a game-changer for sentiment.”

A basket of companies connected to OpenAI has gained 74% in 2025, which is impressive but far shy of the 146% jump by Alphabet-exposed stocks. The technology-heavy Nasdaq 100 Index is up 22%.

OpenAI vs Alphabet: stocks in the orbits of OpenAI and Alphabet have diverged. Source: Bloomberg, Morgan Stanley. Data is normalized to percentage appreciation from January 2, 2025.
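For readers curious about the chart’s method, here is a minimal sketch of that normalization in Python, assuming daily closing prices: rebasing every series to its percentage gain since the first 2025 trading day puts baskets with very different price levels on one axis. The index levels below are made up purely to reproduce the article’s 74% and 146% figures.

```python
# A minimal sketch of the chart's normalization. Inputs are illustrative.

def percent_appreciation(prices: list[float]) -> list[float]:
    # Rebase a price series to percentage gain versus its first value
    # (here, the closing level on January 2, 2025).
    base = prices[0]
    return [(p / base - 1.0) * 100 for p in prices]


openai_basket = [100.0, 130.0, 174.0]    # illustrative index levels
alphabet_basket = [100.0, 180.0, 246.0]

print(percent_appreciation(openai_basket))    # [0.0, 30.0, 74.0]
print(percent_appreciation(alphabet_basket))  # [0.0, 80.0, 146.0]
```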

The skepticism surrounding OpenAI can be dated to August, when it unveiled GPT-5 to mixed reactions. It ramped up last month when Alphabet released the latest version of its Gemini AI model and got rave reviews. As a result, OpenAI Chief Executive Officer Sam Altman declared a “code red” effort to improve the quality of ChatGPT, delaying other projects until it gets its signature product in line.

‘All the Pieces’

Alphabet’s perceived strength goes beyond Gemini. The company has the third highest market capitalization in the S&P 500 and a ton of cash at its disposal. It also has a host of adjacent businesses, like Google Cloud and a semiconductor manufacturing operation that’s gaining traction. And that’s before you consider the company’s AI data, talent and distribution, or its successful subsidiaries like YouTube and Waymo.

 

“There’s a growing sense that Alphabet has all the pieces to emerge as the dominant AI model builder,” said Brian Colello, technology equity senior strategist at Morningstar. “Just a couple months ago, investors would’ve given that title to OpenAI. Now there’s more uncertainty, more competition, more risk that OpenAI isn’t the slam-dunk winner.”

Read More: Alphabet’s AI Chips Are a Potential $900 Billion ‘Secret Sauce’

Representatives for OpenAI and Alphabet didn’t respond to requests for comment.

The difference between first and second place goes beyond bragging rights; it also has significant financial ramifications for the companies and their partners. For example, if users gravitating to Gemini slow ChatGPT’s growth, it will be harder for OpenAI to pay for cloud-computing capacity from Oracle or chips from AMD.

By contrast, Alphabet’s partners in building out its AI effort are thriving. Shares of Lumentum, which makes optical components for Alphabet’s data centers, have more than tripled this year, putting them among the 30 best performers in the Russell 3000 Index. Celestica provides the hardware for Alphabet’s AI buildout, and its stock is up 252% in 2025. Meanwhile Broadcom — which is building the tensor processing unit, or TPU, chips Alphabet uses — has seen its stock price leap 68% since the end of last year.

OpenAI has announced a number of ambitious deals in recent months. The flurry of activity “rightfully brought scrutiny and concern over whether OpenAI can fund all this, whether it is biting off more than it can chew,” Colello said. “The timing of its revenue growth is uncertain, and every improvement a competitor makes adds to the risk that it can’t reach its aspirations.”

In fairness, investors greeted many of these deals with excitement, because they appeared to mint the next generation of AI winners. But with the shift in sentiment, they’re suddenly taking a wait-and-see attitude.

“When people thought it could generate revenue and become profitable, those big deal numbers seemed possible,” said Brian Kersmanc, portfolio manager at GQG Partners, which has about $160 billion in assets. “Now we’re at a point where people have stopped believing and started questioning.”

Kersmanc sees the AI euphoria as the “dot-com era on steroids,” and said his firm has gone from being heavily overweight tech to highly skeptical.

Self-Inflicted Wounds

“We’re trying to avoid areas of over-hype and a lot of those were fueled by OpenAI,” he said. “Since a lot of places have been touched by this, it will be a painful unwind. It isn’t just a few tech names that need to come down, though they’re a huge part of the index. All these bets have parallel trades, like utilities, with high correlations. That’s the fear we have, not just that OpenAI spun up this narrative, but that so many things were lifted on the hype.”

OpenAI’s public-relations flaps haven’t helped. The startup’s Chief Financial Officer Sarah Friar recently suggested the US government “backstop the guarantee that allows the financing to happen,” which raised some eyebrows. But she and Altman later clarified that the company hasn’t requested such guarantees.

Then there was Altman’s appearance on the “Bg2 Pod,” where he was asked how the company can make spending commitments that far exceed its revenue. “If you want to sell your shares, I’ll find you a buyer — I just, enough,” was the CEO’s response.

Altman’s dismissal was problematic because the gap between OpenAI’s revenue and its spending commitments through 2033 is about $207 billion, according to HSBC estimates.

“Closing the gap would need one or a combination of factors, including higher revenue than in our central case forecasts, better cost management, incremental capital injections, or debt issuance,” analyst Nicolas Cote-Colisson wrote in a research note on Nov. 24. Considering that OpenAI is expected to generate revenue of more than $12 billion in 2025, its compute cost “compounds investor nervousness about associated returns,” not only for the company itself, but also “for the interlaced AI chain,” he wrote.

To be sure, companies like Oracle and AMD aren’t solely reliant on OpenAI. They operate in areas that continue to see a lot of demand, and their products could find customers even without OpenAI. Furthermore, the weakness in the stocks could represent a buying opportunity, as companies tied to ChatGPT and the chips that power it are trading at a discount to those exposed to Gemini and its chips for the first time since 2016, according to a recent Wells Fargo analysis.

“I see a lot of untapped demand and penetration across industries, and that will ultimately underpin growth,” said Kieran Osborne, chief investment officer at Mission Wealth, which has about $13 billion in assets under management. “Monetization is the end goal for these companies, and so long as they work toward that, that will underpin the investment case.”

Source: https://www.bloomberg.com/news/articles/2025-12-07/openai-goes-from-stock-market-savior-to-anchor-as-ai-risks-mount

Researchers create 3D catalog of 2.75 billion buildings

All the world’s buildings available as 3D models for the first time

With the GlobalBuildingAtlas, a research team at the Technical University of Munich (TUM) has created the first high-resolution 3D map of all buildings worldwide. The open dataset provides a crucial basis for climate research and for implementing the UN Sustainable Development Goals. It enables more precise models for urbanization, infrastructure, and disaster management, and helps to make cities around the world more inclusive and resilient.

The data enables more accurate models for urbanization, infrastructure, and disaster management
Earth System Science Data

How many buildings are there on Earth – and what do they look like in 3D? The research team led by Prof. Xiaoxiang Zhu, holder of the Chair of Data Science in Earth Observation at TUM, has answered these fundamental questions in a project funded by an ERC Starting Grant. The GlobalBuildingAtlas comprises 2.75 billion building models, covering all structures captured in satellite imagery from the year 2019. This makes it the most comprehensive collection of its kind. For comparison: the largest previous global dataset contained about 1.7 billion buildings. The 3D models, with a resolution of 3×3 meters, are 30 times finer than data from comparable databases.

In addition, 97 percent (2.68 billion) of the buildings are provided as LoD1 3D models (Level of Detail 1). These are simplified three-dimensional representations that capture the basic shape and height of each building. While less detailed than higher LoD levels, they can be integrated at scale into computational models, forming a precise basis for analyses of urban structures, volume calculations, and infrastructure planning. Unlike previous datasets, GlobalBuildingAtlas includes buildings from regions often missing in global maps – such as Africa, South America, and rural areas.

New perspectives for sustainability and climate research

“3D building information provides a much more accurate picture of urbanization and poverty than traditional 2D maps,” explains Prof. Zhu. “With 3D models, we see not only the footprint but also the volume of each building, enabling far more precise insights into living conditions. We introduce a new global indicator: building volume per capita, the total building mass relative to population – a measure of housing and infrastructure that reveals social and economic disparities. This indicator supports sustainable urban development and helps cities become more inclusive and resilient.”
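As a back-of-the-envelope illustration of that indicator, here is a minimal sketch in Python, assuming the LoD1 simplification described above (a footprint extruded to a single roof height, so a building’s volume is footprint area times height). The buildings and population figures are hypothetical.

```python
# A minimal sketch of the "building volume per capita" indicator, assuming
# LoD1 models (footprint extruded to roof height). Inputs are illustrative.

def lod1_volume(footprint_area_m2: float, height_m: float) -> float:
    # LoD1 approximates a building as its footprint extruded to one height.
    return footprint_area_m2 * height_m


def volume_per_capita(buildings: list[tuple[float, float]],
                      population: int) -> float:
    total = sum(lod1_volume(area, height) for area, height in buildings)
    return total / population


# Three hypothetical buildings in a district of 120 residents:
district = [(250.0, 9.0), (400.0, 12.0), (180.0, 6.0)]
print(f"{volume_per_capita(district, 120):.1f} m^3 per person")  # 67.8
```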

Open data for global challenges

The 3D building data from the GlobalBuildingAtlas provides a precise basis for planning and monitoring urban development, enabling cities to take targeted measures to create inclusive and equitable living conditions – for example, by planning additional housing or public facilities such as schools and health centers in densely populated, disadvantaged neighborhoods. At the same time, the data is crucial for climate adaptation: it improves models on topics such as energy demand and CO₂ emissions and supports the planning of green infrastructure. Disaster prevention also benefits, as risks from natural events such as floods or earthquakes can be assessed more quickly.

The data is already attracting a great deal of interest: The German Aerospace Center (DLR), for example, is examining the use of the GlobalBuildingAtlas as part of the “International Charter: Space and Major Disasters”.

Prof. Xiaoxiang Zhu uses satellite data to analyze developments on Earth
Juli Eberle / TUM / ediundsepp Gestaltungsgesellschaft
Publications

Zhu, X. X., Chen, S., Zhang, F., Shi, Y., Wang, Y. “GlobalBuildingAtlas: an open global and complete dataset of building polygons, heights and LoD1 3D models”. Earth System Science Data (ESSD). DOI: 10.5194/essd-17-6647-2025

Further information and links
  • All data and code are freely available via GitHub and mediaTUM, TUM’s media and publication server.
  • Like the databases and satellite data already available to the public, the project complies with all security standards for satellite data. In accordance with the German Satellite Data Security Regulation, the data is not considered sensitive due to its resolution of over 2.5 meters. 
  • Prof. Zhu is Director of the Munich Data Science Institute

Source: Technical University of Munich, Corporate Communications Center

“I was forced to use AI until the day I was laid off.” Copywriters reveal how AI has decimated their industry

Copywriters were among the first to have their jobs targeted by AI firms. These are their stories, three years into the AI era.

Back in May 2025, not long after I put out the first call for AI Killed My Job stories, I received a thoughtful submission from Jacques Reulet II. Jacques shared a story about his job as the head of support operations for a software firm, where, among other things, he wrote copy documenting how to use the company’s product.

“AI didn’t quite kill my current job, but it does mean that most of my job is now training AI to do a job I would have previously trained humans to do,” he told me. “It certainly killed the job I used to have, which I used to climb into my current role.” He was concerned for himself, as well as for his more junior peers. As he told me, “I have no idea how entry-level developers, support agents, or copywriters are supposed to become senior devs, support managers, or marketers when the experience required to ascend is no longer available.”

When we checked back in with Jacques six months later, his company had laid him off. “I was actually let go the week before Thanksgiving now that the AI was good enough,” he wrote.

He elaborated:

Chatbots came in and made it so my job was managing the bots instead of a team of reps. Once the bots were sufficiently trained up to offer “good enough” support, then I was out. I prided myself on being the best. The company was actually awarded a “Best Support” award by G2 (a software review site). We had a reputation for excellence that I’m sure will now blend in with the rest of the pack of chatbots that may or may not have a human reviewing them and making tweaks.

It’s been a similarly rough year for so many other workers, as chronicled by this project and elsewhere—from artists and illustrators seeing client work plummet, to translators losing jobs en masse, to tech workers seeing their roles upended by managers eager to inject AI into every possible process.

And so we end 2025 in AI Killed My Job with a look at copywriting, which was among the first jobs singled out by tech firms, the media, and copywriters themselves as particularly vulnerable to job replacement. One of the early replaced-by-AI reports was the sadly memorable story of the copywriter whose senior coworkers started referring to her as “ChatGPT” in work chats before she was laid off without explanation. And YouTube was soon overflowing with influencers and grifters promising viewers thousands of dollars a month with AI copywriting tools.

But there haven’t been many investigations into how all that’s borne out since. How have the copywriters been faring, in a world awash in cheap AI text generators and wracked with AI adoption mania in executive circles? As always, we turn to the workers themselves. And once again, the stories they have to tell are unhappy ones. These are accounts of gutted departments, dried-up work, lost jobs, and closed businesses. I’ve heard from copywriters who now fear losing their apartments, one who turned to sex work, and others who, to their chagrin, have been forced to use AI themselves.

Readers of this series will recognize some recurring themes: The work that client firms are settling for is not better when it’s produced by AI, but it’s cheaper, and deemed “good enough.” Copywriting work has not vanished completely, but has often been degraded to gigs editing client-generated AI output. Wages and rates are in free fall, though some hold out hope that businesses will realize that a human touch will help them stand out from the avalanche of AI homogeneity.

As for Jacques, he’s relocated to Mexico, where the cost of living is cheaper, while he looks for new work. He’s not optimistic. As he put it, “It’s getting dark out there, man.”

Art by Koren Shadmi.

Before we press on, a quick word: Many thanks for reading Blood in the Machine and AI Killed My Job. This work is made possible by readers who pitch in a small sum each month to support it. And, for $6 a month (the cost of a decent coffee) or $60 a year, you can help ensure it continues, and even, hopefully, expands. Thanks again, and onwards.

The next installments will focus on education, healthcare, and journalism. If you’re a teacher, professor, administrative assistant, TA, librarian, or otherwise work in education, or a doctor, nurse, therapist, pharmacist, or otherwise work in healthcare, please get in touch at AIKilledMyJob@pm.me. Same if you’re a reporter, journalist, editor, or a creative writer. You can read more about the project in the intro post, or the installments published so far.

This story was edited by Joanne McNeil.


They let go of all the freelancers and used AI to replace us

Social media copywriter

I believe I was among the first to have their career decimated by AI. A privilege I never asked for. I spent nearly 6 years as a freelance social media copywriter, contracting through a popular company that worked with clients—mostly small businesses—across every industry you can imagine. I wrote posts and researched topics for everything from beauty to HVAC, dentistry, and even funeral homes. I had to develop the right voice for every client and transition seamlessly between them on any given day. I was frequently singled out and praised, something that wasn’t the norm, and clients loved me. I was excellent at my job, adapting to the constantly changing social media landscape and figuring out how to best the algorithms.

In early 2022, the company I contracted to was sold, which is never a sign of something good to come. Immediately, I expressed my concerns but was told everything would continue as it was and the new owners had no intention of getting rid of freelancers or changing how things were done. As the months went by, I noticed I was getting less and less work. Clients I’d worked with monthly for years were no longer showing up in my queue. I’d ask what was happening and get shrugged off, even as my work was cut in half month after month. At the start of the summer, suddenly I had no work. Not a single client. Maybe it was a slow week? Next week will be better. But the next week I yet again had an empty queue. And the week after. Panicking, I contacted my “boss”, who hadn’t been told anything. She asked someone higher up, and it wasn’t until a week later that she was told the freelancers had all been let go (without being notified) and that the work would be handed off to a few in-house employees who would be using AI to replace the rest of us.

The company transitioned to a model where clients could basically “write” the content themselves, using Mad Libs-style templates that would use AI to generate the copy they needed, with the few in-house employees helping things along with some boilerplate stuff to kick things off.

They didn’t care that the quality of the posts would go down. They didn’t care that AI can’t actually get to know the client or their needs or what works with their customers. And the clients didn’t seem to care at first either, since they were assured it would be much cheaper than having humans do the work for them.

Since then, I’ve failed to get another job in social media copywriting. The industry has been crushed by things like Copy.AI. Small clients keep being convinced that there’s no need to invest in someone who’s an expert at what they do, instead opting for the cheap and easy solution and wondering why they’re not seeing their sales or engagement increasing.

For the moment, honestly, I’ve been forced to get into online sex work, which I’ve never said “out loud” to anyone. There’s no shame in doing it, because many people genuinely enjoy it and are empowered by it, but that’s not the case for me. It’s just the only thing I’ve been able to get that pays the bills. I’m disabled and need a lot of flexibility in the hours I work on any given day, and my old work gave me that flexibility as long as I met my deadlines – which I always did.

I think that’s another aspect of the AI job-killing that a lot of people overlook: what kind of jobs will be left? What kind of rights and benefits will we have to give up just because we’re meant to feel grateful to have any sort of job at all when there are thousands competing for every opening?

–Anonymous

I was forced to use AI until the day I was laid off

Corporate content copywriter

I’m a writer. I’ll always be a writer when it comes to my off-hours creative pursuits, and I hope to eventually write what I’d like to write full-time. But I had been writing and editing corporate content for various companies for about a decade until spring 2023, when I was laid off from the small marketing startup I had been working at for about six months, along with most of my coworkers.

The job mostly involved writing press releases, and for the first few months I wrote them without AI. Then my bosses decided to pivot their entire operational structure to revolve around AI, and despite voicing my concerns, I was essentially forced to use AI until the day I was laid off.

Copywriting/editing and corporate content writing had unfortunately been a feast-and-famine cycle for several years before that, but after this lay-off, there were far fewer jobs available in my field, and far more competition for these few jobs. The opportunities had dried up as more and more companies were relying on AI to produce content rather than human creatives. I couldn’t compete with copywriters who had far more experience than me, so eventually, I had to switch careers. I am currently in graduate school in pursuit of my new career, and while I believe this new phase of my life was the right move, I resent the fact that I had to change careers in the first place.

—Anonymous

I had to close my business after my client started using AI

Freelance copywriter

I worked as a freelance writer for 15 years. The last five, I was working with a single client – a large online luxury fashion seller based in Dubai. My role was writing product copy, and I worked my ass off. It took up all my time, so I couldn’t handle other clients. For the majority of the time, they were sending work five days a week, occasionally weekends too, and I was handling over 1,000 descriptions a month. Sometimes there would be quiet spells for a week or two, so when they stopped contacting me… I first thought it was just a normal “dip”. Then a month passed. Then two. At that point, I contacted them to ask what was happening and they gave me a vague “We have been handling more of the copy in-house”. And that was that – I never heard from them again; they didn’t even bother to tell me that they didn’t need my services any more. I’ve seen the descriptions they use now and they are 100% AI generated. I ended up closing my business because I couldn’t afford to keep paying my country’s self-employment fees while trying to find new clients who would pay enough to make it worth continuing.

-Becky

We had a staff of 8 people and made about $600,000. This year we made less than $10k

Business copywriter

I was a business copywriter for eCommerce brands and did B2B sales copywriting before 2022.

In fact, my agency employed 8 people total at our peak. But then 2022 came around and clients lost total faith in human writing. At first we were hopeful, but over time we lost everything. I had to let go of everyone, including my little sister, when we finally ran out of money.

I was lucky, I have some friends in business who bought a resort and who still value my marketing expertise – so they brought me on board in the last few months, but 2025 was shaping up to be the worst year ever as a freelancer. I was looking for other jobs when my buddies called me.

We went from making something like $600,000 a year at our peak and employing 8 people… to making less than $10K in 2025, before I miraculously got my new job.

Being repeatedly told, implicitly if not directly, that your expertise is not valued or needed anymore – that really dehumanizes you as a person. And I’m still working through the pain of the two-year-long process that demolished my future in that profession.

It’s one of those rare times in life when a man cries because he is just feeling so dehumanized and unappreciated despite pouring his life, heart and soul into something.

I’ve landed on my feet for now with people who value me as more than a words-dispensing machine, and for that I’m grateful. But AI is coming for everyone in the marketing space.

Designers are hardly talked about any more. My leadership is looking forward to the day when they can generate AI videos for promotional materials instead of paying a studio $8K or more to film and produce marketing videos. And Meta is rolling out AI media buying that will replace paid ads agencies.

What jobs will this create? I can see very few. I currently don’t have any faith that this will get better at any point in the future.

I think the reason AI hit my business so hard is that I was positioned towards the “bottom” of the market, in the sense that my customers were nearly all startups and new businesses that people were starting in their spare time.

I had a partner, Jake, and together we got most of our clients through Fiverr. Fiverr customers are generally not big institutions or multinationals, although you do get some of that on the platform… It’s mostly people trying to start small businesses from the ground up.

I remember, when I was first starting out in writing, thinking “I can’t believe this is a job!” because writing has always come naturally to me. But the truth is, when a lot of people out there go to start a business, what’s the first thing they do? They get a website, they find a template, and then they’re staring at a blank page thinking “what should I write?” And for them, that’s not an easy question to answer.

So that’s essentially where we fit in – and there’s more to it, as well, such as Conversion Rate Optimization on landing pages and so forth. When you boil it all down, we were helping small businesses find their message, find their market, and find their media – the way they were going to communicate with their market. And we had some great successes!

But nothing affected my business like ChatGPT did. All through Covid we were doing great, maybe even better because there were a lot of people staying home trying to start a new business – so we’d be helping people write the copy for their websites and so forth.

AI is really dehumanizing, and I am still working through issues of self-worth as a result of this experience. When you go from knowing you are valuable and valued, with all the hope in the world of a full career and the ability to provide other people with jobs… To being relegated to someone who edits AI drafts of copy at a steep discount because “most of the work is already done” …

2022-2023 was a weird time, for two reasons.

First, because I’m a very aware person – I remember that AI was creeping up on our industry before ChatGPT, with Jasper and other tools. I was actually playing with the idea of creating my own AI copywriting tool at the time.

When ChatGPT came out, we were all like “OK, this is a wake up call. We need to evolve…” Every person I knew in my industry was shaken.

Second, because the economy wasn’t that great. The downturn had already started in 2022, and I had already had to let a few people go at that point – I can’t remember exactly when.

The first part of the year is always the slowest. So January through March, you never know if that’s an indication of how bad the rest of the year is going to be.

In our case, it was. But I remember thinking “OK, the stimulus money has dried up. The economy is not great.” So I wasn’t sure if it was just broad market conditions or ChatGPT specifically.

But even the work we were doing was changing rapidly. We’d have people come to us like “hey, this was written by ChatGPT, can you clean it up?”

And we’d charge less because it was just an editing job and not fully writing from scratch.

The drop off from 2022 to 2023 was BAD. The drop off from 2023 to 2024 was CATASTROPHIC.

By the end of that year, the company had lost nearly all of its remaining staff. I had one last push before November 2023 (the end of the year has historically been the best time for our business, with Black Friday and Christmas), but I only succeeded in draining my bank account, and I was forced to let go of our last real employee, my sister, in early 2024. My brother and his wife were also doing some contract work for me at the time, and I had to end that pretty abruptly after our big push failed.

I remember believing that things were going to turn around again once people realized that merely having a writing machine was not enough to create success the way a real copywriter can. After all, the message is only one part of it – and divorced from the overall strategy of market and media, it’s never as effective as it could be.

In other words, there’s a context in which all marketing messages are seen, and it takes a human to understand what will work in that context.

But instead, what happened is that the pace of adoption kept speeding up, and all of those small entrepreneurs who used to rely on us now used AI to do the work.

The technological advancements of GPT-4, and everyone trying to build their own AI, dominated the airwaves throughout 2023 and 2024. And technology adoption skyrocketed.

The thing is, I can’t even blame people. To be honest, when I’m writing marketing copy I use AI to speed up the process.

I still believe you need intelligence and strategy behind your ideas, or they will simply be meaningless words on a screen – but I can’t blame people for using these very cheap tools instead of paying an expert hundreds of dollars to get their website written.

Especially in my end of the market, where we were working with startup entrepreneurs who are bootstrapping their way to success.

When I officially left the business a few months ago, that left just my partner manning the Fiverr account we started with over 8 years ago.

I think the account is active enough to support a single person now, but I wouldn’t be so sure about next year.

Normally there are signs of life around April – but in 2025, May came and there was hardly a pulse in the business.

I still believe there may be a space for copywriters in the future, but much like tailors and seamstresses, it will be a very, very niche market for only the highest-end clients.

—Marcus Wiesner

My hours have been cut from nearly full time to 4-5 a month

Medical writer

I’m a medical writer; I work as a contract writer for a large digital marketing platform, adapting content from pharma companies to fit our platform. Medical writers work in regulatory, clinical, and marketing fields and I’m in marketing. I got my current contract job just 2 years ago, back when you could get this job with just a BA/BS.

In the last 2 years the market has changed drastically. My hours have been cut from nearly full time (up to March ’24) to 4-5 a month now, if I’m lucky. I’ve been applying for new jobs for over a year and have had barely a nibble.

The trend now seems to be to have AI produce content, and then to hire professionals with advanced degrees to check it over – while paying them less per hour than I make now, when I actually get work.

I am no longer qualified to do the job I’ve been doing, which is really frustrating. I’m trying to find a new career, trying to start over at age 50.

—Anonymous

We learned our work had been fed into machine learning, and our jobs were outsourced to India

Editor for Gracenote

So I lost my previous job to AI – and to a lot of other things. I always joke that the number of historical trends that led to me losing it is basically a summary of the recent history of Western Civilization.

I used to be a schedule editor for Gracenote (the company that used to find metadata for CDs that you ripped into iTunes). They got bought by Nielsen, the TV ratings company, and were then tasked with essentially adding metadata to TV guide listings. When you hit the info button on your remote, or when you Google a movie and get the card, a lot of that is Gracenote. The idea was that we could provide accurate, consistent, high-quality text metadata that companies could buy to add to their own listings. There’s a specific style of Gracenote description writing that still sticks out to me every time I see it.

So, basically from when I joined the company in late 2021, things were going sideways. I’m based in the Netherlands, where worker protections are good, but we heard horror stories of whole departments in the US showing up, being called into a “town hall,” and laid off en masse, so the writing was on the wall. We unionised, but management seemed to be dragging their feet on getting us a CAO (Collective Labour Agreement) that would codify a lot of our benefits.

The way the job worked was each editor would have a group of TV channels they would edit the metadata for. My team worked on the UK market, and a lot of us were UK transplants living in the NL. During my time there I did a few groups but, being Welsh, I eventually ended up with the Welsh, Irish and Scottish channels like S4C, RTE, BBC Alba. The two skills we were selling to the company were essentially: knowledge of the UK TV market used to prioritise different shows, and a high degree of proficiency in written English (and I bet you think you know why I lost the job to AI, but hold on).

Around January 2024 they introduced a new tool in the proprietary database we used that totally changed how our work was done. Instead of channel groups that we prioritised ourselves, we were given an interface that would load 10 or so show records from any channel group, auto-sorted by priority. It was then revealed to us that for the last two years or so, every single bit of our prioritisation work had been fed into machine learning to try and work out how and why we prioritised certain shows over others.

“Hold on,” we said, “this kind of seems like you’ve developed a tool to replace us with cheap overseas labour and are about to outsource all our jobs.”

“Nonsense,” said upper management, “ignore the evidence of your lying eyes.”

That is, of course, what they had done.

They had a business strategy they called “automation as a movement,” and we assumed they would be introducing LLMs into our workflow. But, as they openly admitted when they eventually told us what they were doing, LLMs simply weren’t (and still aren’t) good enough to do the work of assimilating, parsing, and condensing the many different sources of information we needed to do the job. Part of it was accuracy: we would often have to research show information online, and a lot of our job amounted to enclosing the digital commons by taking episode descriptions from fan wikis and rewriting them. Part of it was variety: the information for the descriptions was ingested into our system in many different ways, including press sites, press packs from the channels, emails, spreadsheets, and so on, and “AI” at the time wasn’t up to the task. The writing itself would have been entirely possible – it was already very formulaic – but getting the information to the point where it was writable by an LLM was so impractical as to be impossible.

So they automated the other half of the job, the prioritisation, and the writing was outsourced to India. As I said at the start, there are a lot of historical currents at play here. Why are there so many people in India who speak and write English to a high standard? Don’t worry about it!

And, the cherry on the cake: both the union and the works council knew this would be happening, but they were legally barred from telling us because of “competitive advantage.” They negotiated a pretty good severance package for those of us on “vastcontracts” (essentially permanent employees, as opposed to time-limited contracts), but it still saw a team of 10 reduced to 2 in the space of a month.

—Anonymous

Coworkers told me to my face that AI could and maybe should be doing all my work

Nonprofit communications worker

I currently work in nonprofit communications, and worked as a radio journalist for about four years before that. I graduated college in 2020 with a degree in music and broadcasting.

In my current job, I hear about the benefits of AI on a weekly basis. Unfortunately, those benefits consist of doing tasks that are a part of my direct workload. I’m already struggling to handle the amount of downtime that I have, as I had worked in the always-behind-schedule world of journalism before this (in fact, I am writing this on the clock right now). My duties consist mainly of writing for and putting together weekly and quarterly newsletters and writing our social media.

After a volunteer who recorded audio versions of our newsletters passed away suddenly, it was brought up in a meeting two hours after we heard the news that AI should be the one to create the audio versions going forward. I had to remind them that I am in fact an award-winning radio journalist and audio producer (I produce a few podcasts on a freelance basis, some of which are quite popular) and that I already have little work to do and would be able to take over those duties. After about two weeks of fighting, it was decided that I would be recording those newsletters.

I also make sure our website is up to date on all of our events and community outings. At some point, I stopped being asked to write blurbs about the different events, and I learned that this task was now being done by our IT manager using AI. The blurbs suck, but I don’t get to make that call.

It has been brought up more than once that our social media is usually pretty fact-forward and could easily be written by AI. That might be true, but it is also about half of my already very light workload. If I lost that, I would have very little to do. This has not yet been decided.

I have been told (to my face!) by my coworkers that AI could and maybe should be doing all of my work. People who are otherwise very progressive-leaning seem to see no problem with me being out of work. While it was a win for me to be able to record the audio newsletters, I feel as if I am losing the battle for the right to do what I have spent the last five years of my life doing. I am 30 and making pennies, barely able to afford a one-bedroom apartment, while logging three to four hours of solitaire on my phone every day. This isn’t what I signed up for in life. My employers have given me some new work to do, but it is mostly planning parties and spreading cheer through the workplace, something I loathe and never asked for. There are no jobs in my field in my area.

I have seen two postings in the past six months for communications jobs that pay enough for me to continue living in my apartment. I got neither of them.

While I am still able to write my newsletter articles, those give me very little joy, and if things keep progressing at this rate I won’t even have those. I’ll be nothing but a party planner. I don’t even like parties. Especially not for people who think I should be out of a job.

So far, I have seen little pushback from my employer against having AI do my entire job. Even though I think this is a horrible idea, as the topics I write about are often sensitive and personal, I have no faith that they will not go in this direction. At this point, I am concerned about layoffs and my financial future.

[We checked in with the contributor a few weeks after he reached out to us and he gave us this update:]

I am now being sent clearly AI-written articles from heads of other departments (on subjects that I can and will soon be writing about) for publication on our website. And when I say “clearly AI,” I mean I took one look and knew immediately, and was backed up by an online AI checker (which I realize is not always accurate, but still). The other change is that the past several weeks have taught me that I don’t want to be a part of this field any longer. I can find another comms job, and actually have an interview with another company tomorrow, but I have no reason to believe that they won’t also be pushing for AI at every turn.

—Anonymous

I’m a copywriter by trade. These days I do very little

Copywriter

I’m a copywriter by trade. These days I do very little. The market for my services is drying up rapidly, and I’m not the only one who is feeling it. I’ve spoken to many copywriters who have noticed a drop in their work, or whose clients are writing with ChatGPT and asking copywriters to simply edit it.

I have clients who ask me to use AI wherever I can and to let them know how long it takes. It takes me less time and that means less money.

Some copywriters have just given up on the profession altogether.

I have been working with AI for a while. I teach people how to use it. What I notice is a move towards becoming an operator.

I craft prompts, edit through prompts and add my skills along the way (I feel my copywriting skills mean I can prompt and analyse output better than a non-writer). But writing like this doesn’t feel like it used to. I don’t go through the full creative process. I don’t do the hard work that makes me feel alive afterwards. It’s different, more clinical and much less rewarding.

I don’t want to be a skilled operator. I want to be a human copywriter. Yet, I think these days are numbered.

—Anonymous

I did “adapt or die” using AI, but I’m still in a precarious position

Ghostwriter

From 2010 to today I worked as a freelance writer in two capacities: freelance journalism for outlets like Cannabis Now, High Times, Phoenix New Times, and The Street, and ghostwriting through a variety of marketplaces (Elance, Fiverr, WriterAccess, Scripted, Crowd Content) and agencies (Volume 9, Influence & Co, Intero Digital, Cryptoland PR).

The freelance reporting market still exists but is extremely competitive and pretty poorly paid, so I largely made my living ghostwriting to supplement that income. The marketplaces have all largely dried up unless you have a highly ranked account; I do not, because I never wanted to grind through the low-paid work long enough. I did attempt to use ChatGPT for low-paid WriterAccess jobs but got declined.

Meanwhile, my steadiest ghostwriting client was Influence & Co/Intero Digital. Through this agency, I have ghostwritten articles for nearly everyone you can think of (except Vox/Verge): NYT, LA Times, WaPo, WSJ, Harvard Business Review, VentureBeat, HuffPost, AdWeek, and so many more. And I’ve done it for execs at large tech companies, politicians, and more. The reason it worked is that they had guest posts down to a science.

They built a database of all the publishers’ guidelines. If I wanted to be in HBR, I knew the exact submission guidelines and could pitch relevant topics based on the client. Once the pitch is accepted, an outline is written, and the client is interviewed. This interview is crucial because it’s where we tap into the source and gain firsthand knowledge that can’t be found online. It also captures the client’s natural voice. I then combine the recorded interview with targeted online research to find statistics and studies to back up what the client says, connect it to recent events, and format it to the publisher’s specs.

So ChatGPT came along in December 2022, and for most of 2023 things were fine, although Influence & Co was bought by Intero, so internal issues were arising. I was with this company from the start – from when they were emailing Word docs, through building the database, and through the company being sold several times. I can go on and on about how it all works.

We as writers don’t use ChatGPT, but it still seeped into the workflow from the client side. The client interview I mentioned above as vital – because it gets information you can’t find online, plus the client’s voice and everything you need to do it right – well, those clients started using ChatGPT. By the end of 2023, I couldn’t handle it anymore because my job had fundamentally changed. I was no longer learning anything. That vital mix that made it work was gone, and it was all me combining ChatGPT and the internet to try and make it fit into those publications above, many of which implemented AI detection, started publishing their own AI articles, and stopped accepting outside contributions.

The thing about writing in this instance is that it doesn’t matter how many drafts you write: if it doesn’t get published in an acceptable publication, it looks like we did nothing. What was steady work for over a decade slowed to a trickle, and I was tired of the work that was coming in because it was so bad.

Last summer, I emailed them and quit. I could no longer depend on the income; it was $1,500-$3,000 a month for over a decade, and by 2024 it was $100 a month. And I hated doing it. It was the lowest-level BS work, and I hated it so much. I had loved that job because I learned so much and was challenged trying to get into all those publications, even if it was a team effort and not just me. I wrote some killer articles that ChatGPT never could. And the reason AI took my job is that clients who hired me for hundreds to thousands of dollars a month decided it wasn’t worth their time to follow our process, and they use ChatGPT instead.

That is why I think it’s important to talk about. I probably could still be working today in what became a content mill. And the reason it ultimately became no longer worth it isn’t all the corporate changes. It wasn’t my boss who was using AI – it was our customers. Working with us was deemed not important, and it’s impossible to explain to someone in an agency environment that they’re doing it to themselves. They will just go to a different agency and keep trying, and many of the unethical ones will pull paid tricks that make it look more successful than it is, like paying Entrepreneur $3,000 for a year in their leadership network. (At roughly 20 posts a year, that comes out to paying $150 per published post, which is wild considering the pay scale above.)

The whole YEC (Young Entrepreneur Council) publishing conglomerate is another rabbit hole. Forbes, Cointelegraph, Newsweek, and others have the same paid-club structure that happens to come with guest-post access. And those publishers allow paid marketing in the guise of editorials.

I could probably write a book about the backend of all this stuff and how guest posts end up on every media outlet on the planet. Either way, ChatGPT ruined it, and I’m largely retired now. I am still doing some ghostwriting, but it’s more in the vein of PR and marketing work for various agencies I can find that need writers. The market still exists, even if I have to work harder for clients.

And inexplicably, the reason we met originally was that I was involved in the start of Adobe Stock accepting AI outputs from contributors. I now earn $2,500 per month consistently from that, and I have a lot of thoughts about how, as a writer with deep inside knowledge of the writing industry, I couldn’t find a single way to “adapt or die” and leverage ChatGPT to make money. I could probably put up a website and build some social media bots, but plugging AI into the existing industry wasn’t possible. It was already competitive. Yet I somehow managed to build a steady recurring residual income stream selling Midjourney images on Adobe Stock for $1 a piece. I’m on track to earn $30,000 this year from that, compared to only $12,000 from writing. I used to earn $40,000-$50,000 a year doing exclusively writing from 2011-2022.

I did “adapt or die” using AI, but I’m still in a precarious position. If Adobe shuts down or stops accepting AI, I’ll be screwed. It doesn’t help that I’m very vocally against Adobe and called them out last year via Bloomberg for training Firefly on Midjourney outputs when I’m one of the people making money from it. I’m fascinated to see how the court cases end up and how they impact my portfolio. I’m currently working to learn photography and videography well enough to head to Vegas and LA for conferences next year to build a real editorial stock portfolio across the other sites.

So my human writing job was reduced below a living wage, and I have an AI image portfolio keeping me afloat while I try to build a human image/video portfolio faster than AI images get banned. Easy peasy, right?

—Brian Penny

The agency was begging me to take on more work. Then it had nothing for me

Freelance copywriter

I was a freelance copywriter. I am going to be fully transparent and say I was never one of those people who hustled the best, but I had steady work. Then AI came, and one of the main agencies I worked for went from begging me to take on more work to having zero work for me in just 6-8 months. I struggled to find other income, then found another agency that had come out of the initial AI hype with a base of clients who had realized AI was slop – only for its client base to be decimated by Trump’s tariffs about a month after I joined.

What I think people fail to realize when they talk about AI is that it’s coming on the tail end of a years-long employment crisis for college grads. I only started freelancing because I applied to hundreds of jobs after winding up back at my mom’s house during COVID-19. Anecdotally, most of the friends I graduated with (Class of 2019) spent years struggling to find stable, full-time jobs with health insurance, pre-AI. Add AI to the mix, and getting your foot in the door of most white-collar industries just got even harder.

As I continue airing my grievances in your inbox: I remember when ChatGPT first came out, a lot of smug literary types on Twitter were saying “if your writing can be replaced by AI, then it wasn’t good to begin with,” and that made me want to scream. The writing that I’m actually good at is the writing that nobody was going to pay me for, because the media landscape is decimated!

Content writing/copywriting was supposed to be the way you support yourself as an artist, and now even that’s gone.

—Rebecca Duras

My biggest client replaced me with a custom GPT. They surely trained it using my work

Copywriter and Marketing Consultant

I am a long-time solopreneur and small business owner, who got into the marketing space about 8 years ago. This career shift was quite the surprise to me, as for most of my career I didn’t like marketing…or marketers. But here we are ;p

While I don’t normally put it in these terms, what shifted everything for me was realizing that copywriting was a thing — it could make a huge difference in my business and for other businesses, too. With a BA in English, and after doing non-marketing writing projects on the side for years, it just made a ton of sense to me that the words we use to talk about our businesses can make a big difference. I was hooked.

After pursuing some training, I had a lucrative side-hustle doing strategic messaging work and website copy for a few years before jumping into full-time freelancing in 2021. The work was fun, the community of marketers I was a part of was amazing, and I was making more money than I ever could have in my prior business.

And while the launch of ChatGPT in Nov ‘22 definitely made many of us nervous — writing those words brings into focus how stressful the existential angst has actually been since that day — for me and many of my copywriting friends, the good times just kept rolling. 2023 was my best year ever in business — by a whopping 30%. I wasn’t alone. Many of my colleagues were also killing it.

All of that changed in 2024.

Early that year, the AI propaganda seemed to reach its crescendo, and it started significantly impacting my business. I quickly noticed leads were down, and financially, things started feeling tight. Then, that spring, my biggest retainer client suddenly gave me 30 days’ notice that they wouldn’t renew my contract – which made up half of what I needed to live on. The decision caught everyone, including the marketing director, off guard. She loved what I was doing for them and cried when she told me the news. I later found out through the grapevine that the CEO and his right-hand guy were hoping to replace me with a custom GPT they had created. They surely trained it using my work.

The AI-related hits kept coming. The thriving professional community I enjoyed pretty much imploded that summer – largely because of some unpopular leadership decisions around AI. Almost all of my skilled copywriter friends left the organization — and while I’ve lost touch with most, the little I have heard is that almost all of them have struggled. Many have found full-time employment elsewhere.

I won’t go into all the ins and outs of what has happened to me since, and I’ll leave my rant about getting AI slop from my clients to “edit” alone. (Briefly: that task is beyond miserable.)

But I will say from May of 2024 to now, I’ve gone from having a very healthy business and amazing professional community, to feeling very isolated and struggling to get by. Financially, we’ve burned through $20k in savings and almost $30k in credit cards at this point. We’re almost out of cash and the credit cards are close to maxed. Full-time employment that’d pay the bills (and get us out of our hole) just isn’t there. Truthfully, if it wasn’t for a little help from some family – and basically being gifted two significant contracts through a local friend – we’d be flat broke with little hope on the horizon. Despite our precarious position, continuing to risk freelance work seems to be our best and pretty much only option.

I do want to say, though, that even though it’s bleak, I see some signs of hope. In the last few months, in my experience, many business owners have been waking up to the fact that AI can’t do what it claims it can. Moreover, with all of the extra slop around, they’re feeling even more overwhelmed – which means if you can do any marketing strategy and consulting, you might make it.

But while I see that things might be starting to turn, the pre-AI days of junior copywriting roles and of freelancers being able to make lots of money writing non-AI content seem to be long gone. I think those writers who don’t lean on AI and find a way to make it through will be in high demand once the AI illusion starts to lift en masse. I just hope enough business owners who need marketing help wake up before then, so that more of us writers don’t have to starve.

—Anonymous

source: https://www.bloodinthemachine.com/p/i-was-forced-to-use-ai-until-the

Kids in China Are Using Bots and Engagement Hacks to Look More Popular on Their Smartwatches

In China, parents are buying smartwatches for children as young as 5, connecting them to a digital world that blends socializing with fierce competition.

At what age should a kid ideally get a smartwatch? In China, parents are buying them for children as young as five. Adults want to be able to call their kids and track their location down to a specific building floor. But that’s not why children are clamoring for the devices, specifically ones made by a company called Xiaotiancai, which translates to Little Genius in English.

The watches, which launched in 2015 and cost up to $330, are a portal into an elaborate world that blends social engagement with relentless competition. Kids can use the watches to buy snacks at local shops, chat and share videos with friends, play games, and, sure, stay in touch with their families. But the main activity is accumulating as many “likes” as possible on their watch’s profile page. On the extreme end, Chinese media outlets have reported on kids who buy bots to juice their numbers, hack the watches to dox their enemies, and sometimes even find romantic partners. According to tech research firm Counterpoint Research, Little Genius accounts for nearly half of the global market for kids’ smartwatches.

Status Games

Over the past decade, Little Genius has found ways to gamify nearly every measurable activity in the life of a child—playing ping pong, posting updates, the list goes on. Earning more experience points boosts kids to a higher level, which increases the number of likes they can send to friends. It’s a game of reciprocity—you send me likes, and I’ll return the favor. One 18-year-old recently told Chinese media that she had struggled to make friends until four years ago, when a classmate invited her into a Little Genius social circle. She racked up more than one million likes and became a mini-celebrity on the platform. She said she met all three of her boyfriends through the watch, two of whom she broke up with because they asked her to send erotic photos.

High like counts have become a sort of status symbol. Some enthusiastic Little Genius users have taken to RedNote (or Xiaohongshu), a prominent Chinese social media app, to hunt for new friends and collect more likes and badges. As video tutorials on the app explain, low-level users can give out only five likes a day to any one friend; higher-ranking users can give out 20. Because the watch limits its owner to a total of 150 friends, kids are incentivized to maximize their number of high-level friends. Lower-status kids, in turn, are compelled to engage in competitive antics so they don’t get dumped by higher-ranking friends.

“They feel this sense of camaraderie and community,” says Ivy Yang, founder of New York-based consultancy Wavelet Strategy, who has studied Little Genius. “They have a whole world.” But Yang has reservations about the way the watch seems to commodify friendship. “It’s just very transactional,” she adds.

Engagement Hacks

On RedNote/Xiaohongshu, people post videos about circumventing Little Genius’s daily like limits, with titles such as “First in the world! Unlimited likes on the new Little Genius homepage!” The competitive pressure has also spawned businesses that promise to help kids boost their metrics. Some high-ranking users sell their old accounts. Others sell bots that send likes, or offer to keep accounts active while the owner of a watch is in class.

Get enough likes—say, 800,000—and you become a “big shot” in the Little Genius community. Last month, a Chinese media outlet reported that a 17-year-old with more than 2 million likes used her online clout to sell bots and old accounts, earning her more than $8,000 in a year. Though she enjoyed the fame that the smartwatch brought her, she said she left the platform after getting into fights with other Little Genius “big shots” and facing cyberbullying.

In September, a Beijing-based organization called China’s Child Safety Emergency Response warned parents that children with Little Genius watches were at risk of developing dangerous relationships or falling victim to scams. Officials have also raised alarms about these hidden corners of the Little Genius universe: the Chinese government has begun drafting national safety standards for children’s watches, following growing concerns over internet addiction, content unfit for children, and overspending via the watches’ payment function. The company did not respond to requests for comment.

I talked to one parent who had been reluctant to buy the watch. Lin Hong, a 48-year-old mom in Beijing, worried that her nearsighted daughter, Yuanyuan, would become obsessed with its tiny screen. But once Yuanyuan turned 8, Lin relented and splurged on the device. Lin’s fears quickly materialized.

Yuanyuan loved starting her day by customizing her avatar’s appearance. She regularly sent likes to her friends and made an effort to run and jump rope to earn more points. “She would look for her smartwatch first thing every morning,” Lin said. “It was like adults, actually, they’re all a bit addicted.”

To curb her daughter’s obsession, Lin limited Yuanyuan’s time on the watch. Now she’s noticing that her daughter, who turns 9 soon, chafes at her mother’s digital supervision. “If I call her three times, she’ll finally pick up to say, ‘I’m still out, stop calling. I’m not done playing yet,’ and hang up,” Lin said. “If it’s like this, she probably won’t want to keep wearing the watch for much longer.”


This is an edition of Zeyi Yang and Louise Matsakis’s Made in China newsletter. Read previous newsletters here.