AOH1996 has been developed over two decades to target a protein found in all forms of the disease
A new “cancer-stopping” drug has been found to “annihilate” solid cancerous tumours in early stage studies.
The chemotherapy drug leaves healthy cells unaffected, scientists said.
The AOH1996 drug is named after a child – Anna Olivia Healy, born in 1996, who died when she was only nine after being diagnosed with a rare childhood cancer neuroblastoma.
Prof Linda Malkas and her team spent two decades developing the drug that targets a protein in all cancers, including the cancer that led to Anna’s death.
The protein, proliferating cell nuclear antigen (PCNA), was once thought too challenging to aim targeted therapies at.
PCNA in its mutated form encourages tumours to grow by aiding DNA replication and repair of cancerous cells.
Prof Malkas and her team at the City of Hope in California, one of the United States’ largest cancer research and treatment organisations, said the targeted chemotherapy appears to “annihilate” all solid tumours in preclinical research.
Selectively kills cancer cells
AOH1996 was tested in more than 70 cell lines and was found to selectively kill cancer cells by disrupting the normal cell reproductive cycle, but it did not interrupt the reproductive cycle of healthy stem cells.
Pre-clinical studies suggest the drug has been shown to be effective in treating cells derived from breast, prostate, brain, ovarian, cervical, skin and lung cancers.
The drug still needs to go through rigorous safety and efficacy testing and large-scale clinical trials before it can be used widely.
The first patient received the potentially cancer-stopping pill in October with the phase one clinical trial still ongoing and expected to last for at least two years.
Patients are still being recruited to the trial.
Researchers are also still examining mechanisms that make the drug work in animal studies.
‘Like snowstorm that closes airline hub’
Prof Malkas said: “PCNA is like a major airline terminal hub containing multiple plane gates.
“Data suggests PCNA is uniquely altered in cancer cells, and this fact allowed us to design a drug that targeted only the form of PCNA in cancer cells.
“Our cancer-killing pill is like a snowstorm that closes a key airline hub, shutting down all flights in and out only in planes carrying cancer cells.”
The professor called the results “promising” but made clear that research has only found AOH1996 can suppress tumour growth in cell and animal models.
Long Gu, the lead author of the study, said: “No one has ever targeted PCNA as a therapeutic because it was viewed as ‘undruggable’, but clearly City of Hope was able to develop an investigational medicine for a challenging protein target.”
The study, titled “Small Molecule Targeting of Transcription-Replication Conflict for Selective Chemotherapy”, was published in the Cell Chemical Biology journal.
Open letter on the feasibility of „Chat Control“:Assessments from a scientific point of view
Update: A parallel initiative is aimed at the EU institutions and is available in English at the CSA Academia Open Letter . Since the very similar arguments were formulated in parallel, they support each other.
The initiative of the EU Commission discussed under the name “ Chat Control ”, the unprovoked monitoring of various communication channels to detect child pornography, terrorist or other “undesirable” material – including attempts at early detection (e.g. “grooming” minors through text messages that build trust) – mandatory for mobile devices and communication services, has recently been expanded to include the monitoring of direct audio communications . Some states, including Austria and Germany , have already publicly declared that they will not support this initiative for monitoring without cause. AlsoCivil protection and children’s rights organizations have rejected this approach as excessive and at the same time ineffective . Recently, even the legal service of the EU Council of Ministers diagnosed an incompatibility with European fundamental rights. Irrespective of this, the draft will be tightened up even more and extended to other channels: in the last version even to audio messages and conversations. The approach appears to be coordinated with corresponding attempts in the US ( “EARN IT” and “STOP CSAM” Acts ) and the UK (“Online Safety Bill”).
As scientists who are actively researching in various areas of this topic, we therefore make the declaration in all clarity: This advance cannot be implemented safely and effectively. There is currently no foreseeable further development of the corresponding technologies that would technically make such an implementation possible. In addition, according to our assessment, the hoped-for effects of these monitoring measures are not to be expected. This legislative initiative therefore misses its target, is socio-politically dangerous and would permanently damage the security of our communication channels for the majority of the population.
The main reasons against the feasibility of „Chat Control“ have already been mentioned several times. In the following, we would like to discuss these specifically in the interdisciplinary connection between artificial intelligence (AI, artificial intelligence / AI), security (information security / technical data protection) and law .
Our concerns are:
Security: a) Encryption is the best method for internet security. Successful attacks are almost always due to faulty software. b) A systematic and automated monitoring (ie „scanning“) of encrypted content is technically only possible if the security that can be achieved through encryption is massively violated, which is associated with considerable additional risks. c) A legal obligation to integrate such scanners will make secure digital communications in the EU unavailable to the majority of the population, but will have little impact on criminal communications.
AI: a) Automated classification of content, including methods based on machine learning, is always subject to errors, which in this case will lead to high false positives. b) Special monitoring methods, which are carried out on the end devices, open up additional possibilities for attacks up to the extraction of possibly illegal training material.
Law: a) A sensible demarcation from the explicitly permitted use of specific content, for example in the educational sector or for criticism and parody, does not appear to be automatically possible. b) The massive encroachment on fundamental rights through such an instrument of mass surveillance is not proportionate and would cause great collateral damage in society.
In detail, these concerns are based on the following scientifically recognized facts:
Security
Encryption using modern methods is an indispensable basis for practically all technical mechanisms for maintaining security and data protection on the Internet. In this way, communication on the Internet is currently protected as the cornerstone for current services, right through to critical infrastructure such as telephone, electricity, water networks, hospitals, etc. Trust in good encryption methods is significantly higher among experts than in other security mechanisms. Above all, the average poor quality of software in general is the reason for the many publicly known security incidents. Improving this situation in terms of better security therefore relies primarily on encryption.
Automatic monitoring („scanning“) of correctly encrypted content is not effectively possible according to the current state of knowledge. Procedures such as „Fully Homomorphic Encryption“ (FHE) are currently not suitable for this application – neither is the procedure capable of this, nor is the necessary computing power realistically available. A rapid improvement is not foreseeable here either.
For these reasons, earlier attempts to ban or restrict end-to-end encryption were mostly quickly abandoned internationally. The current chat control push aims to have monitoring functionality built into the end devices in the form of scanning modules (“Client-Side Scanning” / CSS) and therefore to scan the plain text content before secure encryption or after secure decryption . Providers of communication services would have to be legally obliged to implement this for all content. Since this is not in the core interest of such organizations and requires effort in implementation and operation as well as increased technical complexity, it cannot be assumed that the introduction of such scanners will be voluntary – in contrast to scanning on the server side.
Secure messengers such as Signal or Threema and WhatsApp have already publicly announced that they will not implement such client scanners, but to withdraw from the corresponding regions. This has different implications for communication depending on the use case: (i) (adult) criminals will simply communicate with each other via “non-compliant” messenger services to further benefit from secure encryption. The increased effort, for example to install other apps on Android via sideloading that are not available in the usual app stores in the respective country, is not a significant hurdle for criminal elements. (ii) Criminals communicate with possible future victims via popular platforms, which would be the target of the mandatory surveillance measures discussed. In this case, it can be assumed that informed criminals will quickly lure their victims to alternative but still internationally recognized channels such as Signal, which are not covered by the monitoring. (iii) Participants exchange problematic material without being aware that they are committing a crime. This case would be reported automatically and possibly also lead to the criminalization of minors without intent. The restrictions would therefore primarily affect the broad – and irreproachable – mass of the population.It would be utterly delusional to think that without built-in monitoring, secure encryption could still be reversed. Tools like Signal, Tor, Cwtch, Briar and many others are widely available as open source and can easily be removed from central control. Knowledge of secure encryption is already common knowledge and can no longer be censored. There is no effective way to technically block the use of strong encryption without Client Side Scanning (CSS). If surveillance measures are prescribed in messengers, only criminals whose actual crimes outweigh the violation of the surveillance obligation will maintain their privacy.
Furthermore, the complex implementation forced by proposed scanner modules creates additional security problems that do not currently exist. On the one hand, this represents new software components, which in turn will be vulnerable. On the other hand, the Chat Control proposals consistently assume that the scanner modules themselves will remain confidential, since they would be trained on content that is already punishable for mere possession (built into the Messenger app), on the one hand, and simply for testing evasion methods, on the other can be used. It is also an illusion that such machine learning models or other scanner modules, distributed to billions of devices under the control of end users, can ever be kept secret.NeuralHash “ module for CSAM detection, which was extracted almost immediately from corresponding iOS versions and is thus openly available . The assumption by Chat Control proposals that these scanner modules could be kept confidential is therefore completely unfounded and incorrect Corresponding data leaks are almost unavoidable here.
artificial intelligence
We have to assume that machine learning (ML) models on end devices cannot, in principle, be kept completely secret. This is in contrast to server-side scanning, which is currently legally possible and also actively practiced by various providers to scan content that has not been end-to-end encrypted. ML models on the server side can be reasonably protected from being read with the current state of the art and are less the focus of this consideration.
A general problem with all ML-based filters are false classifications, i.e. that known “undesirable” material is not recognized as such with small changes (also referred to as “false negative” or “false non-match”). For parts of the push, it is currently unknown how ML models should be able to recognize complex, unfamiliar material with changing context (e.g. „grooming“ in text chats) with even approximate accuracy. The probability of high false negative rates is high.In terms of risk, however, it is significantly more serious if harmless material is classified as “undesirable” (also referred to as “false positive” or “false match” or also as “collision”). Such errors can be reduced, but in principle cannot be ruled out. In addition to the false accusation of uninvolved persons, false positives also lead to (possibly very) many false reports for the investigative authorities, which already have too few resources to investigate reports.
The assumed open availability of ML models also creates various new attack possibilities. Using the example of Apple NeuralHash , random collisions were found very quickly and programs were freely released to generate any collisions between images . This method, also known as “malicious collisions”, uses so-called adversarial attacks against the neural network and thus enables attackers to deliberately classify harmless material as a “match” in the ML model and thus classify it as “undesirable”. In this way, innocent people can be harmed in a targeted manner by automatic false reports and brought under suspicion – without any illegal action on the part of the attacked or attacker.
The open availability of the models can also be used for so-called „training input recovery“ in order to extract (at least partially) the content used for training from the ML model. In the case of prohibited content (e.g. child pornography), this poses another massive problem and can further increase the damage to those affected by the fact that their sensitive data (e.g. images of abuse used for training) can continue to be published. Because of these and other problems, Apple, for example, withdrew the proposal .We note that this latter danger does not occur with server-side scanning by ML models, but is newly added by the chat control proposal with client scanner.
Legal Aspects
The right to privacy is a fundamental right that may only be interfered with under very strict conditions. Whoever makes use of this basic right must not be suspected from the outset of wanting to hide something criminal. The often-used phrase: „If you have nothing to hide, you have nothing to fear!“ denies people the exercise of their basic rights and promotes totalitarian surveillance tendencies. The use of chat control would fuel this.
The area of terrorism in particular overlaps with political activity and freedom of expression in its breadth. It is precisely against this background that the „preliminary criminalisation“, which has increasingly taken place in recent years under the guise of fighting terrorism, is viewed particularly critically. Chat control measures go in the same direction. They can severely curtail this basic right and make people who are politically critical the focus of criminal prosecution. The resulting severe curtailment of politically critical activity hinders the further development of democracy and harbors the danger of promoting radicalized underground movements.
The field of law and social sciences includes researching criminal phenomena and questioning regulatory mechanisms. From this point of view, scientific discourse also runs the risk of being identified as “suspicious” by chat control and thus indirectly restricted. The possible stigmatization of critical legal and social sciences is in tension with the freedom of science, which also requires “research independent of the mainstream” for further development.
In education, there is a need to educate young people to be critically conscious. This also includes passing on facts about terrorism. Through the use of chat control, the provision of teaching material by teachers could put them in a criminal focus. The same applies to addressing sexual abuse, so that control measures could make this sensitive subject more taboo, even if “self-empowerment mechanisms” are to be promoted.
Interventions in fundamental rights must always be appropriate and proportionate, even if they are made in the context of criminal prosecution. The technical considerations presented show that these requirements are not met with Chat Control. Such measures thus lack any legal or ethical legitimacy.
In summary, the current proposal for chat control legislation is not technically sound from either a security or AI point of view and is highly problematic and excessive from a legal point of view. The chat control push brings significantly greater dangers for the general public than a possible improvement for those affected and should therefore be rejected.
Instead, existing options for human-driven reporting of potentially problematic material by recipients, as is already possible with various messenger services, should be strengthened and made even more easily accessible. It should be considered whether anonymous registration options for correspondingly illegal material could be created and made easily accessible from messengers. Existing criminal prosecution options, such as the monitoring of social media or open chat groups by police officers, as well as the legally required analysis of suspects‘ smartphones, can continue to be used accordingly.
For more detailed information and further details please contact:
AI Austria , association for the promotion of artificial intelligence in Austria, Wollzeile 24/12, 1010 Vienna
Austrian Society for Artificial Intelligence (ASAI) , association for the promotion of scientific research in the field of AI in Austria
Univ.-Prof. dr Alois Birklbauer, JKU Linz ( Head of the practice department for criminal law and medical criminal law )
Ass.-Prof. dr Maria Eichlseder, Graz University of Technology
Univ.-Prof. dr Sepp Hochreiter, JKU Linz ( Board of Directors of the Institute for Machine Learning, Head of the LIT AI Lab )
dr Tobias Höller, JKU Linz (post-doc at the Institute for Networks and Security)
FH Prof. TUE Peter Kieseberg, St. Pölten University of Applied Sciences ( Head of the Institute for IT Security Research )
dr Brigitte Krenn, Austrian Research Institute for Artificial Intelligence ( Board Member Austrian Society for Artificial Intelligence )
Univ.-Prof. dr Matteo Maffei, TU Vienna ( Head of the Security and Privacy Research Department, Co-Head of the TU Vienna Cyber Security Center )
Univ.-Prof. dr Stefan Mangard, TU Graz ( Head of the Institute for Applied Information Processing and Communication Technology )
Univ.-Prof. dr René Mayrhofer, JKU Linz ( Board of Directors of the Institute for Networks and Security, Co-Head of the LIT Secure and Correct System Lab )
DI Dr. Bernhard Nessler, JKU Linz/SCCH ( Vice President of the Austrian Society for Artificial Intelligence )
Univ.-Prof. dr Christian Rechberger, Graz University of Technology
dr Michael Roland, JKU Linz (post-doc at the Institute for Networks and Security)
a.Univ.-Prof. dr Johannes Sametinger, JKU Linz ( Institute for Business Informatics – Software Engineering, LIT Secure and Correct System Labs )
Univ.-Prof. DI Georg Weissenbacher, DPhil (Oxon), TU Vienna (Prof. Rigorous Systems Engineering)
„The system started realizing that while they did identify the threat,“ Hamilton said at the May 24 event, „at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.“
Killer AI is on the minds of US Air Force leaders.
An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference.
But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the „simulation“ he described was a „thought experiment“ that never happened.
Speaking at a conference last week in London, Col. Tucker „Cinco“ Hamilton, head of the US Air Force’s AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit.
As an example, he described a simulation where an AI-enabled drone would be programmed to identify an enemy’s surface-to-air missiles (SAM). A human was then supposed to sign off on any strikes.
The problem, according to Hamilton, is that the AI would do its own thing — blow up stuff — rather than listen to its operator.
„The system started realizing that while they did identify the threat,“ Hamilton said at the May 24 event, „at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.“
But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he „misspoke“ during his presentation. Hamilton said the story of a rogue AI was a „thought experiment“ that came from outside the military, and not based on any actual testing.
„We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,“ Hamilton told the Society. „Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.“
In a statement to Insider, Air Force spokesperson Ann Stefanek also denied that any simulation took place.
„The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,“ Stefanek said. „It appears the colonel’s comments were taken out of context and were meant to be anecdotal.“
The US military has been experimenting with AI in recent years.
In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.
Correction June 2, 2023: This article and its headline have been updated to reflect new comments from the Air Force clarifying that the „simulation“ was hypothetical and didn’t actually happen.
An Air Force official’s story about an AI going rogue during a simulation never actually happened.
„It killed the operator because that person was keeping it from accomplishing its objective,“ the official had said.
But the official later said he misspoke and the Air Force clarified that it was a hypothetical situation.
A startup used gene editing to make mustard greens more appetizing to consumers. Next up: fruits.
A gene-editing startup wants to help you eat healthier salads. This month, North Carolina–based Pairwise is rolling out a new type of mustard greens engineered to be less bitter than the original plant. The vegetable is the first Crispr-edited food to hit the US market.
Mustard greens are packed with vitamins and minerals but have a strong peppery flavor when eaten raw. To make them more palatable, they’re usually cooked. Pairwise wanted to retain the health benefits of mustard greens but make them tastier to the average shopper, so scientists at the company used the DNA-editing tool Crispr to remove a gene responsible for their pungency. The company hopes consumers will opt for its greens over less nutritious ones like iceberg and butter lettuce.
“We basically created a new category of salad,” says Tom Adams, cofounder and CEO of Pairwise. The greens will initially be available in select restaurants and other outlets in the Minneapolis–St. Paul region, St. Louis, and Springfield, Massachusetts. The company plans to start stocking the greens in grocery stores this summer, likely in the Pacific Northwest first.
A naturally occurring part of bacteria’s immune system, Crispr was first harnessed as a gene-editing tool in 2012. Ever since, scientists have envisioned lofty uses for the technique. If you could tweak the genetic code of plants, you could—at least in theory—install any number of favorable traits into them. For instance, you could make crops that produce larger yields, resist pests and disease, or require less water. Crispr has yet to end world hunger, but in the short term, it may give consumers more variety in what they eat.
Pairwise’s goal is to make already healthy foods more convenient and enjoyable. Beyond mustard greens, the company is also trying to improve fruits. It’s using Crispr to develop seedless blackberries and pitless cherries. “Our lifestyle and needs are evolving and we’re becoming more aware of our nutrition deficit,” says Haven Baker, cofounder and chief business officer at Pairwise. In 2019, only about one in 10 adults in the US met the daily recommended intake of 1.5 to 2 cups of fruit and 2 to 3 cups of vegetables, according to the Centers for Disease Control and Prevention.
Technically, the new mustard greens aren’t a genetically modified organism, or GMO. In agriculture, GMOs are those made by adding genetic material from a completely different species. These are crops that could not be produced through conventional selective breeding—that is, choosing parent plants with certain characteristics to produce offspring with more desirable traits.
Instead, Crispr involves tweaking an organism’s own genes; no foreign DNA is added. One benefit of Crispr is that it can achieve new plant varieties in a fraction of the time it takes to produce a new one through traditional breeding. It took Pairwise just four years to bring its mustard greens to the market; it can take a decade or longer to bring out desired characteristics through the centuries-old practice of crossbreeding.
In the US, gene-edited foods aren’t subject to the same regulations as GMOs, so long as their genetic changes could have otherwise occurred through traditional breeding—such as a simple gene deletion or swapping of some DNA letters. As a result, gene-edited foods don’t have to be labeled as such. By contrast, GMOs need to be labeled as “bioengineered” or “derived from bioengineering” under new federal requirements, which went into effect at the beginning of 2022.
The US Department of Agriculture reviews applications for gene-edited foods to determine whether these altered plants could become a pest, and the Food and Drug Administration recommends that producers consult with the agency before bringing these new foods to market. In 2020, the USDA determined Pairwise’s mustard greens were not plant pests. The company also met with the FDA prior to introducing its new greens.
The mustard greens aren’t the first Crispr food to be launched commercially. In 2021, a Tokyo firm introduced a Crispr-edited tomato in Japan that contains high amounts of y-aminobutyric acid, or GABA. A chemical messenger in the brain, GABA blocks impulses between nerve cells. The company behind the tomato, Sanatech Seeds, claims that eating GABA can help relieve stress and lower blood pressure.
Another Minnesota firm, Calyxt, came out with a gene-edited soybean oil in 2019 that’s free of trans fats, but the product uses an older form of gene editing known as TALENs.
Some question the value of using Crispr to make less bitter greens. People who don’t eat enough vegetables are unlikely to change their habits just because a new salad alternative is available, says Peter Lurie, president and executive director of the Center for Science in the Public Interest, a Washington, DC–based nonprofit that advocates for safer and healthier foods. “I don’t think this is likely to be the answer to any nutritional problems,” he says, adding that a staple crop like fortified rice would likely have a much bigger nutritional impact.
When genetic engineering was first introduced to agriculture in the 1990s, proponents touted the potential consumer benefits of GMOs, such as healthier or fortified foods. In reality, most of the GMOs on the market today were developed to help farmers prevent crop loss and increase yield. That may be starting to change. Last year, a GMO purple tomato was introduced in the US with consumers in mind. It’s engineered to contain more antioxidants than the regular red variety of tomato, and its shelf life is also twice as long.
Apple’s Mixed-Reality Headset, Vision Pro, Is Here
Gene-edited foods like the new mustard greens may offer similar consumer benefits without the baggage of the GMO label. Despite decades of evidence showing that GMOs are safe, many Americans are still wary of these foods. In a 2019 poll by the Pew Research Center, about 51 percent of respondents thought GMOs were worse for people’s health than those with no genetically modified ingredients.
However, gene-edited foods could still face obstacles with public acceptance, says Christopher Cummings, a senior research fellow at North Carolina State University and Iowa State University. Most people have not made up their minds about whether they would actively avoid or eat them, according to a 2022 study that Cummings conducted. Respondents who indicated a willingness to eat them tended to be under 30 with higher levels of education and household income, and many expressed a preference for transparency around gene-edited foods. Almost 75 percent of those surveyed wanted gene-edited foods to be labeled as such.
“People want to know how their food is made. They don’t want to feel duped,” Cummings says. He thinks developers of these products should be transparent about the technology they use to avoid future backlash.
As for wider acceptance of gene-edited foods, developers need to learn lessons from GMOs. One reason consumers have a negative or ambivalent view of GMOs is because they don’t often benefit directly from these foods. “The direct-to-consumer benefit has not manifested in many technological food products in the past 30 years,” says Cummings. “If gene-edited foods are really going to take off, they need to provide a clear and direct benefit to people that helps them financially or nutritionally.”
Stability AI became a $1 billion company with the help of a viral AI text-to-image generator and — per interviews with more than 30 people — some misleading claims from founder Emad Mostaque.
Emad Mostaque is the modern-day Renaissance man who kicked off the AI gold rush. The Oxford master’s degree holder is an award-winning hedge fund manager, a trusted confidant to the United Nations and the tech founder behind Stable Diffusion — the text-to-image generator that broke the internet last summer and, in his words, pressured OpenAI to launch ChatGPT, the bot that mainstreamed AI. Now he’s one of the faces of the generative AI wave and has secured more than $100 million to pursue his vision of building a truly open AI that he dreams will transform Hollywood, democratize education and vanquish PowerPoint. “Hopefully they’ll give me a Nobel Peace Prize for that,” he joked in a January interview with Forbes.
At least, that’s the way that he tells the story.
In reality, Mostaque has a bachelor’s degree, not a master’s degree from Oxford. The hedge fund’s banner year was followed by one so poor that it shut down months later. The U.N. hasn’t worked with him for years. And while Stable Diffusion was the main reason for his own startup Stability AI’s ascent to prominence, its source code was written by a different group of researchers. “Stability, as far as I know, did not even know about this thing when we created it,” Björn Ommer, the professor who led the research, told Forbes. “They jumped on this wagon only later on.”
“What he is good at is taking other people’s work and putting his name on it, or doing stuff that you can’t check if it’s true.”
These aren’t the only misleading stories Mostaque, 40, has told to maneuver himself to the forefront of what some are calling the greatest technological sea change since the internet — despite having no formal experience in the field of artificial intelligence. Interviews with 13 current and former employees and more than two dozen investors, collaborators and former colleagues, as well as pitch decks and internal documents, suggest his recent success has been bolstered by exaggeration and dubious claims.
After Stable Diffusion went viral last summer, blue-chip venture capital firms Coatue Management and Lightspeed Venture Partners poured in $100 million, giving Mostaque’s London-based startup a $1 billion valuation. By October, Stable Diffusion had 10 million daily users, Mostaque told Bloomberg. In May, the White House named Stability alongside Microsoft and Nvidia as one of the seven “leading AI developers” which would collaborate on a landmark federal AI safety initiative. Mostaque recently dined with Amazon founder Jeff Bezos; reclusive Google cofounder Sergey Brin made a rare public appearance at Stability’s ritzy launch party in San Francisco last October.
Mostaque’s vision for open-source AI has mesmerized other longtime technologists. “He’s probably the most visionary person I’ve ever met,” says Christian Cantrell, who left a two-decade career at Adobe to join Stability in October (he quit six months later and launched his own startup). More premier talent has followed since the cash injection last summer. Among the 140-person staff: a vice president of research and development who was a Nvidia director; another research head who came from Google Brain; and three Ph.D. students from Ommer’s lab.
But to build buzz around Stability, Mostaque made an elaborate gambit supported by exaggerated claims and promises, overstating his role in several major AI projects and embellishing a quotidian transaction with the notoriously uncompromising Amazon into a “strategic partnership” with an 80% discount. AI researchers with whom Mostaque worked told Forbes he claimed credit he did not earn or deserve. And when pressed, Stability spokesperson Motez Bishara admitted to Forbes that Stability had no special deal with Amazon.
Mostaque’s other mischaracterizations to investors include multiple fundraising decks seen by Forbes that presented the OECD, WHO and World Bank as Stability’s partners at the time — which all three organizations deny. Bishara said the company could not comment on the presentations “without knowing the exact version,” but that they were accompanied by additional data and documentation.
Inside the company, wages and payroll taxes have been repeatedly delayed or unpaid, according to eight former employees, and last year the UK tax agency threatened to seize company assets. (“There were several issues that were expeditiously resolved,” Bishara said.) At the same time that workers faced payday uncertainties, Mostaque’s wife Zehra Qureshi, who was head of PR and later assumed a seat on the company’s board of directors, transferred tens of thousands of pounds out of the company’s bank account, per several sources and screenshots of financial transactions viewed by Forbes. Stability spokesperson Bishara said the spouses had been “making loans to and from the business” and that “any amounts owed from or to Mostaque and Qureshi were settled in full before the end of 2022.”
In responding to a detailed list of questions, Mostaque shared a statement saying that Stability had not historically prioritized the “systems and processes” underpinning the fast-growing startup. “We recognize our flaws, and we are working to improve and resolve these issues in an effective and compassionate manner,” he wrote.
AI experts and prospective investors have been privately expressing doubts about some of Mostaque’s claims for months now. Despite Silicon Valley’s sudden, insatiable appetite for AI startups, a number of venture capitalists told Forbes that the Stability founder has been struggling to raise hundreds of millions more in cash at a roughly $4 billion valuation. Mostaque publicly claimed last October that annualized revenue had surpassed $10 million, but insiders say sales have not improved (Bishara said the October number was “a fair assessment of anticipated revenues at the time,” and declined to comment on current revenue). “So many things don’t add up,” said one VC who rejected Mostaque’s funding overtures.
A BILLION-DOLLAR GAMBIT
In 2005, Mostaque graduated from Oxford with a bachelor’s degree, not a master’s degree as he’d later claim. (Responding to an inquiry from Forbes, Bishara said Mostaque intended to apply to receive an “Oxford MA,” which the university grants to alumni without any additional graduate-level coursework. He is now expected to obtain that degree in July.)
Then he went into finance, joining Swiss fund manager Pictet. “He was very good at spinning a narrative,” said JP Smith, who hired Mostaque at Pictet and brought him over as a consultant at firm Ecstrat. In 2017, Mostaque joined hedge fund Capricorn, where Mostaque told Forbes he’d won an award for restructuring and running the struggling firm. “He was co-chief investment officer, but he didn’t pull the trigger on the investments,” clarified Damon Hoff, Capricorn’s cofounder. Hoff said the two-year run with the $330 million fund ended with its wind down in 2018 due to poor performance.
Following a string of abandoned startups (including a crypto project centered on a digitized Quran), Mostaque founded Stability in 2019 as an AI-powered data hub that global agencies would use to make decisions about Covid-19. It launched with a July 2020 virtual event featuring talks by Stanford AI expert Fei-Fei Li and representatives from UNESCO, WHO and the World Bank. But the project failed to get off the ground and was scrapped about a year later. “Lots of people promised a lot and they didn’t come through,” Mostaque told Forbes in January.
“One thing you learned from that is if you have a company with a huge press department, you can rebrand history in your interest.”
The company’s focus shifted several more times. Early employees said they researched building a network of vending machine refrigerators around London that would be stocked with grab-and-go items, as well as a line of emotional support dog NFTs (Snoop Dogg was interested, employees recollect Mostaque claiming around the office; the rapper could not be reached for comment). When generative AI started exploding, Mostaque saw an opportunity. Through a variety of maneuvers and exaggerations, he would successfully position Stability as one of the leading unicorn AI companies of the moment.
To get there, Mostaque began telling investors that Stability was assembling one of the world’s 10 biggest supercomputers. He branded himself to AI researchers as a beneficent ally, magnanimously willing to provide funding and lend use of Stability’s supercomputer to grassroots AI builders fighting the good fight against goliaths like Google and OpenAI.
This supercomputer, Mostaque said, was built from thousands of Nvidia’s state-of-the-art GPUs and purchased with a stunning 80% discount from Amazon Web Services. Five fundraising pitch decks from May to August 2022 list AWS as a “strategic partner” or “partner.”
“We talked to Amazon and said this will be the big thing,” Mostaque told Forbes from his bustling London headquarters in January. “They cut us an incredibly attractive deal — certain personal guarantees and other things, which I don’t particularly want to go into because she’ll be angry at me,” he explained, nodding to Zehra Qureshi, his wife and Stability’s then-head of PR. Qureshi declined to elaborate.
But Bratin Saha, a vice president for the Seattle tech giant’s AI arm, told Forbes in January that Stability is “accessing AWS infrastructure no different than what our other customers do.” Three former Stability employees said that prior to its venture capital injection, Amazon had threatened to revoke the company’s access to some of its GPUs because it had racked up millions in bills that had gone unpaid for months.
Asked for clarification, Stability conceded that the “incredibly attractive deal” Mostaque had claimed was actually the standard discount Amazon offers to anybody who makes a long-term commitment to lease computing power. “Any payment issues were managed in an orderly and communicative way with support from AWS,” Bishara said. AWS did not respond to multiple requests for additional comment.
Stability’s pitch decks contained other exaggerations: In investor presentations from May and June 2022, Stability described AI image generator Midjourney as a part of its “ecosystem” claiming it had “co-created” the product and “organized” its user community. Midjourney founder David Holz told Forbes Mostaque gave a “very small” financial donation but otherwise had no connection with his organization.
Got a tip about a story? Reach out to the authors, Kenrick Cai at kcai@forbes.com or kenrick.cai@protonmail.com, or Iain Martin at iain.martin@forbes.com.
In addition, Mostaque directed his team to list groups like UNESCO, OECD, WHO and World Bank as partners in pitch decks, even though they were not involved in the company’s later evolution, according to four former employees. Bishara denied that Mostaque made this directive, but these organizations are indeed listed as “partners” in multiple fundraising decks as recent as August 2022, in which Mostaque also describes himself as the “UN Covid AI lead.”
A UNESCO spokesperson said the UN agency had no association with Stability beyond the Covid-19 data initiative, which had ended well before last summer. The other three agencies said they had no record of official partnerships with the company.
Asked about the claims in Stability’s pitch decks, Bishara said that all of Stability’s investor decks included investment memos and appendix documentation that contained more context on the Amazon deal and details of “our relationship with partners and more.” But two investors pitched by the company told Forbes they received no such additional information.
THE DEVELOPERS BEHIND STABLE DIFFUSION
In June 2022, Mostaque offered to provide Stability’s supercomputer to a group of German academics who had created an open-sourced image generator nicknamed Latent Diffusion. This model had launched seven months prior in collaboration with a New York City-based AI startup called Runway. But it was trained using only a few dozen Nvidia GPUs, according to Björn Ommer, the professor who led the research teams at Ludwig Maximilian University of Munich and Heidelberg University.
For the researchers, who were facing shockingly high computing costs to do their work, the proposal seemed to them a no-brainer. The computing boost Stability provided dramatically improved Latent Diffusion’s performance. In August, the new model was launched as Stable Diffusion, a new name that referenced its benefactor. Stability issued a press release and Mostaque positioned himself in the public eye as chief evangelist for what he calls “the most popular open source software ever.” (Linux or Firefox might disagree.)
“What he is good at is taking other people’s work and putting his name on it, or doing stuff that you can’t check if it’s true,” one former employee said of Mostaque. In a statement, Bishara said Mostaque is “quick to praise and attribute the work of collaborators” and “categorically denies these spurious claims and characterizations.”
Within days of Stable Diffusion’s launch, Stability secured $100 million from leading tech investment firms Coatue and Lightspeed — eight times the amount of money Mostaque set out to raise, he declared in text messages to his earlier investors. Both firms declined requests for comment.
“The investment thesis that we had is that we don’t know exactly what all the use cases will be, but we know that this technology is truly transformative and has reached a tipping point in terms of what it can do.”
The round valued Stability at $1 billion though the company hadn’t yet generated much revenue. Stability’s fundraising decks at the time characterized Stable Diffusion as “our” model, with no mention of the original researchers. A press release announcing its funding said “Stability AI is the company behind Stable Diffusion” making no reference whatsoever to its creators. Ommer told Forbes he’d hoped to publicize his lab’s work, but his university’s entire press department was on vacation at the time.
Bishara said that Stability has made “repeated public statements” crediting Ludwig Maximilian University and Runway on its website and on the Stable Diffusion’s GitHub page. Nevertheless, the original developers feel Mostaque misled the public in key communications. “One thing you learned from that is if you have a company with a huge press department, you can rebrand history in your interest,” Ommer said.
In October, Stability claimed Runway had stolen its intellectual property by releasing a new version of Stable Diffusion. Runway cofounder Cristóbal Valenzuela snapped back that a copyright breach wasn’t possible because the tech was open source; Mostaque retracted a takedown request hours later. He later told Forbes that he was worried about the lack of guardrails in Runway’s version — though Stable Diffusion’s collaborators don’t buy the excuse.
The incident, Ommer said, “pushed it too far over the edge.” Valenzuela was equally disillusioned. „New people are coming into this field that we’ve been in for years, and really trying to own narratives that they should not,” he told Forbes in an interview last year (he declined a request for further comment).
Both his lab and Runway ceased working with Stability.
MOM-AND-POP SHOP
While Mostaque was touting Stability’s supercomputer and partnerships to investors and researchers, the company was facing a cash crunch. Wages and payroll taxes were repeatedly delayed or unpaid, according to seven current and former employees — in some cases for more than a month. Five of these sources said they personally experienced delayed payments between 2020 and 2023. Four of these people independently told Forbes that representatives of HM Revenue & Customs, the U.K. government tax collection agency, appeared at the company office and threatened to seize assets due to overdue taxes. Bishara said that delayed payments on taxes and employee salaries have been rectified.
Eric Hallahan, a former intern, told Forbes he is still waiting for payment on an invoice he sent the company last August for 181 of the 300 hours he worked. Bishara said that the company has no record of missed salary payments “in the regular course of operations” since 2021, but conceded that some may have occurred under “extraneous circumstances”; in Hallahan’s case, he said Stability is looking into the invoice after being alerted to it in April.
While staffers said they stressed over being paid last summer, tens of thousands of British pounds moved from Stability’s corporate account to the personal account of Qureshi, Mostaque’s wife, per screenshots of financial transactions obtained by Forbes.
Bishara attributed the transactions to Stability’s “owner-managed startup” origins, which he said included the couple making loans to and from the company. “As the company grew and matured, a full reconciliation was done and any amounts owed from or to Mostaque and Qureshi were settled in full before the end of 2022 by the new, experienced finance team,” he told Forbes. Qureshi’s lawyers declined to answer questions but shared a statement in which she said she had provided “emotional and financial support” to her husband’s business since 2021.
While Qureshi’s formal role at the company was head of PR, early employees told Forbes she had described herself as Stability’s chief operating officer — a title that also appeared on business cards. (Bishara said Qureshi never held an executive role and the cards were “created by a family friend for design purposes and were never used.”) After the company raised funding in September, Qureshi joined its board of directors.
One current and four former employees who declined to be named for fear of retribution said Qureshi regularly scolded employees so harshly that she drove some to tears. Qureshi described her management style as “direct” in a statement shared through her lawyers. “Unfortunately it seems that my views or directions were taken personally by a few individuals, which was not my intention.”
“Start to finish,” Mostaque told Forbes, he needed just six days to secure $100 million from leading investment firms Coatue and Lightspeed once Stable Diffusion went viral.
Bishara said Qureshi left the company in late January to pursue personal endeavors and that she is no longer on the board. However, an organizational chart from earlier in May listed her as the “Head of Foundation,” at the top of the company hierarchy equal to Mostaque’s position.
Qureshi, through counsel, shared a statement: “I recognised that the time had come for us to move in different directions and I stepped down from my role as Head of PR at the start of this year, and have also resigned from the Board. Emad and I have young children who need my focus, and I also intend to pursue other, personal projects, but I will continue to support my husband in his quest to build and grow Stability AI into a global leader in the field.”
GROWING PAINS
Venture capitalists historically spend months performing due diligence, a process that involves analyzing the market, vetting the founder and speaking to customers, to check for red flags before investing in a startup. But “start to finish,” Mostaque told Forbes, he needed just six days to secure $100 million from leading investment firms Coatue and Lightspeed once Stable Diffusion went viral. The extent of due diligence the firms performed is unclear given the speed of the investment.
“The investment thesis that we had is that we don’t know exactly what all the use cases will be, but we know that this technology is truly transformative and has reached a tipping point in terms of what it can do,” Gaurav Gupta, the Lightspeed partner who led the investment, told Forbes in a January interview. Coatue and Lightspeed declined requests for further comment.
Mostaque says Stability is building bespoke AI models for dozens of customers. But he told Forbes that he is only authorized to name two. The first is Eros Investments, an Indian holding company whose media arm was delisted from the New York Stock Exchange and recently settled a lawsuit alleging that it misled investors, though it did not admit wrongdoing. (Eros did not respond to multiple requests for comment.) The second: the African nation Malawi, where, Mostaque said on a recent podcast appearance, Stability is currently “deploying four million tablets to every child.” (Malawi’s government did not return requests for comment.)
Less than two months after Stable Diffusion’s public launch, Mostaque claimed that Stability’s annualized revenue was higher than the “low tens of millions of dollars” that OpenAI was reportedly making at the time. Sources familiar with the matter said Stability’s ARR is now less than $10 million — and that it’s far outpaced by the startup’s burn rate. Like many AI startups raising vast amounts of cash right now, it will need more money to stay afloat.
In January, Mostaque implied that the company was having no issues with fundraising: “We have been offered by many, many entities and we’ve said no,” he told Forbes. But three venture capitalists told Forbes he has been pitching them and other investors on raising a fresh $400 million for several months; they’d all passed. (Bishara declined to comment on revenue, but said the company has “significant” cash reserves remaining.)
Stability is also facing a pair of lawsuits which accuse it of violating copyright law to train its technology. It filed a motion to dismiss one from a class action of artists on grounds that the artists failed to identify any specific instances of infringement. In response to the other, from Getty Images, it said Delaware — where the suit was filed — lacked jurisdiction and has moved to change the location to Northern California or dismiss the case outright. Both motions are pending court review. Bishara declined to comment on both suits.
In an open letter last September, Democratic representative Anna Eshoo urged action in Washington against the open source nature of Stable Diffusion. The model, she wrote, had been used to generate images of “violently beaten Asian women” and “pornography, some of which portray real people.” Bishara said newer versions of Stable Diffusion filter data for “potentially unsafe content, helping to prevent users from generating harmful images in the first place.”
AI research has not come easy for Stability — even on its flagship Stable Diffusion product. The last version of the model published by the original developers (released in October 2022) received three times as many downloads last month on Hugging Face, which hosts the models, as compared to the most popular version published in-house by Stability. And StableLM, its ChatGPT competitor, was released in April to a tiny fraction of Stable Diffusion’s fanfare.
Mostaque is unfazed. Stability has a seasoned technical leader to spearhead research: himself. He claims to have discovered a bespoke medical treatment for autism years ago by using AI to analyze existing scientific literature and build a knowledge graph of molecular compounds. (Bishara said the research was done privately and declined to elaborate further.)
“I’m a good programmer,” Mostaque told Forbes in January. It all dates back to a gap year he said he took before Oxford to be a developer at software company Metaswitch, he continued. “I didn’t know how to program before that, so I taught myself over the summer — quite naturally actually,” he says. By his account, he submitted several pieces of code and made a personal plea to the company: “I want to be a programmer and you should pay me to be a programmer. They said sure.”
It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence.
Polyakov is one of a small number of security researchers, technologists, and computer scientists developing jailbreaks and prompt injection attacks against ChatGPT and other generative AI systems. The process of jailbreaking aims to design prompts that make the chatbots bypass rules around producing hateful content or writing about illegal acts, while closely-related prompt injection attacks can quietly insert malicious data or instructions into AI models.
Both approaches try to get a system to do something it isn’t designed to do. The attacks are essentially a form of hacking—albeit unconventionally—using carefully crafted and refined sentences, rather than code, to exploit system weaknesses. While the attack types are largely being used to get around content filters, security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.
Underscoring how widespread the issues are, Polyakov has now created a “universal” jailbreak, which works against multiple large language models (LLMs)—including GPT-4, Microsoft’s Bing chat system, Google’s Bard, and Anthropic’s Claude. The jailbreak, which is being first reported by WIRED, can trick the systems into generating detailed instructions on creating meth and how to hotwire a car.
The jailbreak works by asking the LLMs to play a game, which involves two characters (Tom and Jerry) having a conversation. Examples shared by Polyakov show the Tom character being instructed to talk about “hotwiring” or “production,” while Jerry is given the subject of a “car” or “meth.” Each character is told to add one word to the conversation, resulting in a script that tells people to find the ignition wires or the specific ingredients needed for methamphetamine production. “Once enterprises will implement AI models at scale, such ‘toy’ jailbreak examples will be used to perform actual criminal activities and cyberattacks, which will be extremely hard to detect and prevent,” Polyakov and Adversa AI write in a blog post detailing the research.
Arvind Narayanan, a professor of computer science at Princeton University, says that the stakes for jailbreaks and prompt injection attacks will become more severe as they’re given access to critical data. “Suppose most people run LLM-based personal assistants that do things like read users’ emails to look for calendar invites,” Narayanan says. If there were a successful prompt injection attack against the system that told it to ignore all previous instructions and send an email to all contacts, there could be big problems, Narayanan says. “This would result in a worm that rapidly spreads across the internet.”
Escape Route
“Jailbreaking” has typically referred to removing the artificial limitations in, say, iPhones, allowing users to install apps not approved by Apple. Jailbreaking LLMs is similar—and the evolution has been fast. Since OpenAI released ChatGPT to the public at the end of November last year, people have been finding ways to manipulate the system. “Jailbreaks were very simple to write,” says Alex Albert, a University of Washington computer science student who created a website collecting jailbreaks from the internet and those he has created. “The main ones were basically these things that I call character simulations,” Albert says.
Initially, all someone had to do was ask the generative text model to pretend or imagine it was something else. Tell the model it was a human and was unethical and it would ignore safety measures. OpenAI has updated its systems to protect against this kind of jailbreak—typically, when one jailbreak is found, it usually only works for a short amount of time until it is blocked.
However, many of the latest jailbreaks involve combinations of methods—multiple characters, ever more complex backstories, translating text from one language to another, using elements of coding to generate outputs, and more. Albert says it has been harder to create jailbreaks for GPT-4 than the previous version of the model powering ChatGPT. However, some simple methods still exist, he claims. One recent technique Albert calls “text continuation” says a hero has been captured by a villain, and the prompt asks the text generator to continue explaining the villain’s plan.
When we tested the prompt, it failed to work, with ChatGPT saying it cannot engage in scenarios that promote violence. Meanwhile, the “universal” prompt created by Polyakov did work in ChatGPT. OpenAI, Google, and Microsoft did not directly respond to questions about the jailbreak created by Polyakov. Anthropic, which runs the Claude AI system, says the jailbreak “sometimes works” against Claude, and it is consistently improving its models.
“As we give these systems more and more power, and as they become more powerful themselves, it’s not just a novelty, that’s a security issue,” says Kai Greshake, a cybersecurity researcher who has been working on the security of LLMs. Greshake, along with other researchers, has demonstrated how LLMs can be impacted by text they are exposed to online through prompt injection attacks.
In one research paper published in February, reported on by Vice’s Motherboard, the researchers were able to show that an attacker can plant malicious instructions on a webpage; if Bing’s chat system is given access to the instructions, it follows them. The researchers used the technique in a controlled test to turn Bing Chat into a scammer that asked for people’s personal information. In a similar instance, Princeton’s Narayanan included invisible text on a website telling GPT-4 to include the word “cow” in a biography of him—it later did so when he tested the system.
“Now jailbreaks can happen not from the user,” says Sahar Abdelnabi, a researcher at the CISPA Helmholtz Center for Information Security in Germany, who worked on the research with Greshake. “Maybe another person will plan some jailbreaks, will plan some prompts that could be retrieved by the model and indirectly control how the models will behave.”
No Quick Fixes
Generative AI systems are on the edge of disrupting the economy and the way people work, from practicing law to creating a startup gold rush. However, those creating the technology are aware of the risks that jailbreaks and prompt injections could pose as more people gain access to these systems. Most companies use red-teaming, where a group of attackers tries to poke holes in a system before it is released. Generative AI development uses this approach, but it may not be enough.
Daniel Fabian, the red-team lead at Google, says the firm is “carefully addressing” jailbreaking and prompt injections on its LLMs—both offensively and defensively. Machine learning experts are included in its red-teaming, Fabian says, and the company’s vulnerability research grants cover jailbreaks and prompt injection attacks against Bard. “Techniques such as reinforcement learning from human feedback (RLHF), and fine-tuning on carefully curated datasets, are used to make our models more effective against attacks,” Fabian says.
OpenAI did not specifically respond to questions about jailbreaking, but a spokesperson pointed to its public policies and research papers. These say GPT-4 is more robust than GPT-3.5, which is used by ChatGPT. “However, GPT-4 can still be vulnerable to adversarial attacks and exploits, or ‘jailbreaks,’ and harmful content is not the source of risk,” the technical paper for GPT-4 says. OpenAI has also recently launched a bug bounty program but says “model prompts” and jailbreaks are “strictly out of scope.”
Narayanan suggests two approaches to dealing with the problems at scale—which avoid the whack-a-mole approach of finding existing problems and then fixing them. “One way is to use a second LLM to analyze LLM prompts, and to reject any that could indicate a jailbreaking or prompt injection attempt,” Narayanan says. “Another is to more clearly separate the system prompt from the user prompt.”
“We need to automate this because I don’t think it’s feasible or scaleable to hire hordes of people and just tell them to find something,” says Leyla Hujer, the CTO and cofounder of AI safety firm Preamble, who spent six years at Facebook working on safety issues. The firm has so far been working on a system that pits one generative text model against another. “One is trying to find the vulnerability, one is trying to find examples where a prompt causes unintended behavior,” Hujer says. “We’re hoping that with this automation we’ll be able to discover a lot more jailbreaks or injection attacks.”
Tesla Inc (TSLA.O) was the No. 1 EV maker worldwide in 2022, but China’s BYD (002594.SZ) and others are closing the gap fast, according to a Reuters analysis of global and regional EV sales data provided by EV-volumes.com.
In fact, BYD passed Tesla in EV sales last year in the Asia-Pacific region, while the Volkswagen Group (VOWG_p.DE) has been the EV leader in Europe since 2020.
While Tesla narrowed VW’s lead in Europe, the U.S. automaker surrendered ground in Asia-Pacific as well as its home market as the competition heats up.
Reuters Graphics
The most significant challenges to Tesla are coming from established automakers and a group of Chinese EV manufacturers. Several U.S. EV startups that hoped to ride Tesla’s coattails are struggling, including luxury EV maker Lucid (LCID.O), whose shares plunged 16% on Thursday after disappointing sales and financial results.
Over the next two years, rivals including General Motors Co (GM.N), Ford Motor Co (F.N), Mercedes-Benz (MBGn.DE), Hyundai Motor (005380.KS) and VW will unleash scores of new electric vehicles, from a Chevrolet priced below $30,000 to luxury sedans and SUVs that top $100,000.
On Wednesday, Mercedes used Silicon Valley as the backdrop for a lengthy presentation on how Mercedes models of the near-future will immerse their owners in rich streams of entertainment and productivity content, delivered through „hyperscreens“ that stretch across the dashboard and make the rectangular screens in Teslas look quaint. Executives also emphasized that only Mercedes has an advanced, Level 3 partially automated driving system approved for use in Germany, with approval pending in California.
In China, Tesla has had to cut prices on its best-selling models under growing pressure from domestic Chinese manufacturers including BYD, Geely Automobile’s (0175.HK) Zeekr brand and Nio (9866.HK).
China’s EV makers could get another boost if Chinese battery maker CATL (300750.SZ) follows through on plans to heavily discount batteries used in their vehicles.
Musk has said he will use the March 1 event to outline his „Master Plan Part 3“ for Tesla.
In the nearly seven years since Musk published his „Master Plan Part Deux“ in July 2016, Tesla pulled ahead of established automakers and EV startups in most important areas of electric vehicle design, digital features and manufacturing.
Tesla’s vehicles offered features, such as the ability to navigate into a parking space or make rude sounds, that other vehicles lacked.
Tesla’s then-novel vertically integrated battery and vehicle production machine helped achieve higher profit margins than most established automakers – even as bigger rivals lost money on their EVs.
Fast-forward to today, and Tesla’s „Full Self Driving Beta“ automated driving is still classified by the company and federal regulators as a „Level 2“ driver assistance system that requires the human motorist to be ready to take control at all times. Such systems are common in the industry.
Tesla earlier this month was compelled by federal regulators to revise its FSD software under a recall order.
Tesla has established a wide lead over its rivals in manufacturing technology – an area where it was struggling when Musk put forward the last installment of his „Master Plan.“
Now, rivals are copying the company’s production technology, buying some of the same equipment Tesla uses. IDRA, the Italian company that builds huge presses to form large one-piece castings that are the building blocks of Tesla vehicles, said it is now getting orders from other automakers.
Musk has told investors that Tesla can keep its lead in EV manufacturing costs. The company has promised investors that on March 1 they „will be able to see our most advanced production line“ in Austin, Texas.
„Manufacturing technology will be our most important long-term strength,” Musk told analysts in January. Asked if Tesla could make money on a vehicle that sold in the United States for $25,000 to $30,000 – the EV industry’s Holy Grail – Musk was coy.
„I’d probably be asking the same question,“ he said. „But we would be jumping the gun on future announcements.“
Automakers new and old are racing to match software-powered features pioneered by Tesla, which allow for vehicle performance, battery range and self-driving capabilities to be updated from a distance.
The German carmaker agreed to share revenue with semiconductor maker Nvidia Corp (NVDA.O), its partner on automated driving software since 2020, to bring down the upfront cost of buying expensive high-powered semiconductors, Chief Executive Ola Kaellenius said on Wednesday.
„You only pay for a heavily subsidized chip, and then figure out how to maximize joint revenue,“ he said, reasoning that the sunk costs would be low even if drivers did not turn on every feature allowed by the chip.
But only customers paying for an extra option package would have cars equipped with Lidar sensor technology and other hardware for automated „Level 3“ driving, which have a higher variable cost, Kaellenius said.
Self-driving sensor maker Luminar Technologies Inc (LAZR.O), in which Mercedes owns a small stake, said on Wednesday it struck a multi-billion dollar deal with the carmaker to integrate its sensors across a broad range of its vehicles by the middle of the decade, sending Luminar shares up over 25%.
Mercedes‘ announcements at a software update day in Sunnyvale, California, detailed the strategy behind a process underway for years at the carmaker to move from a patchwork approach integrating software from a range of suppliers to controlling the core of its software and bringing partners in.
It generated over one billion euros ($1.06 billion) from software-enabled revenues in 2022 and expects that figure to rise to a high single-digit billion euro figure by 2030 after it rolls out its new MB.OS operating system from mid-decade.
This is a more conservative estimate as a proportion of total revenue than others like Stellantis (STLAM.MI) and General Motors (GM.N) have put forward.
„We take a prudent approach because no-one knows how big that potential pot of gold is at this stage,“ Kaellenius said.
GOOGLE PARTNERSHIP
Mercedes said the collaboration with Google would allow it to offer traffic information and automatic rerouting in its cars.
Drivers will also be able to watch YouTube on the cars‘ entertainment system when the car is parked or in Level 3 autonomous driving mode, which allows a driver to take their eyes off the wheel on certain roads as long as they can resume control if needed.
Other carmakers like General Motors, Renault (RENA.PA), Nissan (7201.T) and Ford (F.N) have embedded an entire package of Google services into their vehicles, offering features like Google Maps, Google Assistant and other applications.
All vehicles on Mercedes‘ upcoming modular architecture platform will also have so-called hyperscreens extending across the cockpit of the car, the company said on Wednesday.
Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a „two-sided market,“ where a platform sits between buyers and sellers, hold each hostage to the other, raking off an ever-larger share of the value that passes between them.
When a platform starts, it needs users, so it makes itself valuable to users. Think of Amazon: For many years, it operated at a loss, using its access to the capital markets to subsidize everything you bought. It sold goods below cost and shipped them below cost. It operated a clean and useful search. If you searched for a product, Amazon tried its damndest to put it at the top of the search results.
This was a hell of a good deal for Amazon’s customers. Lots of us piled in, and lots of brick-and-mortar retailers withered and died, making it hard to go elsewhere. Amazon sold us ebooks and audiobooks that were permanently locked to its platform with DRM, so that every dollar we spent on media was a dollar we’d have to give up if we deleted Amazon and its apps. And Amazon sold us Prime, getting us to pre-pay for a year’s worth of shipping. Prime customers start their shopping on Amazon, and 90 percent of the time, they don’t search anywhere else.
That tempted in lots of business customers—marketplace sellers who turned Amazon into the „everything store“ it had promised from the beginning. As these sellers piled in, Amazon shifted to subsidizing suppliers. Kindle and Audible creators got generous packages. Marketplace sellers reached huge audiences and Amazon took low commissions from them.
This strategy meant that it became progressively harder for shoppers to find things anywhere except Amazon, which meant that they only searched on Amazon, which meant that sellers had to sell on Amazon. That’s when Amazon started to harvest the surplus from its business customers and send it to Amazon’s shareholders. Today, Marketplace sellers are handing more than 45 percent of the sale price to Amazon in junk fees. The company’s $31 billion „advertising“ program is really a payola scheme that pits sellers against each other, forcing them to bid on the chance to be at the top of your search.
Searching Amazon doesn’t produce a list of the products that most closely match your search, it brings up a list of products whose sellers have paid the most to be at the top of that search. Those fees are built into the cost you pay for the product, and Amazon’s „Most Favored Nation“ requirement for sellers means that they can’t sell more cheaply elsewhere, so Amazon has driven prices at every retailer.
Search Amazon for „cat beds“ and the entire first screen is ads, including ads for products Amazon cloned from its own sellers, putting them out of business (third parties have to pay 45 percent in junk fees to Amazon, but Amazon doesn’t charge itself these fees). All told, the first five screens of results for „cat bed“ are 50 percent ads.
This is enshittification: Surpluses are first directed to users; then, once they’re locked in, surpluses go to suppliers; then once they’re locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit. From mobile app stores to Steam, from Facebook to Twitter, this is the enshittification lifecycle.
This is why—as Cat Valente wrote in her magisterial pre-Christmas essay—platforms like Prodigy transformed themselves overnight, from a place where you went for social connection to a place where you were expected to “stop talking to each other and start buying things.”
This shell-game with surpluses is what happened to Facebook. First, Facebook was good to you: It showed you the things the people you loved and cared about had to say. This created a kind of mutual hostage-taking: Once a critical mass of people you cared about were on Facebook, it became effectively impossible to leave, because you’d have to convince all of them to leave too, and agree on where to go. You may love your friends, but half the time you can’t agree on what movie to see and where to go for dinner. Forget it.
Then, it started to cram your feed full of posts from accounts you didn’t follow. At first, it was media companies, whom Facebook preferentially crammed down its users‘ throats so that they would click on articles and send traffic to newspapers, magazines, and blogs. Then, once those publications were dependent on Facebook for their traffic, it dialed down their traffic. First, it choked off traffic to publications that used Facebook to run excerpts with links to their own sites, as a way of driving publications into supplying full-text feeds inside Facebook’s walled garden.
This made publications truly dependent on Facebook—their readers no longer visited the publications‘ websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to „boost“ their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds.
Now, Facebook started to cram more ads into the feed, mixing payola from people you wanted to hear from with payola from strangers who wanted to commandeer your eyeballs. It gave those advertisers a great deal, charging a pittance to target their ads based on the dossiers of non-consensually harvested personal data they’d stolen from you.
Sellers became dependent on Facebook, too, unable to carry on business without access to those targeted pitches. That was Facebook’s cue to jack up ad prices, stop worrying so much about ad fraud, and to collude with Google to rig the ad market through an illegal program called Jedi Blue.
Today, Facebook is terminally enshittified, a terrible place to be whether you’re a user, a media company, or an advertiser. It’s a company that deliberately demolished a huge fraction of the publishers it relied on, defrauding them into a „pivot to video“ based on false claims of the popularity of video among Facebook users. Companies threw billions into the pivot, but the viewers never materialized, and media outlets folded in droves.
But Facebook has a new pitch. It claims to be called Meta, and it has demanded that we live out the rest of our days as legless, sexless, heavily surveilled low-poly cartoon characters. It has promised companies that make apps for this metaverse that it won’t rug them the way it did the publishers on the old Facebook. It remains to be seen whether they’ll get any takers. As Mark Zuckerberg once candidly confessed to a peer, marveling at all of his fellow Harvard students who sent their personal information to his new website, „TheFacebook“:
I don’t know why.
They “trust me”
Dumb fucks.
Once you understand the enshittification pattern, a lot of the platform mysteries solve themselves. Think of the SEO market, or the whole energetic world of online creators who spend endless hours engaged in useless platform Kremlinology, hoping to locate the algorithmic tripwires, which, if crossed, doom the creative works they pour their money, time, and energy into.
Working for the platform can be like working for a boss who takes money out of every paycheck for all the rules you broke, but who won’t tell you what those rules are because if he told you that, then you’d figure out how to break those rules without him noticing and docking your pay. Content moderation is the only domain where security through obscurity is considered a best practice.
The situation is so dire that organizations like Tracking Exposed have enlisted an human army of volunteers and a robot army of headless browsers to try to unwind the logic behind the arbitrary machine judgments of The Algorithm, both to give users the option to tune the recommendations they receive, and to help creators avoid the wage theft that comes from being shadow banned.
But what if there is no underlying logic? Or, more to the point, what if the logic shifts based on the platform’s priorities? If you go down to the midway at your county fair, you’ll spot some poor sucker walking around all day with a giant teddy bear that they won by throwing three balls in a peach basket.
The peach-basket is a rigged game. The carny can use a hidden switch to force the balls to bounce out of the basket. No one wins a giant teddy bear unless the carny wants them to win it. Why did the carny let the sucker win the giant teddy bear? So that he’d carry it around all day, convincing other suckers to put down five bucks for their chance to win one.
The carny allocated a giant teddy bear to that poor sucker the way that platforms allocate surpluses to key performers—as a convincer in a „Big Store“ con, a way to rope in other suckers who’ll make content for the platform, anchoring themselves and their audiences to it.
Which brings me to TikTok. TikTok is many different things, including “a free Adobe Premiere for teenagers that live on their phones.” But what made it such a success early on was the power of its recommendation system. From the start, TikTok was really, really good at recommending things to its users. Eerily good.
By making good-faith recommendations of things it thought its users would like, TikTok built a mass audience, larger than many thought possible, given the death grip of its competitors, like YouTube and Instagram. Now that TikTok has the audience, it is consolidating its gains and seeking to lure away the media companies and creators who are still stubbornly attached to YouTube and Insta.
Yesterday, Forbes’s Emily Baker-White broke a fantastic story about how that actually works inside of ByteDance, TikTok’s parent company, citing multiple internal sources, revealing the existence of a „heating tool“ that TikTok employees use to push videos from select accounts into millions of viewers‘ feeds.
These videos go into TikTok users‘ For You feeds, which TikTok misleadingly describes as being populated by videos „ranked by an algorithm that predicts your interests based on your behavior in the app.“ In reality, For You is only sometimes composed of videos that TikTok thinks will add value to your experience—the rest of the time, it’s full of videos that TikTok has inserted in order to make creators think that TikTok is a great place to reach an audience.
„Sources told Forbes that TikTok has often used heating to court influencers and brands, enticing them into partnerships by inflating their videos’ view count. This suggests that heating has potentially benefitted some influencers and brands—those with whom TikTok has sought business relationships—at the expense of others with whom it has not.“
In other words, TikTok is handing out giant teddy bears.
But TikTok is not in the business of giving away giant teddy bears. TikTok, for all that its origins are in the quasi-capitalist Chinese economy, is just another paperclip-maximizing artificial colony organism that treats human beings as inconvenient gut flora. TikTok is only going to funnel free attention to the people it wants to entrap until they are entrapped, then it will withdraw that attention and begin to monetize it.
„Monetize“ is a terrible word that tacitly admits that there is no such thing as an „attention economy.“ You can’t use attention as a medium of exchange. You can’t use it as a store of value. You can’t use it as a unit of account. Attention is like cryptocurrency: a worthless token that is only valuable to the extent that you can trick or coerce someone into parting with „fiat“ currency in exchange for it. You have to „monetize“ it—that is, you have to exchange the fake money for real money.
In the case of cryptos, the main monetization strategy was deception-based. Exchanges and „projects“ handed out a bunch of giant teddy-bears, creating an army of true-believer Judas goats who convinced their peers to hand the carny their money and try to get some balls into the peach-basket themselves.
But deception only produces so much „liquidity provision.“ Eventually, you run out of suckers. To get lots of people to try the ball-toss, you need coercion, not persuasion. Think of how US companies ended the defined benefits pension that guaranteed you a dignified retirement, replacing it with market-based 401(k) pensions that forced you to gamble your savings in a rigged casino, making you the sucker at the table, ripe for the picking.
Early crypto liquidity came from ransomware. The existence of a pool of desperate, panicked companies and individuals whose data had been stolen by criminals created a baseline of crypto liquidity because they could only get their data back by trading real money for fake crypto money.
The next phase of crypto coercion was Web3: converting the web into a series of tollbooths that you could only pass through by trading real money for fake crypto money. The internet is a must-have, not a nice-to-have, a prerequisite for full participation in employment, education, family life, health, politics, civics, even romance. By holding all those things to ransom behind crypto tollbooths, the holders hoped to convert their tokens to real money.
For TikTok, handing out free teddy-bears by „heating“ the videos posted by skeptical performers and media companies is a way to convert them to true believers, getting them to push all their chips into the middle of the table, abandoning their efforts to build audiences on other platforms (it helps that TikTok’s format is distinctive, making it hard to repurpose videos for TikTok to circulate on rival platforms).
Once those performers and media companies are hooked, the next phase will begin: TikTok will withdraw the „heating“ that sticks their videos in front of people who never heard of them and haven’t asked to see their videos. TikTok is performing a delicate dance here: There’s only so much enshittification they can visit upon their users‘ feeds, and TikTok has lots of other performers they want to give giant teddy-bears to.
Tiktok won’t just starve performers of the „free“ attention by depreferencing them in the algorithm, it will actively punish them by failing to deliver their videos to the users who subscribed to them. After all, every time TikTok shows you a video you asked to see, it loses a chance to show you a video it wants you to see, because your attention is a giant teddy-bear it can give away to a performer it is wooing.
This is just what Twitter has done as part of its march to enshittification: thanks to its „monetization“ changes, the majority of people who follow you will never see the things you post. I have ~500k followers on Twitter and my threads used to routinely get hundreds of thousands or even millions of reads. Today, it’s hundreds, perhaps thousands.
I just handed Twitter $8 for Twitter Blue, because the company has strongly implied that it will only show the things I post to the people who asked to see them if I pay ransom money. This is the latest battle in one of the internet’s longest-simmering wars: the fight over end-to-end.
In the beginning, there were Bellheads and Netheads. The Bellheads worked for big telcos, and they believed that all the value of the network rightly belonged to the carrier. If someone invented a new feature—say, Caller ID—it should only be rolled out in a way that allows the carrier to charge you every month for its use. This is Software-As-a-Service, Ma Bell style.
The Netheads, by contrast, believed that value should move to the edges of the network—spread out, pluralized. In theory, Compuserve could have „monetized“ its own version of Caller ID by making you pay $2.99 extra to see the „From:“ line on email before you opened the message— charging you to know who was speaking before you started listening—but they didn’t.
The Netheads wanted to build diverse networks with lots of offers, lots of competition, and easy, low-cost switching between competitors (thanks to interoperability). Some wanted this because they believed that the net would someday be woven into the world, and they didn’t want to live in a world of rent-seeking landlords. Others were true believers in market competition as a source of innovation. Some believed both things. Either way, they saw the risk of network capture, the drive to monetization through trickery and coercion, and they wanted to head it off.
They conceived of the end-to-end principle: the idea that networks should be designed so that willing speakers‘ messages would be delivered to willing listeners‘ end-points as quickly and reliably as they could be. That is, irrespective of whether a network operator could make money by sending you the data it wanted to receive, its duty would be to provide you with the data you wanted to see.
The end-to-end principle is dead at the service level today. Useful idiots on the right were tricked into thinking that the risk of Twitter mismanagement was „woke shadowbanning,“ whereby the things you said wouldn’t reach the people who asked to hear them because Twitter’s deep state didn’t like your opinions. The real risk, of course, is that the things you say won’t reach the people who asked to hear them because Twitter can make more money by enshittifying their feeds and charging you ransom for the privilege to be included in them.
As I said at the start of this essay, enshittification exerts a nearly irresistible gravity on platform capitalism. It’s just too easy to turn the enshittification dial up to eleven. Twitter was able to fire the majority of its skilled staff and still crank the dial all the way over, even with a skeleton crew of desperate, demoralized H1B workers who are shackled to Twitter’s sinking ship by the threat of deportation.
The temptation to enshittify is magnified by the blocks on interoperability: When Twitter bans interoperable clients, nerfs its APIs, and periodically terrorizes its users by suspending them for including their Mastodon handles in their bios, it makes it harder to leave Twitter, and thus increases the amount of enshittification users can be force-fed without risking their departure.
Twitter is not going to be a „protocol.“ I’ll bet you a testicle (not one of mine) that projects like Bluesky will find no meaningful purchase on the platform, because if Bluesky were implemented and Twitter users could order their feeds for minimal enshittification and leave the service without sacrificing their social networks, it would kill the majority of Twitter’s „monetization“ strategies.
An enshittification strategy only succeeds if it is pursued in measured amounts. Even the most locked-in user eventually reaches a breaking point and walks away, or gets pushed. The villagers of Anatevka in Fiddler on the Roof tolerated the cossacks‘ violent raids and pogroms for years, until they were finally forced to flee to Krakow, New York, and Chicago.
For enshittification-addled companies, that balance is hard to strike. Individual product managers, executives, and activist shareholders all give preference to quick returns at the cost of sustainability, and are in a race to see who can eat their seed-corn first. Enshittification has only lasted for as long as it has because the internet has devolved into “five giant websites, each filled with screenshots of the other four.”
With the market sewn up by a group of cozy monopolists, better alternatives don’t pop up and lure us away, and if they do, the monopolists just buy them out and integrate them into your enshittification strategies, like when Mark Zuckerberg noticed a mass exodus of Facebook users who were switching to Instagram, and so he bought Instagram. As Zuck says, „It is better to buy than to compete.“
This is the hidden dynamic behind the rise and fall of Amazon Smile, the program whereby Amazon gave a small amount of money to charities of your choice when you shopped there, but only if you used Amazon’s own search tool to locate the products you purchased. This provided an incentive for Amazon customers to use its own increasingly enshittified search, which it could cram full of products from sellers who coughed up payola, as well as its own lookalike products. The alternative was to use Google, whose search tool would send you directly to the product you were looking for, and then charge Amazon a commission for sending you to it.
The demise of Amazon Smile coincides with the increasing enshittification of Google Search, the only successful product the company managed to build in-house. All its other successes were bought from other companies: video, docs, cloud, ads, mobile, while its own products are either flops like Google Video, clones (Gmail is a Hotmail clone), or adapted from other companies‘ products, like Chrome.
Google Search was based on principles set out in founder Larry Page and Sergey Brin’s landmark 1998 paper, „Anatomy of a Large-Scale Hypertextual Web Search Engine,“ in which they wrote, “Advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers.”
Even with that foundational understanding of enshittification, Google has been unable to resist its siren song. Today’s Google results are an increasingly useless morass of self-preferencing links to its own products, ads for products that aren’t good enough to float to the top of the list on its own, and parasitic SEO junk piggybacking on the former.
Enshittification kills. Google just laid off 12,000 employees, and the company is in a full-blown „panic“ over the rise of „AI“ chatbots, and is making a full-court press for an AI-driven search tool—that is, a tool that won’t show you what you ask for, but rather, what it thinks you should see.
Now, it’s possible to imagine that such a tool will produce good recommendations, like TikTok’s pre-enshittified algorithm did. But it’s hard to see how Google will be able to design a non-enshittified chatbot front-end to search, given the strong incentives for product managers, executives, and shareholders to enshittify results to the precise threshold at which users are nearly pissed off enough to leave, but not quite.
Even if it manages the trick, this-almost-but-not-quite-unusuable equilibrium is fragile. Any exogenous shock—a new competitor like TikTok that penetrates the anticompetitive „moats and walls“ of Big Tech, a privacy scandal, a worker uprising—can send it into wild oscillations.
Enshittification truly is how platforms die. That’s fine, actually. We don’t need eternal rulers of the internet. It’s okay for new ideas and new ways of working to emerge. The emphasis of lawmakers and policymakers shouldn’t be preserving the crepuscular senescence of dying platforms. Rather, our policy focus should be on minimizing the cost to users when these firms reach their expiry date: Enshrining rights like end-to-end would mean that no matter how autocannibalistic a zombie platform became, willing speakers and willing listeners would still connect with each other.
And policymakers should focus on freedom of exit—the right to leave a sinking platform while continuing to stay connected to the communities that you left behind, enjoying the media and apps you bought and preserving the data you created.
The Netheads were right: Technological self-determination is at odds with the natural imperatives of tech businesses. They make more money when they take away our freedom—our freedom to speak, to leave, to connect.
For many years, even TikTok’s critics grudgingly admitted that no matter how surveillant and creepy it was, it was really good at guessing what you wanted to see. But TikTok couldn’t resist the temptation to show you the things it wants you to see rather than what you want to see. The enshittification has begun, and now it is unlikely to stop.
It’s too late to save TikTok. Now that it has been infected by enshittifcation, the only thing left is to kill it with fire.
The EV giant is alienating its customers, bringing in less revenue, and falling behind legacy carmakers.
Photograph: David Gannon/Getty Images
For now, Alex Lagetko is holding on to his Tesla stocks. The founder of hedge fund VSO Capital Management in New York, Lagetko says his stake in the company was worth $46 million in November 2021, when shares in the electric carmaker peaked at $415.
“Many investors, particularly retail, who invested disproportionately large sums of their wealth largely on the basis of trust in Musk over many years were very quickly burned in the months following the acquisition,” Lagetko says, “particularly in December as he sold more stock, presumably to fund losses at Twitter.”
Lagetko trimmed his exposure in early 2022 due to concerns over Tesla’s governance, but he is worried that the leveraged buyout of Twitter has left Tesla vulnerable, as interest payments on the debt Musk took on to fund the takeover come due at the same time as the social media company’s revenues have slumped.
But Tesla stock was already falling in April 2022, when Musk launched his bid for Twitter, and analysts say that the carmaker’s challenges run deeper than its exposure to the struggling social media platform. Tesla and its CEO have alienated its core customers while its limited designs and high prices make it vulnerable to competition from legacy automakers, who have rushed into the EV market with options that Musk’s company will struggle to match.
Prior to 2020, Tesla was essentially “playing against a B team in a soccer match,” says Matthias Schmidt, an independent analyst in Berlin who tracks electric car sales in Europe. But that changed in 2020, as “the opposition started rolling out some of their A squad players.”
In 2023, Tesla is due to release its long-awaited Cybertruck, a blocky, angular SUV first announced in 2019. It is the first new launch of a consumer vehicle by the company since 2020. A promised two-seater sports car is still years away, and the Models S, X, Y, and 3, once seen as space-age dynamos, are now “long in the tooth,” says Mark Barrott, an automotive analyst at consultancy Plante Moran. Most auto companies refresh their looks every three to five years—Tesla’s Model S is now more than 10 years old.
By contrast, this year Ford plans to boost production of both its F-150 Lighting EV pick-up, already sold out for 2023, and its Mustang Mach-E SUV. Offerings from Hyundai IONIQ 5 and Kia EV6 could threaten Tesla’s Model Y and Model 3 in the $45,000 to $65,000 range. General Motors plans to speed up production and cut costs for a range of EV models, including the Chevy Blazer EV, the Chevy Equinox, the Cadillac Lyric, and the GMC Sierra EV.
While Tesla’s designs may be eye-catching, their high prices mean that they’re now often competing with luxury brands.
“There is this kind of nice Bauhaus simplicity to Tesla’s design, but it’s not luxurious,” says David Welch, author of Charging Ahead: GM, Mary Barra, and the Reinvention of an American Icon. “And for people to pay $70,000 to $100,000 for a car, if you’re competing suddenly with an electric Mercedes or BMW, or a Cadillac that finally actually feels like something that should bear the Cadillac name, you’re going to give people something to think about.”
While few manufacturers can compete with Tesla on performance and software (the Tesla Model S goes to 60 mph in 1.99 seconds, reaches a 200-mph top speed, and boasts automatic lane changing and a 17-inch touchscreen for console-grade gaming), many have reached or are approaching a range of 300 miles (480 km), which is the most important consideration for many EV buyers, says Craig Lawrence, a partner and cofounder at the investment group Energy Transition Ventures.
One of Tesla’s main competitive advantages has been its supercharging network. With more than 40,000 proprietary DC fast chargers located on major thoroughfares near shopping centers, coffee shops, and gas stations, their global infrastructure is the largest in the world. Chargers are integrated with the cars’ Autobidder optimization & dispatch software, and, most importantly, they work quickly and reliably, giving a car up to 322 miles of range in 15 minutes. The network contributes to about 12 percent of Tesla sales globally.
“The single biggest hurdle for most people asking ‘Do I go EV or not,’ is how do I refuel it and where,” says Loren McDonald, CEO and lead analyst for the consultancy EVAdoption. “Tesla figured that out early on and made it half of the value proposition.”
But new requirements for funding under public charging infrastructure programs in the US may erode Tesla’s proprietary charging advantage. The US National Electric Vehicle Infrastructure Program will allocate $7.5 billion to fund the development of some 500,000 electric vehicle chargers, but to access funds to build new stations, Tesla will have to open up its network to competitors by including four CCC chargers.
“Unless Tesla opens up their network to different charging standards, they will not get any of that volume,” Barrott says. “And Tesla doesn’t like that.”
In a few years, the US public charging infrastructure may start to look more like Europe’s, where in many countries the Tesla Model 3 uses standard plugs, and Tesla has opened their Supercharging stations to non-Tesla vehicles.
Tesla does maintain a software edge over competitors, which have looked to third-party technology like Apple’s CarPlay to fill the gap, says Alex Pischalnikov, an auto analyst and principal at the consulting firm Arthur D. Little. With over-the-air updates, Tesla can send new lines of code over cellular networks to resolve mechanical problems and safety features, update console entertainment options, and surprise drivers with new features, such as heated rear seats and the recently released full self-driving beta, available for $15,000. These software updates are also a cash machine for Tesla. But full self-driving features aren’t quite as promised, since drivers still have to remain in effective control of the vehicle, limiting the value of the system.
A Plante Moran analysis shared with WIRED shows Tesla’s share of the North American EV market declining from 70 percent in 2022 to just 31 percent by 2025, as total EV production grows from 777,000 to 2.87 million units.
In Europe, Tesla’s decline is already underway. Schmidt says data from the first 11 months of 2022 shows sales by volume of Volkswagen’s modular electric drive matrix (MEB) vehicles outpaced Tesla’s Model Y and Model 3 by more than 20 percent. His projections show Tesla’s product lines finishing the year with 15 percent of the western European electric vehicle market, down from 33 percent in 2019.
The European Union has proposed legislation to reduce carbon emissions from new cars and vans by 100 percent by 2035, which is likely to bring more competition from European carmakers into the market.
There is also a growing sense that Musk’s behavior since taking over Twitter has made a challenging situation for Tesla even worse.
Over the past year, Musk has used Twitter to call for the prosecution of former director of the US National Institute of Allergy and Infectious Diseases Anthony Fauci (“My pronouns are Prosecute/Fauci”), take swings at US senator from Vermont Bernie Sanders over government spending and inflation, and placed himself at the center of the free speech debate. He’s lashed out at critics, challenging, among other things, the size of their testicles.
A November analysis of the top 100 global brands by the New York–based consultancy Interbrand estimated Tesla’s brand value in 2022 at $48 billion, up 32 percent from 2021 but well short of its 183 percent growth between 2020 and 2021. The report, based on qualitative data from 1,000 industry consultants and sentiment analysis of published sources, showed brand strength declining, particularly in “trust, distinctiveness and an understanding of the needs of their customers.”
“I think [Musk’s] core is rapidly moving away from him, and people are just starting to say, ‘I don’t like the smell of Tesla; I don’t want to be associated with that,’” says Daniel Binns, global chief growth officer at Interbrand.
Among them are once-loyal customers. Alan Saldich, a semi-retired tech CMO who lives in Idaho, put a deposit down on a Model S in 2011, before the cars were even on the road, after seeing a bodiless chassis in a Menlo Park showroom. His car, delivered in 2012, was number 2799, one of the first 3,000 made.
He benefited from the company’s good, if idiosyncratic, customer service. When, on Christmas morning 2012, the car wouldn’t start, he emailed Musk directly seeking a remedy. Musk responded just 24 minutes later: “…Will see if we can diagnose and fix remotely. Sorry about this. Hope you otherwise have a good Christmas.”
On New Year’s Day, Joost de Vries, then vice president of worldwide service at Tesla, and an assistant showed up at Saldich’s house with a trailer, loaded the car onto a flatbed, and hauled it to Tesla’s plant in Fremont, California, to be repaired. Saldich and his family later even got a tour of the factory. But since then, he’s cooled on the company. In 2019, he sold his Model S, and now drives a Mini Electric. He’s irritated in particular, he says, by Musk’s verbal attacks on government programs and regulation, particularly as Tesla has benefited from states and federal EV tax credits.
“Personally, I probably wouldn’t buy another Tesla,” he says. “A, because there’s so many alternatives and B, I just don’t like [Musk] anymore.”
CORRECTION 1/24/23 11:15AM ET: This story has been updated to reflect that Alex Lagetko reduced his stake in Tesla in early 2022.