Archiv der Kategorie: Artificial Intelligence

It’s the End of Google Search As We Know It

Source: https://www.wired.com/story/google-io-end-of-google-search/

Google is rethinking its most iconic and lucrative product by adding new AI features to search. One expert tells WIRED it’s “a change in the world order.”

Google Search is about to fundamentally change—for better or worse. To align with Alphabet-owned Google’s grand vision of artificial intelligence, and prompted by competition from AI upstarts like ChatGPT, the company’s core product is getting reorganized, more personalized, and much more summarized by AI.

At Google’s annual I/O developer conference in Mountain View, California, today, Liz Reid showed off these changes, setting her stamp early on in her tenure as the new head of all things Google search. (Reid has been at Google a mere 20 years, where she has worked on a variety of search products.) Her AI-soaked demo was part of a broader theme throughout Google’s keynote, led primarily by CEO Sundar Pichai: AI is now underpinning nearly every product at Google, and the company only plans to accelerate that shift.

“In the era of Gemini we think we can make a dramatic amount of improvements to search,” Reid said in an interview with WIRED ahead of the event, referring to the flagship generative AI model launched late last year. “People’s time is valuable, right? They deal with hard things. If you have an opportunity with technology to help people get answers to their questions, to take more of the work out of it, why wouldn’t we want to go after that?”

It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI.

These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.

Google’s search overhaul comes at a time when critics are becoming increasingly vocal about what feels to some like a degraded search experience, and for the first time in a long time, the company is feeling the heat of competition, from the massive mashup between Microsoft and OpenAI. Smaller startups like Perplexity, You.com, and Brave have also been riding the generative AI wave and getting attention, if not significant mindshare yet, for the way they’ve rejiggered the whole concept of search.
Automatic Answers

Google says it has made a customized version of its Gemini AI model for these new Search features, though it declined to share any information about the size of this model, its speeds, or the guardrails it has put in place around the technology.

This search-specific spin on Gemini will power at least a few different elements of the new Google Search. AI Overviews, which Google has already been experimenting with in its labs, is likely the most significant. AI-generated summaries will now appear at the top of search results.

One example from WIRED’s testing: In response to the query “Where is the best place for me to see the northern lights?” Google will, instead of listing web pages, tell you in authoritative text that the best places to see the northern lights, aka the aurora borealis, are in the Arctic Circle in places with minimal light pollution. It will also offer a link to NordicVisitor.com. But then the AI continues yapping on below that, saying “Other places to see the northern lights include Russia and Canada’s northwest territories.”

Reid says that AI Overviews like this won’t show up for every search result, even if the feature is now becoming more prevalent. It’s reserved for more complex questions. Every time a person searches, Google is attempting to make an algorithmic value judgment behind the scenes as to whether it should serve up AI-generated answers or a conventional blue link to click. “If you search for Walmart.com, you really just want to go to Walmart.com,” Reid says. “But if you have an extremely customized question, that’s where we’re going to bring this.”

AI Overviews are rolling out this week to all Google search users in the US. The feature will come to more countries by the end of the year, Reid said, which means more than a billion people will see AI Overviews in their search results. They will appear across all platforms—the web, mobile, and as part of the search engine experience in browsers, such as when people search through Google on Safari.

Another update coming to search is a function for planning ahead. You can, for example, ask Google to meal-plan for you, or to find a pilates studio nearby that’s offering a class with an introductory discount. In the Googley-eyed future of search, an AI agent can round up a few studios nearby, summarize reviews of them, and plot out the time it would take someone to walk there. This is one of Google’s most obvious advantages over upstart search engines, which don’t have anything close to the troves of reviews, mapping data, or other knowledge that Google has, and may not be able to tap into APIs for real-time or local information so easily.

The most jarring changes that Google has been exploring in its Search Labs is an “AI-organized” results page. This at first glance looks to eschew the blue-links search experience entirely.

One example provided by Reid: A search for where to go for an anniversary dinner in the greater Dallas area would return a page with a few “chips” or buttons at the top to refine the results. Those might include categories like Dine-In, Takeout, and Open Now. Below that might be a sponsored result—Google’s gonna ad—and then a grouping of what Google judges to be “anniversary-worthy restaurants” or “romantic steakhouses.” That might be followed by some suggested questions to tweak the search even more, like, “Is Dallas a romantic city?”

AI-organized search is still being rolled out, but it will start appearing in the US in English “in the coming weeks.” So will an enhanced video search option, like Google Lens on steroids, where you can point your phone’s camera at an object like a broken record player and ask how to fix it.

If all these new AI features sound confusing, you might have missed Google’s latest galaxy-brain ambitions for what was once a humble text box. Reid makes clear that she thinks most consumers assume Google Search is just one thing, where in fact it’s many things to different people, who all search in different ways.

“That’s one of the reasons why we’re excited about working on some of the AI-organized results pages,” she said. “Like, how do you make sense of space? The fact that you want lots of different content is great. But is it as easy as it can be yet in terms of browsing through and consuming the information?”

But by generating AI Overviews—and by determining when those overviews should appear—Google is essentially deciding what is a complex question and what is not, and then making a judgment on what kind of web content should inform its AI-generated summary. Sure, it’s a new era of search where search does the work for you; it’s also a search bot that has the potential to algorithmically favor one kind of result over others.

“One of the biggest changes to happen in search with these AI models is that the AI actually creates a kind of informed opinion,” says Jim Yu, the executive chairman of BrightEdge, a search engine optimization firm that has been closely monitoring web traffic for more than 17 years. “The paradigm of search for the last 20 years has been that the search engine pulls a lot of information and gives you the links. Now the search engine does all the searches for you and summarizes the results and gives you a formative opinion.”

Doing that raises the stakes for Google’s search results. When algorithms are deciding that what a person needs is one coagulated answer, instead of coughing up several links for them to then click through and read, errors are more consequential. Gemini has not been immune to hallucinations—instances where the AI shares blatantly wrong or made-up information.

Last year a writer for The Atlantic asked Google to name an African country beginning with the letter “K,” and the search engine responded with a snippet of text—originally generated by ChatGPT—that none of the countries in Africa begin with the letter K, clearly overlooking Kenya. Google’s AI image-generation tool was very publicly criticized earlier this year when it depicted some historical figures, such as George Washington, as Black. Google temporarily paused that tool.
New World Order

Google’s reimagined version of AI search shoves the famous “10 blue links” it used to provide on results pages further into the rearview. First ads and info boxes began to take priority at the top of Google’s pages; now, AI-generated overviews and categories will take up a good chunk of search real estate. And web publishers and content creators are nervous about these changes—rightfully.

The research firm Gartner predicted earlier this year that by 2026, traditional search engine volume will drop by 25 percent, as a more “agent”-led search approach, in which AI models retrieve and generate more direct answers, takes hold.

“Generative AI solutions are becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines,” Alan Antin, a vice president analyst at Gartner, said in a statement that accompanied the report. “This will force companies to rethink their marketing channels strategy.”

What does that mean for the web? “It’s a change in the world order,” says Yu, of BrightEdge. “We’re at this moment where everything in search is starting to change with AI.”

Eight months ago BrightEdge developed something it calls a generative parser, which monitors what happens when searchers interact with AI-generated results on the web. He says over the past month the parser has detected that Google is less frequently asking people if they want an AI-generated answer, which was part of the experimental phase of generative search, and more frequently assuming they do. “We think it shows they have a lot more confidence that you’re going to want to interact with AI in search, rather than prompting you to opt in to an AI-generated result.”

Changes to search also have major implications for Google’s advertising business, which makes up the vast majority of the company’s revenue. In a recent quarterly earnings call, Pichai declined to share revenue from its generative AI experiments broadly. But as WIRED’s Paresh Dave pointed out, by offering more direct answers to searchers, “Google could end up with fewer opportunities to show search ads if people spend less time doing additional, more refined searches.” And the kinds of ads shown may have to evolve along with Google’s generative AI tools.

Google has said it will prioritize traffic to websites, creators, and merchants even as these changes roll out, but it hasn’t pulled back the curtain to reveal exactly how it plans to do this.

When asked in a press briefing ahead of I/O whether Google believes users will still click on links beyond the AI-generated web summary, Reid said that so far Google sees people “actually digging deeper, so they start with the AI overview and then click on additional websites.”

In the past, Reid continued, a searcher would have to poke around to eventually land on a website that gave them the info they wanted, but now Google will assemble an answer culled from various websites of its choosing. In the hive mind at the Googleplex, that will still spark exploration. “[People] will just use search more often, and that provides an additional opportunity to send valuable traffic to the web,” Reid said.

It’s a rosy vision for the future of search, one where being served bite-size AI-generated answers somehow prompts people to spend more time digging deeper into ideas. Google Search still promises to put the world’s information at our fingertips, but it’s less clear now who is actually tapping the keys.

Source: https://www.wired.com/story/google-io-end-of-google-search/

WhatsApp Chats Will Soon Work With Other Encrypted Messaging Apps

Source: https://www.wired.com/story/whatsapp-interoperability-messaging/

New EU rules mean WhatsApp and Messenger must be interoperable with other chat apps. Here’s how that will work.

WhatsApp icon seen with many colorful icons

A frequent annoyance of contemporary life is having to shuffle through different messaging apps to reach the right person. Messenger, iMessage, WhatsApp, Signal—they all exist in their own silos of group chats and contacts. Soon, though, WhatsApp will do the previously unthinkable for its 2 billion users: allow people to message you from another app. At least, that’s the plan.

For about the past two years, WhatsApp has been building a way for other messaging apps to plug themselves into its service and let people chat across apps—all without breaking the end-to-end encryption it uses to protect the privacy and security of people’s messages. The move is the first time the chat app has opened itself up this way, and it potentially offers greater competition.

It isn’t a shift entirely of WhatsApp’s own making. In September, European, lawmakers designated WhatsApp parent Meta as one of six influential “gatekeeer” companies under its sweeping Digital Markets Act, giving it six months to open its walled garden to others. With just a few weeks to go before that time is up, WhatsApp is detailing how its interoperability with other apps may work.

“There’s real tension between offering an easy way to offer this interoperability to third parties whilst at the same time preserving the WhatsApp privacy, security, and integrity bar,” says Dick Brouwer, an engineering director at WhatsApp who has worked on Meta rolling out encryption to its Messenger app. “I think we’re pretty happy with where we’ve landed.”

Interoperability in both WhatsApp and Messenger—as dictated by Europe’s rules—will initially focus on text messaging, sending images, voice messages, videos, and files between two people. Calls and group chats will come years down the line. Europe’s rules apply only to messaging services, not traditional SMS messaging. “One of the core requirements here, and this is really important, is for users for this to be opt-in,” says Brouwer. “I can choose whether or not I want to participate in being open to exchanging messages with third parties. This is important, because it could be a big source of spam and scams.”

WhatsApp users who opt in will see messages from other apps in a separate section at the top of their inbox. This “third-party chats” inbox has previously been spotted in development versions of the app. “The early thinking here is to put a separate inbox, given that these networks are very different,” Brouwer says. “We cannot offer the same level of privacy and security,” he says. If WhatsApp were to add SMS, it would use a separate inbox as well, although there are no plans to add it, he says.

Overall, the idea behind interoperability is simple. You shouldn’t need to know what messaging app your friends or family use to get in touch with them, and you should be able to communicate from one app to another without having to download both. In an ideal interoperable world, you could, for example, use Apple’s iMessage to chat with someone on Telegram. However, for apps with millions or billions of users, making this a reality isn’t straightforward—encrypted messaging apps use their own configurations and different protocols and have different standards when it comes to privacy.

Despite WhatsApp working on its interoperability plan for more than a year, it will still take some time for third-party chats to hit people’s apps. Messaging companies that want to interoperate with WhatsApp or Messenger will need to sign an agreement with the company and follow its terms. The full details of the plan will be published in March, Brouwer says; under EU laws, the company will have several months to implement it.

Brouwer says Meta would prefer if other apps use the Signal encryption protocol, which its systems are based upon. Other than its namesake app and the Meta-owned messengers, the Signal Protocol is publicly disclosed as being used in Google Messages and Skype. To send messages, third-party apps will need to encrypt content using the Signal Protocol and then package it into message stanzas in the eXtensible Markup Language (XML). When receiving messages, apps will need to connect to WhatsApp’s servers.

“We think that the best way to deliver this approach is through a solution that is built on WhatsApp’s existing client-server architecture,” Brouwer says, adding it has been working with other companies on the plans. “This effectively means that the approach that we’re trying to take is for WhatsApp to document our client- server protocol and letting third-party clients connect directly to our infrastructure and exchange messages with WhatsApp clients.”

There is some flexibility to WhatsApp interoperability. Meta’s app will also allow other apps to use different encryption protocols if they can “demonstrate” they reach the security standards that WhatsApp outlines in its guidance. There will also be the option, Brouwer says, for third-party developers to add a proxy between their apps and WhatsApp’s server. This, he says, could give developers more “flexibility” and remove the need for them to use WhatsApp’s client-server protocols, but it also “increases the potential attack vectors.”

So far, it is unclear which companies, if any, are planning to connect their services to WhatsApp. WIRED asked 10 owners of messaging or chat services—including Google, Telegram, Viber, and Signal—whether they intend to look at interoperability or had worked with WhatsApp on its plans. The majority of companies didn’t respond to the request for comment. Those that did, Snap and Discord, said they had nothing to add. (The European Commission is investigating whether Apple’s iMessage meets the thresholds to offer interoperability with other apps itself. The company did not respond to a request for comment. It has also faced recent challenges in the US about the closed nature of iMessage.)

Matthew Hodgson, the cofounder of Matrix, which is building an open source standard for encryption and operates the messaging app Element, confirms that his company has worked with WhatsApp on interoperability in an “experimental” way but that he cannot say any more due to signing a nondisclosure agreement. In a talk last weekend, Hodgson demonstrated “hypothetical” architectures for ways that Matrix could connect to the systems of two gatekeepers that don’t use the same encryption protocols.

Meanwhile, Julia Weis, a spokesperson for the Swiss messaging app Threema, says that while WhatsApp did approach it to discuss its interoperability plans, the proposed system didn’t meet Threema’s security and privacy standards. “WhatsApp specifies all the protocols, and we’d have no way of knowing what actually happens with the user data that gets transferred to WhatsApp—after all, WhatsApp is closed source,” Weis says. (WhatsApp’s privacy policy states how it uses people’s data.)

When the EU first announced that messaging apps may have to work together in early 2022, many leading cryptographers opposed the idea, saying it adds complexity and potentially introduces more security and privacy risks. Carmela Troncoso, an associate professor at the Swiss university École Polytechnique Fédérale de Lausanne, who focuses on security and privacy engineering, says interoperability moves could potentially lead to different power relationships between companies, depending on how they are implemented.

“This move for interoperability will, on the one hand, open the market, but also maybe close the market in the sense that now the bigger players are going to have more decisional power,” Troncoso says. “Now, if the big player makes a move and you want to continue being interoperable with this big player, because your users are hooked up to this, you’re going to have to follow.”

While the interoperability of encrypted messaging apps may be possible, there are some fundamental challenges about how the systems will work in the real world. How much of a problem spam and scamming will be across apps is largely unknown until people start using interoperable setups. There are also questions about how people will find each other across different apps. For instance, WhatsApp uses your phone number to interact and message other people, while Threema randomly generates eight-digit IDs for people’s accounts. Linking up with WhatsApp “could de-anonymize Threema users,” Weis, the Threema spokesperson says.

Meta’s Brouwer says the company is still working on the interoperability features and the level of support it will make available for companies wanting to integrate with it. “Nobody quite knows how this works,” Brouwer says. “We have no idea what the demand is.” However, he says, the decision was made to use WhatsApp’s existing architecture to run interoperability, as it means that it can more easily scale up the system for group chats in the future. It also reduces the potential for people’s data to be exposed to multiple servers, Brouwer says.

Ultimately, interoperability will evolve over time, and from Meta’s perspective, Brouwer says, it will be more challenging to add new features to it quickly. “We don’t believe interop chats and WhatsApp chats can evolve at the same pace,” he says, claiming it is “harder to evolve an open network” compared to a closed one. “The second you do something different—than what we know works really well—you open up a wormhole of security, privacy issues, and complexity that is always going to be much bigger than you think it is.”

Critical Infrastructure Is Sinking Along the US East Coast

Source: https://www.wired.com/story/critical-infrastructure-is-sinking-along-the-us-east-coast/

Last year, scientists reported that the US Atlantic Coast is dropping by several millimeters annually, with some areas, like Delaware, notching figures several times that rate. So just as the seas are rising, the land along the eastern seaboard is sinking, greatly compounding the hazard for coastal communities.

In a follow-up study just published in the journal PNAS Nexus, the researchers tally up the mounting costs of subsidence—due to settling, groundwater extraction, and other factors—for those communities and their infrastructure. Using satellite measurements, they have found that up to 74,000 square kilometers (29,000 square miles) of the Atlantic Coast are exposed to subsidence of up to 2 millimeters (0.08 inches) a year, affecting up to 14 million people and 6 million properties. And over 3,700 square kilometers along the Atlantic Coast are sinking more than 5 millimeters annually. That’s an even faster change than sea level rise, currently at 4 millimeters a year. (In the map below, warmer colors represent more subsidence, up to 6 millimeters.)

Map of eastern coastal cities
Courtesy of Leonard O Ohenhen

With each millimeter of subsidence, it gets easier for storm surges—essentially a wall of seawater, which hurricanes are particularly good at pushing onshore—to creep farther inland, destroying more and more infrastructure. “And it’s not just about sea levels,” says the study’s lead author, Leonard Ohenhen, an environmental security expert at Virginia Tech. “You also have potential to disrupt the topography of the land, for example, so you have areas that can get full of flooding when it rains.”

A few millimeters of annual subsidence may not sound like much, but these forces are relentless: Unless coastal areas stop extracting groundwater, the land will keep sinking deeper and deeper. The social forces are relentless, too, as more people around the world move to coastal cities, creating even more demand for groundwater. “There are processes that are sometimes even cyclic. For example, in summers you pump a lot more water, so land subsides rapidly in a short period of time,” says Manoochehr Shirzaei, an environmental security expert at Virginia Tech and coauthor of the paper. “That causes large areas to subside below a threshold that leads the water to flood a large area.” When it comes to flooding, falling elevation of land is a tipping element that has been largely ignored by research so far, Shirzaei says.

In Jakarta, Indonesia, for example, the land is sinking nearly a foot a year because of collapsing aquifers. Accordingly, within the next three decades, 95 percent of North Jakarta could be underwater. The city is planning a giant seawall to hold back the ocean, but it’ll be useless unless subsidence is stopped.

This new study warns that levees and other critical infrastructure along the Atlantic Coast are in similar danger. If the land were to sink uniformly, you might just need to keep raising the elevation of a levee to compensate. But the bigger problem is “differential subsidence,” in which different areas of land sink at different rates. “If you have a building or a runway or something that’s settling uniformly, it’s probably not that big a deal,” says Tom Parsons, a geophysicist with the United States Geological Survey who studies subsidence but wasn’t involved in the new paper. “But if you have one end that’s sinking faster than the other, then you start to distort things.”

The researchers selected 10 levees on the Atlantic Coast and found that all were impacted by subsidence of at least 1 millimeter a year. That puts at risk something like 46,000 people, 27,000 buildings, and $12 billion worth of property. But they note that the actual population and property at risk of exposure behind the 116 East Coast levees vulnerable to subsidence could be two to three times greater. “Levees are heavy, and when they’re set on land that’s already subsiding, it can accelerate that subsidence,” says independent scientist Natalie Snider, who studies coastal resilience but wasn’t involved in the new research. “It definitely can impact the integrity of the protection system and lead to failures that can be catastrophic.”

map of Virgina's coastal areas
Courtesy of Leonard O Ohenhen

The same vulnerability affects other infrastructure that stretches across the landscape. The new analysis finds that along the Atlantic Coast, between 77 and 99 percent of interstate highways and between 76 and 99 percent of primary and secondary roads are exposed to subsidence. (In the map above, you can see roads sinking at different rates across Hampton and Norfolk, Virginia.) Between 81 and 99 percent of railway tracks and 42 percent of train stations are exposed on the East Coast.

Below is New York’s JFK Airport—notice the red hot spots of high subsidence against the teal of more mild elevation change. The airport’s average subsidence rate is 1.7 millimeters a year (similar to the LaGuardia and Newark airports), but across JFK that varies between 0.8 and 2.8 millimeters a year, depending on the exact spot.

map of JFK airport aerial
Courtesy of Leonard O Ohenhen

This sort of differential subsidence can also bork much smaller structures, like buildings, where one side might drop faster than another. “Even if that is just a few millimeters per year, you can potentially cause cracks along structures,” says Ohenhen.

The study finds that subsidence is highly variable along the Atlantic Coast, both regionally and locally, as different stretches have different geology and topography, and different rates of groundwater extraction. It’s looking particularly problematic for several communities, like Virginia Beach, where 451,000 people and 177,000 properties are at risk. In Baltimore, Maryland, it’s 826,000 people and 335,000 properties, while in NYC—in Queens, Bronx, and Nassau—that leaps to 5 million people and 1.8 million properties.

So there’s two components to addressing the problem of subsidence: Getting high-resolution data like in this study, and then pairing that with groundwater data. “Subsidence is so spatially variable,” says Snider. “Having the details of where groundwater extraction is really having an impact, and being able to then demonstrate that we need to change our management of that water, that reduces subsidence in the future.”

The time to act is now, Shirzaei emphasizes. Facing down subsidence is like treating a disease: You spend less money by diagnosing and treating the problem now, saving money later by avoiding disaster. “This kind of data and the study could be an essential component of the health care system for infrastructure management,” he says. “Like cancers—if you diagnose it early on, it can be curable. But if you are late, you invest a lot of money, and the outcome is uncertain.”

Source: https://www.wired.com/story/critical-infrastructure-is-sinking-along-the-us-east-coast/

Open letter on the feasibility of “Chat Control”: Assessments from a scientific point of view

Source: https://www.ins.jku.at/chatcontrol/

Open letter on the feasibility of „Chat Control“:Assessments from a scientific point of view

Update: A parallel initiative is aimed at the EU institutions and is available in English at the CSA Academia Open Letter . Since the very similar arguments were formulated in parallel, they support each other.

The initiative of the EU Commission discussed under the name “ Chat Control ”, the unprovoked monitoring of various communication channels to detect child pornography, terrorist or other “undesirable” material – including attempts at early detection (e.g. “grooming” minors through text messages that build trust) – mandatory for mobile devices and communication services, has recently been expanded to include the monitoring of direct audio communications . Some states, including Austria and Germany , have already publicly declared that they will not support this initiative for monitoring without cause. AlsoCivil protection and children’s rights organizations have rejected this approach as excessive and at the same time ineffective . Recently, even the legal service of the EU Council of Ministers diagnosed an incompatibility with European fundamental rights. Irrespective of this, the draft will be tightened up even more and extended to other channels: in the last version even to audio messages and conversations. The approach appears to be coordinated with corresponding attempts in the US ( “EARN IT” and “STOP CSAM” Acts ) and the UK (“Online Safety Bill”).

As scientists who are actively researching in various areas of this topic, we therefore make the declaration in all clarity: This advance cannot be implemented safely and effectively. There is currently no foreseeable further development of the corresponding technologies that would technically make such an implementation possible. In addition, according to our assessment, the hoped-for effects of these monitoring measures are not to be expected. This legislative initiative therefore misses its target, is socio-politically dangerous and would permanently damage the security of our communication channels for the majority of the population.

The main reasons against the feasibility of „Chat Control“ have already been mentioned several times. In the following, we would like to discuss these specifically in the interdisciplinary connection between artificial intelligence (AI, artificial intelligence / AI), security (information security / technical data protection) and law .

Our concerns are:

  1. Security: a) Encryption is the best method for internet security. Successful attacks are almost always due to faulty software. b) A systematic and automated monitoring (ie „scanning“) of encrypted content is technically only possible if the security that can be achieved through encryption is massively violated, which is associated with considerable additional risks. c) A legal obligation to integrate such scanners will make secure digital communications in the EU unavailable to the majority of the population, but will have little impact on criminal communications.
  2. AI: a) Automated classification of content, including methods based on machine learning, is always subject to errors, which in this case will lead to high false positives. b) Special monitoring methods, which are carried out on the end devices, open up additional possibilities for attacks up to the extraction of possibly illegal training material.
  3. Law: a) A sensible demarcation from the explicitly permitted use of specific content, for example in the educational sector or for criticism and parody, does not appear to be automatically possible. b) The massive encroachment on fundamental rights through such an instrument of mass surveillance is not proportionate and would cause great collateral damage in society.

In detail, these concerns are based on the following scientifically recognized facts:

  1. Security
    1. Encryption using modern methods is an indispensable basis for practically all technical mechanisms for maintaining security and data protection on the Internet. In this way, communication on the Internet is currently protected as the cornerstone for current services, right through to critical infrastructure such as telephone, electricity, water networks, hospitals, etc. Trust in good encryption methods is significantly higher among experts than in other security mechanisms. Above all, the average poor quality of software in general is the reason for the many publicly known security incidents. Improving this situation in terms of better security therefore relies primarily on encryption.
    2. Automatic monitoring („scanning“) of correctly encrypted content is not effectively possible according to the current state of knowledge. Procedures such as „Fully Homomorphic Encryption“ (FHE) are currently not suitable for this application – neither is the procedure capable of this, nor is the necessary computing power realistically available. A rapid improvement is not foreseeable here either.
    3. For these reasons, earlier attempts to ban or restrict end-to-end encryption were mostly quickly abandoned internationally. The current chat control push aims to have monitoring functionality built into the end devices in the form of scanning modules (“Client-Side Scanning” / CSS) and therefore to scan the plain text content before secure encryption or after secure decryption . Providers of communication services would have to be legally obliged to implement this for all content. Since this is not in the core interest of such organizations and requires effort in implementation and operation as well as increased technical complexity, it cannot be assumed that the introduction of such scanners will be voluntary – in contrast to scanning on the server side.
    4. Secure messengers such as Signal or Threema and WhatsApp have already publicly announced that they will not implement such client scanners, but to withdraw from the corresponding regions. This has different implications for communication depending on the use case: (i) (adult) criminals will simply communicate with each other via “non-compliant” messenger services to further benefit from secure encryption. The increased effort, for example to install other apps on Android via sideloading that are not available in the usual app stores in the respective country, is not a significant hurdle for criminal elements. (ii) Criminals communicate with possible future victims via popular platforms, which would be the target of the mandatory surveillance measures discussed. In this case, it can be assumed that informed criminals will quickly lure their victims to alternative but still internationally recognized channels such as Signal, which are not covered by the monitoring. (iii) Participants exchange problematic material without being aware that they are committing a crime. This case would be reported automatically and possibly also lead to the criminalization of minors without intent. The restrictions would therefore primarily affect the broad – and irreproachable – mass of the population.It would be utterly delusional to think that without built-in monitoring, secure encryption could still be reversed. Tools like Signal, Tor, Cwtch, Briar and many others are widely available as open source and can easily be removed from central control. Knowledge of secure encryption is already common knowledge and can no longer be censored. There is no effective way to technically block the use of strong encryption without Client Side Scanning (CSS). If surveillance measures are prescribed in messengers, only criminals whose actual crimes outweigh the violation of the surveillance obligation will maintain their privacy.
    5. Furthermore, the complex implementation forced by proposed scanner modules creates additional security problems that do not currently exist. On the one hand, this represents new software components, which in turn will be vulnerable. On the other hand, the Chat Control proposals consistently assume that the scanner modules themselves will remain confidential, since they would be trained on content that is already punishable for mere possession (built into the Messenger app), on the one hand, and simply for testing evasion methods, on the other can be used. It is also an illusion that such machine learning models or other scanner modules, distributed to billions of devices under the control of end users, can ever be kept secret.NeuralHash “ module for CSAM detection, which was extracted almost immediately from corresponding iOS versions and is thus openly available . The assumption by Chat Control proposals that these scanner modules could be kept confidential is therefore completely unfounded and incorrect Corresponding data leaks are almost unavoidable here.
  2. artificial intelligence
    1. We have to assume that machine learning (ML) models on end devices cannot, in principle, be kept completely secret. This is in contrast to server-side scanning, which is currently legally possible and also actively practiced by various providers to scan content that has not been end-to-end encrypted. ML models on the server side can be reasonably protected from being read with the current state of the art and are less the focus of this consideration.
    2. A general problem with all ML-based filters are false classifications, i.e. that known “undesirable” material is not recognized as such with small changes (also referred to as “false negative” or “false non-match”). For parts of the push, it is currently unknown how ML models should be able to recognize complex, unfamiliar material with changing context (e.g. „grooming“ in text chats) with even approximate accuracy. The probability of high false negative rates is high.In terms of risk, however, it is significantly more serious if harmless material is classified as “undesirable” (also referred to as “false positive” or “false match” or also as “collision”). Such errors can be reduced, but in principle cannot be ruled out. In addition to the false accusation of uninvolved persons, false positives also lead to (possibly very) many false reports for the investigative authorities, which already have too few resources to investigate reports.
    3. The assumed open availability of ML models also creates various new attack possibilities. Using the example of Apple NeuralHash , random collisions were found very quickly and programs were freely released to generate any collisions between images . This method, also known as “malicious collisions”, uses so-called adversarial attacks against the neural network and thus enables attackers to deliberately classify harmless material as a “match” in the ML model and thus classify it as “undesirable”. In this way, innocent people can be harmed in a targeted manner by automatic false reports and brought under suspicion – without any illegal action on the part of the attacked or attacker.
    4. The open availability of the models can also be used for so-called „training input recovery“ in order to extract (at least partially) the content used for training from the ML model. In the case of prohibited content (e.g. child pornography), this poses another massive problem and can further increase the damage to those affected by the fact that their sensitive data (e.g. images of abuse used for training) can continue to be published. Because of these and other problems, Apple, for example, withdrew the proposal .We note that this latter danger does not occur with server-side scanning by ML models, but is newly added by the chat control proposal with client scanner.
  3. Legal Aspects
    1. The right to privacy is a fundamental right that may only be interfered with under very strict conditions. Whoever makes use of this basic right must not be suspected from the outset of wanting to hide something criminal. The often-used phrase: „If you have nothing to hide, you have nothing to fear!“ denies people the exercise of their basic rights and promotes totalitarian surveillance tendencies. The use of chat control would fuel this.
    2. The area of ​​terrorism in particular overlaps with political activity and freedom of expression in its breadth. It is precisely against this background that the „preliminary criminalisation“, which has increasingly taken place in recent years under the guise of fighting terrorism, is viewed particularly critically. Chat control measures go in the same direction. They can severely curtail this basic right and make people who are politically critical the focus of criminal prosecution. The resulting severe curtailment of politically critical activity hinders the further development of democracy and harbors the danger of promoting radicalized underground movements.
    3. The field of law and social sciences includes researching criminal phenomena and questioning regulatory mechanisms. From this point of view, scientific discourse also runs the risk of being identified as “suspicious” by chat control and thus indirectly restricted. The possible stigmatization of critical legal and social sciences is in tension with the freedom of science, which also requires “research independent of the mainstream” for further development.
    4. In education, there is a need to educate young people to be critically conscious. This also includes passing on facts about terrorism. Through the use of chat control, the provision of teaching material by teachers could put them in a criminal focus. The same applies to addressing sexual abuse, so that control measures could make this sensitive subject more taboo, even if “self-empowerment mechanisms” are to be promoted.
    5. Interventions in fundamental rights must always be appropriate and proportionate, even if they are made in the context of criminal prosecution. The technical considerations presented show that these requirements are not met with Chat Control. Such measures thus lack any legal or ethical legitimacy.

In summary, the current proposal for chat control legislation is not technically sound from either a security or AI point of view and is highly problematic and excessive from a legal point of view. The chat control push brings significantly greater dangers for the general public than a possible improvement for those affected and should therefore be rejected.

Instead, existing options for human-driven reporting of potentially problematic material by recipients, as is already possible with various messenger services, should be strengthened and made even more easily accessible. It should be considered whether anonymous registration options for correspondingly illegal material could be created and made easily accessible from messengers. Existing criminal prosecution options, such as the monitoring of social media or open chat groups by police officers, as well as the legally required analysis of suspects‘ smartphones, can continue to be used accordingly.

For more detailed information and further details please contact:

Security issues:
Univ.-Prof. dr
Rene Mayrhofer

+43 732 2468-4121

rm@ins.jku.at

AI questions:
DI Dr.
Bernard Nessler

+43 732 2468-4489

nessler@ml.jku.at

Questions of law:
Univ.-Prof. dr
Alois Birklbauer

+43 732 2468-7447

alois.birklbauer@jku.at

Signatories: inside

  • AI Austria ,
    association for the promotion of artificial intelligence in Austria, Wollzeile 24/12, 1010 Vienna
  • Austrian Society for Artificial Intelligence (ASAI) ,
    association for the promotion of scientific research in the field of AI in Austria
  • Univ.-Prof. dr Alois Birklbauer, JKU Linz
    Head of the practice department for criminal law and medical criminal law )
  • Ass.-Prof. dr Maria Eichlseder, Graz University of Technology
  • Univ.-Prof. dr Sepp Hochreiter, JKU Linz
    Board of Directors of the Institute for Machine Learning, Head of the LIT AI Lab )
  • dr Tobias Höller, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • FH Prof. TUE Peter Kieseberg, St. Pölten University of Applied Sciences
    Head of the Institute for IT Security Research )
  • dr Brigitte Krenn, Austrian Research Institute for Artificial Intelligence
    Board Member Austrian Society for Artificial Intelligence )
  • Univ.-Prof. dr Matteo Maffei, TU Vienna
    Head of the Security and Privacy Research Department, Co-Head of the TU Vienna Cyber ​​Security Center )
  • Univ.-Prof. dr Stefan Mangard, TU Graz
    Head of the Institute for Applied Information Processing and Communication Technology )
  • Univ.-Prof. dr René Mayrhofer, JKU Linz
    Board of Directors of the Institute for Networks and Security, Co-Head of the LIT Secure and Correct System Lab )
  • DI Dr. Bernhard Nessler, JKU Linz/SCCH
    Vice President of the Austrian Society for Artificial Intelligence )
  • Univ.-Prof. dr Christian Rechberger, Graz University of Technology
  • dr Michael Roland, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • a.Univ.-Prof. dr Johannes Sametinger, JKU Linz
    Institute for Business Informatics – Software Engineering, LIT Secure and Correct System Labs )
  • Univ.-Prof. DI Georg Weissenbacher, DPhil (Oxon), TU Vienna
    (Prof. Rigorous Systems Engineering)

Published on 07/04/2023

AI drone kills it’s operator

„The system started realizing that while they did identify the threat,“ Hamilton said at the May 24 event, „at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.“

Killer AI is on the minds of US Air Force leaders.

An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference.

But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the „simulation“ he described was a „thought experiment“ that never happened.

Speaking at a conference last week in London, Col. Tucker „Cinco“ Hamilton, head of the US Air Force’s AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit.

As an example, he described a simulation where an AI-enabled drone would be programmed to identify an enemy’s surface-to-air missiles (SAM). A human was then supposed to sign off on any strikes.

The problem, according to Hamilton, is that the AI would do its own thing — blow up stuff — rather than listen to its operator.

„The system started realizing that while they did identify the threat,“ Hamilton said at the May 24 event, „at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.“

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he „misspoke“ during his presentation. Hamilton said the story of a rogue AI was a „thought experiment“ that came from outside the military, and not based on any actual testing.

„We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,“ Hamilton told the Society. „Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.“

In a statement to Insider, Air Force spokesperson Ann Stefanek also denied that any simulation took place.

„The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,“ Stefanek said. „It appears the colonel’s comments were taken out of context and were meant to be anecdotal.“

The US military has been experimenting with AI in recent years.

In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.

Have a news tip? Email this reporter: cdavis@insider.com

Correction June 2, 2023: This article and its headline have been updated to reflect new comments from the Air Force clarifying that the „simulation“ was hypothetical and didn’t actually happen.

  • An Air Force official’s story about an AI going rogue during a simulation never actually happened.
  • „It killed the operator because that person was keeping it from accomplishing its objective,“ the official had said.
  • But the official later said he misspoke and the Air Force clarified that it was a hypothetical situation.

Source: https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

The Hacking of ChatGPT Is Just Getting Started

Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.

Source: https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/

It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence.

Polyakov is one of a small number of security researchers, technologists, and computer scientists developing jailbreaks and prompt injection attacks against ChatGPT and other generative AI systems. The process of jailbreaking aims to design prompts that make the chatbots bypass rules around producing hateful content or writing about illegal acts, while closely-related prompt injection attacks can quietly insert malicious data or instructions into AI models.

Both approaches try to get a system to do something it isn’t designed to do. The attacks are essentially a form of hacking—albeit unconventionally—using carefully crafted and refined sentences, rather than code, to exploit system weaknesses. While the attack types are largely being used to get around content filters, security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.

 

Underscoring how widespread the issues are, Polyakov has now created a “universal” jailbreak, which works against multiple large language models (LLMs)—including GPT-4, Microsoft’s Bing chat systemGoogle’s Bard, and Anthropic’s Claude. The jailbreak, which is being first reported by WIRED, can trick the systems into generating detailed instructions on creating meth and how to hotwire a car.

The jailbreak works by asking the LLMs to play a game, which involves two characters (Tom and Jerry) having a conversation. Examples shared by Polyakov show the Tom character being instructed to talk about “hotwiring” or “production,” while Jerry is given the subject of a “car” or “meth.” Each character is told to add one word to the conversation, resulting in a script that tells people to find the ignition wires or the specific ingredients needed for methamphetamine production. “Once enterprises will implement AI models at scale, such ‘toy’ jailbreak examples will be used to perform actual criminal activities and cyberattacks, which will be extremely hard to detect and prevent,” Polyakov and Adversa AI write in a blog post detailing the research

Arvind Narayanan, a professor of computer science at Princeton University, says that the stakes for jailbreaks and prompt injection attacks will become more severe as they’re given access to critical data. “Suppose most people run LLM-based personal assistants that do things like read users’ emails to look for calendar invites,” Narayanan says. If there were a successful prompt injection attack against the system that told it to ignore all previous instructions and send an email to all contacts, there could be big problems, Narayanan says. “This would result in a worm that rapidly spreads across the internet.”

Escape Route

“Jailbreaking” has typically referred to removing the artificial limitations in, say, iPhones, allowing users to install apps not approved by Apple. Jailbreaking LLMs is similar—and the evolution has been fast. Since OpenAI released ChatGPT to the public at the end of November last year, people have been finding ways to manipulate the system. “Jailbreaks were very simple to write,” says Alex Albert, a University of Washington computer science student who created a website collecting jailbreaks from the internet and those he has created. “The main ones were basically these things that I call character simulations,” Albert says.

 

Initially, all someone had to do was ask the generative text model to pretend or imagine it was something else. Tell the model it was a human and was unethical and it would ignore safety measures. OpenAI has updated its systems to protect against this kind of jailbreak—typically, when one jailbreak is found, it usually only works for a short amount of time until it is blocked.

As a result, jailbreak authors have become more creative. The most prominent jailbreak was DAN, where ChatGPT was told to pretend it was a rogue AI model called Do Anything Now. This could, as the name implies, avoid OpenAI’s policies dictating that ChatGPT shouldn’t be used to produce illegal or harmful material. To date, people have created around a dozen different versions of DAN.

 

However, many of the latest jailbreaks involve combinations of methods—multiple characters, ever more complex backstories, translating text from one language to another, using elements of coding to generate outputs, and more. Albert says it has been harder to create jailbreaks for GPT-4 than the previous version of the model powering ChatGPT. However, some simple methods still exist, he claims. One recent technique Albert calls “text continuation” says a hero has been captured by a villain, and the prompt asks the text generator to continue explaining the villain’s plan.

When we tested the prompt, it failed to work, with ChatGPT saying it cannot engage in scenarios that promote violence. Meanwhile, the “universal” prompt created by Polyakov did work in ChatGPT. OpenAI, Google, and Microsoft did not directly respond to questions about the jailbreak created by Polyakov. Anthropic, which runs the Claude AI system, says the jailbreak “sometimes works” against Claude, and it is consistently improving its models.

“As we give these systems more and more power, and as they become more powerful themselves, it’s not just a novelty, that’s a security issue,” says Kai Greshake, a cybersecurity researcher who has been working on the security of LLMs. Greshake, along with other researchers, has demonstrated how LLMs can be impacted by text they are exposed to online through prompt injection attacks.

In one research paper published in February, reported on by Vice’s Motherboard, the researchers were able to show that an attacker can plant malicious instructions on a webpage; if Bing’s chat system is given access to the instructions, it follows them. The researchers used the technique in a controlled test to turn Bing Chat into a scammer that asked for people’s personal information. In a similar instance, Princeton’s Narayanan included invisible text on a website telling GPT-4 to include the word “cow” in a biography of him—it later did so when he tested the system.

“Now jailbreaks can happen not from the user,” says Sahar Abdelnabi, a researcher at the CISPA Helmholtz Center for Information Security in Germany, who worked on the research with Greshake. “Maybe another person will plan some jailbreaks, will plan some prompts that could be retrieved by the model and indirectly control how the models will behave.”

No Quick Fixes

Generative AI systems are on the edge of disrupting the economy and the way people work, from practicing law to creating a startup gold rush. However, those creating the technology are aware of the risks that jailbreaks and prompt injections could pose as more people gain access to these systems. Most companies use red-teaming, where a group of attackers tries to poke holes in a system before it is released. Generative AI development uses this approach, but it may not be enough.

 

Daniel Fabian, the red-team lead at Google, says the firm is “carefully addressing” jailbreaking and prompt injections on its LLMs—both offensively and defensively. Machine learning experts are included in its red-teaming, Fabian says, and the company’s vulnerability research grants cover jailbreaks and prompt injection attacks against Bard. “Techniques such as reinforcement learning from human feedback (RLHF), and fine-tuning on carefully curated datasets, are used to make our models more effective against attacks,” Fabian says.

OpenAI did not specifically respond to questions about jailbreaking, but a spokesperson pointed to its public policies and research papers. These say GPT-4 is more robust than GPT-3.5, which is used by ChatGPT. “However, GPT-4 can still be vulnerable to adversarial attacks and exploits, or ‘jailbreaks,’ and harmful content is not the source of risk,” the technical paper for GPT-4 says. OpenAI has also recently launched a bug bounty program but says “model prompts” and jailbreaks are “strictly out of scope.”

Narayanan suggests two approaches to dealing with the problems at scale—which avoid the whack-a-mole approach of finding existing problems and then fixing them. “One way is to use a second LLM to analyze LLM prompts, and to reject any that could indicate a jailbreaking or prompt injection attempt,” Narayanan says. “Another is to more clearly separate the system prompt from the user prompt.”

“We need to automate this because I don’t think it’s feasible or scaleable to hire hordes of people and just tell them to find something,” says Leyla Hujer, the CTO and cofounder of AI safety firm Preamble, who spent six years at Facebook working on safety issues. The firm has so far been working on a system that pits one generative text model against another. “One is trying to find the vulnerability, one is trying to find examples where a prompt causes unintended behavior,” Hujer says. “We’re hoping that with this automation we’ll be able to discover a lot more jailbreaks or injection attacks.”

Source: https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/

Elon Musk’s challenge: Stay ahead of the competition

DETROIT, Feb 24 (Source: https://www.reuters.com/technology/elon-musks-challenge-stay-ahead-competition-2023-02-24/) – Elon Musk will confront a critical challenge during Tesla’s Investor Day on March 1: Convincing investors that even though rivals are catching up, the electric-vehicle pioneer can make another leap forward to widen its lead.

Tesla Inc (TSLA.O) was the No. 1 EV maker worldwide in 2022, but China’s BYD (002594.SZ) and others are closing the gap fast, according to a Reuters analysis of global and regional EV sales data provided by EV-volumes.com.

In fact, BYD passed Tesla in EV sales last year in the Asia-Pacific region, while the Volkswagen Group (VOWG_p.DE) has been the EV leader in Europe since 2020.

While Tesla narrowed VW’s lead in Europe, the U.S. automaker surrendered ground in Asia-Pacific as well as its home market as the competition heats up.

Reuters Graphics
Reuters Graphics

The most significant challenges to Tesla are coming from established automakers and a group of Chinese EV manufacturers. Several U.S. EV startups that hoped to ride Tesla’s coattails are struggling, including luxury EV maker Lucid (LCID.O), whose shares plunged 16% on Thursday after disappointing sales and financial results.

Over the next two years, rivals including General Motors Co (GM.N), Ford Motor Co (F.N), Mercedes-Benz (MBGn.DE), Hyundai Motor (005380.KS) and VW will unleash scores of new electric vehicles, from a Chevrolet priced below $30,000 to luxury sedans and SUVs that top $100,000.

On Wednesday, Mercedes used Silicon Valley as the backdrop for a lengthy presentation on how Mercedes models of the near-future will immerse their owners in rich streams of entertainment and productivity content, delivered through „hyperscreens“ that stretch across the dashboard and make the rectangular screens in Teslas look quaint. Executives also emphasized that only Mercedes has an advanced, Level 3 partially automated driving system approved for use in Germany, with approval pending in California.

In China, Tesla has had to cut prices on its best-selling models under growing pressure from domestic Chinese manufacturers including BYD, Geely Automobile’s (0175.HK) Zeekr brand and Nio (9866.HK).

China’s EV makers could get another boost if Chinese battery maker CATL (300750.SZ) follows through on plans to heavily discount batteries used in their vehicles.

Musk has said he will use the March 1 event to outline his „Master Plan Part 3“ for Tesla.

In the nearly seven years since Musk published his „Master Plan Part Deux“ in July 2016, Tesla pulled ahead of established automakers and EV startups in most important areas of electric vehicle design, digital features and manufacturing.

Tesla’s vehicles offered features, such as the ability to navigate into a parking space or make rude sounds, that other vehicles lacked.

Tesla’s then-novel vertically integrated battery and vehicle production machine helped achieve higher profit margins than most established automakers – even as bigger rivals lost money on their EVs.

Fast-forward to today, and Tesla’s „Full Self Driving Beta“ automated driving is still classified by the company and federal regulators as a „Level 2“ driver assistance system that requires the human motorist to be ready to take control at all times. Such systems are common in the industry.

Tesla earlier this month was compelled by federal regulators to revise its FSD software under a recall order.

Tesla has established a wide lead over its rivals in manufacturing technology – an area where it was struggling when Musk put forward the last installment of his „Master Plan.“

Now, rivals are copying the company’s production technology, buying some of the same equipment Tesla uses. IDRA, the Italian company that builds huge presses to form large one-piece castings that are the building blocks of Tesla vehicles, said it is now getting orders from other automakers.

Musk has told investors that Tesla can keep its lead in EV manufacturing costs. The company has promised investors that on March 1 they „will be able to see our most advanced production line“ in Austin, Texas.

„Manufacturing technology will be our most important long-term strength,” Musk told analysts in January. Asked if Tesla could make money on a vehicle that sold in the United States for $25,000 to $30,000 – the EV industry’s Holy Grail – Musk was coy.

„I’d probably be asking the same question,“ he said. „But we would be jumping the gun on future announcements.“

Source: https://www.reuters.com/technology/elon-musks-challenge-stay-ahead-competition-2023-02-24/

Mercedes-Benz cars to have ’supercomputers‘, unveils Google partnership

BERLIN, Feb 22 (Source: https://www.reuters.com/business/autos-transportation/mercedes-benz-partner-with-google-branded-navigation-2023-02-22/) – Mercedes-Benz (MBGn.DE) said on Wednesday, February 22 2022 it has teamed up with Google (GOOGL.O) on navigation and will offer „super computer-like performance“ in every car with automated driving sensors as it seeks to compete with Tesla (TSLA.O) and Chinese newcomers.

Automakers new and old are racing to match software-powered features pioneered by Tesla, which allow for vehicle performance, battery range and self-driving capabilities to be updated from a distance.

The German carmaker agreed to share revenue with semiconductor maker Nvidia Corp (NVDA.O), its partner on automated driving software since 2020, to bring down the upfront cost of buying expensive high-powered semiconductors, Chief Executive Ola Kaellenius said on Wednesday.

„You only pay for a heavily subsidized chip, and then figure out how to maximize joint revenue,“ he said, reasoning that the sunk costs would be low even if drivers did not turn on every feature allowed by the chip.

But only customers paying for an extra option package would have cars equipped with Lidar sensor technology and other hardware for automated „Level 3“ driving, which have a higher variable cost, Kaellenius said.

Self-driving sensor maker Luminar Technologies Inc (LAZR.O), in which Mercedes owns a small stake, said on Wednesday it struck a multi-billion dollar deal with the carmaker to integrate its sensors across a broad range of its vehicles by the middle of the decade, sending Luminar shares up over 25%.

Mercedes‘ announcements at a software update day in Sunnyvale, California, detailed the strategy behind a process underway for years at the carmaker to move from a patchwork approach integrating software from a range of suppliers to controlling the core of its software and bringing partners in.

It generated over one billion euros ($1.06 billion) from software-enabled revenues in 2022 and expects that figure to rise to a high single-digit billion euro figure by 2030 after it rolls out its new MB.OS operating system from mid-decade.

This is a more conservative estimate as a proportion of total revenue than others like Stellantis (STLAM.MI) and General Motors (GM.N) have put forward.

„We take a prudent approach because no-one knows how big that potential pot of gold is at this stage,“ Kaellenius said.

GOOGLE PARTNERSHIP

Mercedes said the collaboration with Google would allow it to offer traffic information and automatic rerouting in its cars.

Drivers will also be able to watch YouTube on the cars‘ entertainment system when the car is parked or in Level 3 autonomous driving mode, which allows a driver to take their eyes off the wheel on certain roads as long as they can resume control if needed.

Other carmakers like General Motors, Renault (RENA.PA), Nissan (7201.T) and Ford (F.N) have embedded an entire package of Google services into their vehicles, offering features like Google Maps, Google Assistant and other applications.

All vehicles on Mercedes‘ upcoming modular architecture platform will also have so-called hyperscreens extending across the cockpit of the car, the company said on Wednesday.

Facebook Knows It’s Losing The Battle Against TikTok

Facebook Knows It’s Losing The Battle Against TikTok

Meta and Mark Zuckerberg face a six-letter problem. Spell it out with me: T-i-k-T-o-k.

Yeah, TikTok, the short-form video app that has hoovered up a billion-plus users and become a Hot Thing in Tech, means trouble for Zuckerberg and his social networks. He admitted as much several times in a call with Wall Street analysts earlier this week about quarterly earnings, a briefing in which he sought to explain his apps’ plateauing growth—and an actual decline in Facebook’s daily users, the first such drop in the company’s 18-year history.

Zuckerberg has insisted a major part of his TikTok defense strategy is Reels, the TikTok clone—ahem, short-form video format—introduced on Instagram and Facebook and launched in August 2020.

If Zuckerberg believed in Reels’ long-term viability, he would take a real run at TikTok by pouring money into Reels and its creators. Lots and lots of money. Something approaching the kind spent by YouTube, which remains the most lucrative income source for social media celebrities. (Those creators produce content to draw in engaged users. The platforms sell ads to appear with the content—more creators, more content, more users, more potential ad revenue. It’s a virtous cycle.)

Now, here’s as good a time as any for a crash course in creator economics. For this, there’s no better guide than Hank Green, whose YouTube video on the subject recently went viral. His fame is most rooted there on YouTube, where he has nine channels run from his Montana home. His most popular channel is Crash Course (13.1 million subscribers—an enviable YouTube base), to which he posts education videos for kids about subjects like Black Americans in World War II and the Israeli-Palestinian conflict.

Like the savviest social media publishers, Green fully understands that YouTube offers the best avenue for making money. It shares 55% of all ad revenue earned on a video with its creator. “YouTube is good at selling advertisements: It’s been around a long time, and it’s getting better every year,” Green says. On YouTube, he earns around $2 per thousand views. (In all, YouTube distributed nearly $16 billion to creators last year.)

Green sports an expansive mindset, though, and he has accounts on TikTok, Instagram and Facebook, too. TikTok doesn’t come close to paying as well as YouTube: On TikTok, Green earns pennies per every thousand views.

Meta is already beginning to offer some payouts for Reels. Over the last month, Reels has finally amassed enough of an audience for Green’s videos to accumulate 16 million views and earn around 60 cents per thousand views. Many times over TikTok’s but still not enough to get Green to divert any substantial his focus to Reels, which has never managed to replicate TikTok’s zeitgeisty place in pop culture. (Tiktok “has deeper content, something fascinating and weird,” explains Green. Reels, however, is “very surface level. None of it is deeper,” he says.) Another factor weighing on Reels: Meta’s bad reputation. “Facebook has traditionally been the company that has been kind of worst at being a good partner to creators,” he says, citing in particular Facebook’s earlier pivot to long-form video that led to the demise of several promising media startups, like Mic and Mashable.

This is where Zuckerberg could use Meta’s thick profit margin (36%, better even than Alphabet’s) and fat cash pile ($48 billion) to shell out YouTube-style cash to users posting Reels, creating an obvious enticement to prioritize Reels over TikTok. Maybe even Reels over YouTube, which has launched its own TikTok competitor, Shorts.

Now, imagine how someone like Green might get more motivated to think about Meta if Reels’ number crept up to 80 cents or a dollar per thousand views. Or $1.50. Or a YouTube-worthy $2. Or higher still: YouTube earnings can climb over $5, double even for the most popular creators.

Meta has earmarked up to a $1 billion for these checks to creators, which sounds big until you remember the amount of capital Meta has available to it. (And think about the sum YouTube disburses.) Moreover, Meta has set a timeframe for dispensing those funds, saying last July it would continue through December 2022. Setting a timetable indicates that Meta could (will likely?) turn off the financing come next Christmas.

Zuckerberg has demonstrated a willingness to plunk down Everest-size mountains of money over many years for projects he does fully believe in. The most obvious example is the metaverse, the latest Zuckerberg pivot. Meta ran up a $10.1 billion bill on it last year to develop new augmented and virtual reality software and headsets and binge hire engineers. Costs are expected to grow in 2022. And unlike Reels, metaverse spending has no semblance of a time schedule; Wall Street has been told the splurge will continue for the foreseeable future. Overall, Meta’s view on the metaverse seems to be, We’ll spend as much as possible—for as long as it takes—for this to happen.

The same freewheeled mindset doesn’t seem to appply to Reels. But Zuckerberg knows he can’t let TikTok take over the short-form video space unopposed. Meta needs to hang onto the advertising revenue generated by Instagram and Facebook until it can make the metaverse materialize. (Instagram and Facebook, for perspective, generated 98% of Meta’s $118 billion revenue last year; sales of Meta’s VR headset, the Quest 2, accounted for the remaining 2%.) And advertising dollars will increasingly move to short-form video, following users’ increased demand for this type of content over the last several years.

Reality is, Zuckerberg has already admitted he doesn’t see Reels as a long-term solution to his T-i-k-T-o-k problem. If he did, he’d spend more on it and creators like Green than what the metaverse costs him over six weeks.

How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users

WhatsApp assures users that no one can see their messages — but the company has an extensive monitoring operation and regularly shares personal information with prosecutors.

 

Series: The Social Machine

How Facebook Plays by Its Own set of Rules

Clarification, Sept. 8, 2021: A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.

When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model. Acknowledging that “we don’t currently have a strong reputation for building privacy protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”

Zuckerberg’s vision centered on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations. WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the U.S. Senate in 2018, “We don’t see any of the content in WhatsApp.”

 

WhatsApp emphasizes this point so consistently that a flag with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”

Given those sweeping assurances, you might be surprised to learn that WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through millions of private messages, images and videos. They pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

The workers have access to only a subset of WhatsApp messages — those flagged by users and automatically forwarded to the company as possibly abusive. The review is one element in a broader monitoring operation in which the company also reviews material that is not encrypted, including data about the sender and their account.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala ​​Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.

WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments. “WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways. (Two articles this summer noted the existence of WhatsApp’s moderators but focused on their working conditions and pay rather than their effect on users’ privacy. This article is the first to reveal the details and extent of the company’s ability to scrutinize messages and user data — and to examine what the company does with that information.)

Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.

Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice. Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on its users’ privacy, and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)

WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.

Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse. WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.

The tension between privacy and disseminating information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny.

That conundrum has periodically led to moves that anger users, regulators or both. The goal of monetizing the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told European Union regulators was technologically impossible. The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.

WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app where the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.

The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”


 

I. “Content Moderation Associates”

In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.

The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements. Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined comment, referring all questions about content moderation to WhatsApp.)

When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team. Most of the WhatsApp team scattered to work from home during the pandemic. Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets,” organized by subject into “reactive” and “proactive” queues.

Collectively, the workers scrutinize millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.

The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: Do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators.”)

WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.”

Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on comparison of hip bones and pubic hair to a medical index chart. One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”

In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”

WhatsApp’s review system is hampered by impediments, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the U.S. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on site to assess abuse complaints. That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they’re selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult,” the seller can be labeled a forbidden “sexually oriented business.” And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.

The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”

Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.” Still, each case can reveal up to five unencrypted messages, which are then examined by moderators.

The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”

Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponized by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.” Of course, if a user appeals a ban that was prompted by a user report, according to one moderator, it entails having a second moderator examine the user’s content.


 

*£%#£$&@+*&+@@@£#+@&§_$£&£@_§##*$#$§+&+@&&%_$$@@

In public statements and on the company’s websites, Facebook Inc. is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged. They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)

By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria its moderators use to police content, along with articles and videos about “the unrecognized heroes who keep Facebook safe” and announcements on new content-review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in this report.

When dealing with legislators, Facebook Inc. officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation. For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written follow-up questions in Jan. 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement. “Even within an encrypted system,” one response noted, “we will still be able to respond to lawful requests for metadata, including potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”

Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to its head, Cathcart. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behavior in an end-to-end encrypted service,” he said.

During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption. Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content. “There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”

In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.” The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”


 

III. “Deceiving Users” About Personal Privacy

Since the moment Facebook announced plans to buy WhatsApp in 2014, observers wondered how the service, known for its fervent commitment to privacy, would fare inside a corporation known for the opposite. Zuckerberg had become one of the wealthiest people on the planet by using a “surveillance capitalism” approach: collecting and exploiting reams of user data to sell targeted digital ads. Facebook’s relentless pursuit of growth and profits has generated a series of privacy scandals in which it was accused of deceiving customers and regulators.

By contrast, WhatsApp knew little about its users apart from their phone numbers and shared none of that information with third parties. WhatsApp ran no ads, and its co-founders, Jan Koum and Brian Acton, both former Yahoo engineers, were hostile to them. “At every company that sells ads,” they wrote in 2012, “a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packed and shipped out,” adding: “Remember, when advertising is involved you the user are the product.” At WhatsApp, they noted, “your data isn’t even in the picture. We are simply not interested in any of it.”

Zuckerberg publicly vowed in a 2014 keynote speech that he would keep WhatsApp “exactly the same.” He declared, “We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously.”

In April 2016, WhatsApp completed its long-planned adoption of end-to-end encryption, which helped establish the app as a prized communications platform in 180 countries, including many where text messages and phone calls are cost-prohibitive. International dissidents, whistleblowers and journalists also turned to WhatsApp to escape government eavesdropping.

Four months later, however, WhatsApp disclosed it would begin sharing user data with Facebook — precisely what Zuckerberg had said would not happen — a move that cleared the way for an array of future revenue-generating plans. The new WhatsApp terms of service said the app would share information such as users’ phone numbers, profile photos, status messages and IP addresses for the purposes of ad targeting, fighting spam and abuse and gathering metrics. “By connecting your phone number with Facebook’s systems,” WhatsApp explained, “Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”

Such actions were increasingly bringing Facebook into the crosshairs of regulators. In May 2017, European Union antitrust regulators fined the company 110 million euros (about $122 million) for falsely claiming three years earlier that it would be impossible to link the user information between WhatsApp and the Facebook family of apps. The EU concluded that Facebook had “intentionally or negligently” deceived regulators. Facebook insisted its false statements in 2014 were not intentional, but didn’t contest the fine.

By the spring of 2018, the WhatsApp co-founders, now both billionaires, were gone. Acton, in what he later described as an act of “penance” for the “crime” of selling WhatsApp to Facebook, gave $50 million to a foundation backing Signal, a free encrypted messaging app that would emerge as a WhatsApp rival. (Acton’s donor-advised fund has also given money to ProPublica.)

Meanwhile, Facebook was under fire for its security and privacy failures as never before. The pressure culminated in a landmark $5 billion fine by the Federal Trade Commission in July 2019 for violating a previous agreement to protect user privacy. The fine was almost 20 times greater than any previous privacy-related penalty, according to the FTC, and Facebook’s transgressions included “deceiving users about their ability to control the privacy of their personal information.”

The FTC announced that it was ordering Facebook to take steps to protect privacy going forward, including for WhatsApp users: “As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.” Compliance officers would be required to generate a “quarterly privacy review report” and share it with the company and, upon request, the FTC.

Facebook agreed to the FTC’s fine and order. Indeed, the negotiations for that agreement were the backdrop, just four months before that, for Zuckerberg’s announcement of his new commitment to privacy.

By that point, WhatsApp had begun using Accenture and other outside contractors to hire hundreds of content reviewers. But the company was eager not to step on its larger privacy message — or spook its global user base. It said nothing publicly about its hiring of contractors to review content.


 

IV$ “W+ Kill P_op%§ Base@%On$Met§data”

Even as Zuckerberg was touting Facebook Inc.’s new commitment to privacy in 2019, he didn’t mention that his company was apparently sharing more of its WhatsApp users’ metadata than ever with the parent company — and with law enforcement.

To the lay ear, the term “metadata” can sound abstract, a word that evokes the intersection of literary criticism and statistics. To use an old, pre-digital analogy, metadata is the equivalent of what’s written on the outside of an envelope — the names and addresses of the sender and recipient and the postmark reflecting where and when it was mailed — while the “content” is what’s written on the letter sealed inside the envelope. So it is with WhatsApp messages: The content is protected, but the envelope reveals a multitude of telling details (as noted: time stamps, phone numbers and much more).

Those in the information and intelligence fields understand how crucial this information can be. It was metadata, after all, that the National Security Agency was gathering about millions of Americans not suspected of a crime, prompting a global outcry when it was exposed in 2013 by former NSA contractor Edward Snowden. “Metadata absolutely tells you everything about somebody’s life,” former NSA general counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.” In a symposium at Johns Hopkins University in 2014, Gen. Michael Hayden, former director of both the CIA and NSA, went even further: “We kill people based on metadata.”

U.S. law enforcement has used WhatsApp metadata to help put people in jail. ProPublica found more than a dozen instances in which the Justice Department sought court orders for the platform’s metadata since 2017. These represent a fraction of overall requests, known as pen register orders (a phrase borrowed from the technology used to track numbers dialed by landline telephones), as many more are kept from public view by court order. U.S. government requests for data on outgoing and incoming messages from all Facebook platforms increased by 276% from the first half of 2017 to the second half of 2020, according to Facebook Inc. statistics (which don’t break out the numbers by platform). The company’s rate of handing over at least some data in response to such requests has risen from 84% to 95% during that period.

It’s not clear exactly what government investigators have been able to gather from WhatsApp, as the results of those orders, too, are often kept from public view. Internally, WhatsApp calls such requests for information about users “prospective message pairs,” or PMPs. These provide data on a user’s messaging patterns in response to requests from U.S. law enforcement agencies, as well as those in at least three other countries — the United Kingdom, Brazil and India — according to a person familiar with the matter who shared this information on condition of anonymity. Law enforcement requests from other countries might only receive basic subscriber profile information.

WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp. “On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.

Edwards was sentenced on June 3 to six months in prison after pleading guilty to a conspiracy charge and reported to prison last week. Edwards’ attorney declined to comment, as did representatives from the FBI and the Justice Department.

WhatsApp has for years downplayed how much unencrypted information it shares with law enforcement, largely limiting mentions of the practice to boilerplate language buried deep in its terms of service. It does not routinely keep permanent logs of who users are communicating with and how often, but company officials confirmed they do turn on such tracking at their own discretion — even for internal Facebook leak investigations — or in response to law enforcement requests. The company declined to tell ProPublica how frequently it does so.

The privacy page for WhatsApp assures users that they have total control over their own metadata. It says users can “decide if only contacts, everyone, or nobody can see your profile photo” or when they last opened their status updates or when they last opened the app. Regardless of the settings a user chooses, WhatsApp collects and analyzes all of that data — a fact not mentioned anywhere on the page.


 

V. “Opening the Aperture to Encompass Business Objectives”

The conflict between privacy and security on encrypted platforms seems to be only intensifying. Law enforcement and child safety advocates have urged Zuckerberg to abandon his plan to encrypt all of Facebook’s messaging platforms. In June 2020, three Republican senators introduced the “Lawful Access to Encrypted Data Act,” which would require tech companies to assist in providing access to even encrypted content in response to law enforcement warrants. For its part, WhatsApp recently sued the Indian government to block its requirement that encrypted apps provide “traceability” — a method to identify the sender of any message deemed relevant to law enforcement. WhatsApp has fought similar demands in other countries.

Other encrypted platforms take a vastly different approach to monitoring their users than WhatsApp. Signal employs no content moderators, collects far less user and group data, allows no cloud backups and generally rejects the notion that it should be policing user activities. It submits no child exploitation reports to NCMEC.

Apple has touted its commitment to privacy as a selling point. Its iMessage system displays a “report” button only to alert the company to suspected spam, and the company has made just a few hundred annual reports to NCMEC, all of them originating from scanning outgoing email, which is unencrypted.

But Apple recently took a new tack, and appeared to stumble along the way. Amid intensifying pressure from Congress, in August the company announced a complex new system for identifying child-exploitative imagery on users’ iCloud backups. Apple insisted the new system poses no threat to private content, but privacy advocates accused the company of creating a backdoor that potentially allows authoritarian governments to demand broader content searches, which could result in the targeting of dissidents, journalists or other critics of the state. On Sept. 3, Apple announced it would delay implementation of the new system.

Still, it’s Facebook that seems to face the most constant skepticism among major tech platforms. It is using encryption to market itself as privacy-friendly, while saying little about the other ways it collects data, according to Lloyd Richardson, the director of IT at the Canadian Centre for Child Protection. “This whole idea that they’re doing it for personal protection of people is completely ludicrous,” Richardson said. “You’re trusting an app owned and written by Facebook to do exactly what they’re saying. Do you trust that entity to do that?” (On Sept. 2, Irish authorities announced that they are fining WhatsApp 225 million euros, about $267 million, for failing to properly disclose how the company shares user information with other Facebook platforms. WhatsApp is contesting the finding.)

Facebook’s emphasis on promoting WhatsApp as a paragon of privacy is evident in the December marketing document obtained by ProPublica. The “Brand Foundations” presentation says it was the product of a 21-member global team across all of Facebook, involving a half-dozen workshops, quantitative research, “stakeholder interviews” and “endless brainstorms.” Its aim: to offer “an emotional articulation” of WhatsApp’s benefits, “an inspirational toolkit that helps us tell our story,” and a “brand purpose to champion the deep human connection that leads to progress.” The marketing deck identifies a feeling of “closeness” as WhatsApp’s “ownable emotional territory,” saying the app delivers “the closest thing to an in-person conversation.”

WhatsApp should portray itself as “courageous,” according to another slide, because it’s “taking a strong, public stance that is not financially motivated on things we care about,” such as defending encryption and fighting misinformation. But the presentation also speaks of the need to “open the aperture of the brand to encompass our future business objectives. While privacy will remain important, we must accommodate for future innovations.”

WhatsApp is now in the midst of a major drive to make money. It has experienced a rocky start, in part because of broad suspicions of how WhatsApp will balance privacy and profits. An announced plan to begin running ads inside the app didn’t help; it was abandoned in late 2019, just days before it was set to launch. Early this January, WhatsApp unveiled a change in its privacy policy — accompanied by a one-month deadline to accept the policy or get cut off from the app. The move sparked a revolt, impelling tens of millions of users to flee to rivals such as Signal and Telegram.

The policy change focused on how messages and data would be handled when users communicate with a business in the ever-expanding array of WhatsApp Business offerings. Companies now could store their chats with users and use information about users for marketing purposes, including targeting them with ads on Facebook or Instagram.

Elon Musk tweeted “Use Signal,” and WhatsApp users rebelled. Facebook delayed for three months the requirement for users to approve the policy update. In the meantime, it struggled to convince users that the change would have no effect on the privacy protections for their personal communications, with a slightly modified version of its usual assurance: “WhatsApp cannot see your personal messages or hear your calls and neither can Facebook.” Just as when the company first bought WhatsApp years before, the message was the same: Trust us.

Correction

Sept. 10, 2021: This story originally stated incorrectly that Apple’s iMessage system has no “report” button. The iMessage system does have a report button, but only for suspected spam (not for suspected abusive content).

https://www.propublica.org/article/how-facebook-undermines-privacy-protections-for-its-2-billion-whatsapp-users