In China, parents are buying smartwatches for children as young as 5, connecting them to a digital world that blends socializing with fierce competition.
At what age should a kid ideally get a smartwatch? In China, parents are buying them for children as young as five. Adults want to be able to call their kids and track their location down to a specific building floor. But that’s not why children are clamoring for the devices, specifically ones made by a company called Xiaotiancai, which translates to Little Genius in English.
The watches, which launched in 2015 and cost up to $330, are a portal into an elaborate world that blends social engagement with relentless competition. Kids can use the watches to buy snacks at local shops, chat and share videos with friends, play games, and, sure, stay in touch with their families. But the main activity is accumulating as many “likes” as possible on their watch’s profile page. On the extreme end, Chinese media outlets have reported on kids who buy bots to juice their numbers, hack the watches to dox their enemies, and sometimes even find romantic partners. According to tech research firm Counterpoint Research, Little Genius accounts for nearly half of global market share for kids’ smartwatches.
Status Games
Over the past decade, Little Genius has found ways to gamify nearly every measurable activity in the life of a child—playing ping pong, posting updates, the list goes on. Earning more experience points boosts kids to a higher level, which increases the number of likes they can send to friends. It’s a game of reciprocity—you send me likes, and I’ll return the favor. One 18-year-old recently told Chinese media that she had struggled to make friends until four years ago when a classmate invited her into a Little Genius social circle. She racked up more than one million likes and became a mini-celebrity on the platform. She said she met all three of her boyfriends through the watch, two of whom she broke up with because they asked her to send erotic photos.
High like counts have become a sort of status symbol. Some enthusiastic Little Genius users have taken to RedNote (or Xiaohongshu), a prominent Chinese social media app, to hunt for new friends so as to collect more likes and badges. As video tutorials on the app explain, low-level users can only give out five likes a day to any one friend; higher-ranking users can give out 20. Because the watch limits its owner to a total of 150 friends, kids are therefore incentivized to maximize their number of high-level friends. Lower-status kids, in turn, are compelled to engage in competitive antics so they don’t get dumped by higher-ranking friends.
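Taken together, the reported caps imply a simple arithmetic of status. Here is a rough sketch in Python, using only the figures above (five likes per day per friend at low levels, 20 at high levels, a 150-friend cap); the numbers are from the video tutorials, and the calculation itself is illustrative, not from the company:

```python
FRIEND_CAP = 150  # maximum friends per watch, per the tutorials

def max_likes_per_day(likes_per_friend: int, friends: int = FRIEND_CAP) -> int:
    """Upper bound on likes a user can receive daily if every friend maxes out."""
    return likes_per_friend * friends

low = max_likes_per_day(5)    # 750 per day if every friend is low-level
high = max_likes_per_day(20)  # 3,000 per day if every friend is high-level

# Even at the theoretical maximum, a million likes takes most of a year:
days_to_million = 1_000_000 / high  # ~333 days
```

At those rates, a seven-figure like count represents years of reciprocal effort, which helps explain the market for bots and account-sitting services described below.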
“They feel this sense of camaraderie and community,” says Ivy Yang, founder of New York-based consultancy Wavelet Strategy, who has studied Little Genius. “They have a whole world.” But Yang has reservations about the way the watch seems to commodify friendship. “It’s just very transactional,” she says.
Engagement Hacks
On RedNote, people post videos on circumventing Little Genius’s daily like limits, with titles such as “First in the world! Unlimited likes on Little Genius new homepage!” The competitive pressure has also spawned businesses that promise to help kids boost their metrics. Some high-ranking users sell their old accounts. Others sell bots that send likes or offer to help keep accounts active while the owner of a watch is in class.
Get enough likes—say, 800,000—and you become a “big shot” in the Little Genius community. Last month, a Chinese media outlet reported that a 17-year-old with more than 2 million likes used her online clout to sell bots and old accounts, earning her more than $8,000 in a year. Though she enjoyed the fame that the smartwatch brought her, she said she left the platform after getting into fights with other Little Genius “big shots” and facing cyberbullying.
In September, a Beijing-based organization called China’s Child Safety Emergency Response warned parents that children with Little Genius watches were at risk of developing dangerous relationships or falling victim to scams. Officials have also raised alarms about these hidden corners of the Little Genius universe. The Chinese government has begun drafting national safety standards for children’s watches, following growing concerns over internet addiction, content unfit for children, and overspending via the watch payment function. The company did not respond to requests for comment.
I talked to one parent who had been reluctant to buy the watch. Lin Hong, a 48-year-old mom in Beijing, worried that her nearsighted daughter, Yuanyuan, would become obsessed with its tiny screen. But once Yuanyuan turned 8, Lin relented and splurged on the device. Lin’s fears quickly materialized.
Yuanyuan loved starting her day by customizing her avatar’s appearance. She regularly sent likes to her friends and made an effort to run and jump rope to earn more points. “She would look for her smartwatch first thing every morning,” Lin said. “It was like adults, actually, they’re all a bit addicted.”
To curb her daughter’s obsession, Lin limited Yuanyuan’s time on the watch. Now she’s noticing that her daughter, who turns 9 soon, chafes at her mother’s digital supervision. “If I call her three times, she’ll finally pick up to say, ‘I’m still out, stop calling. I’m not done playing yet,’ and hang up,” Lin said. “If it’s like this, she probably won’t want to keep wearing the watch for much longer.”
Facebook has just turned 20 years old, and observers say social media has shifted from a place of exchange to a place of passive consumption. Are they right?
“Sorry, but Instagram sucks now,” read the headline of a recent article in Vice magazine. The author complains that the app shows too much advertising and too much irrelevant stuff, like healthy recipes. “If it continues like this, Instagram will soon end up with the other social media zombies.” By that she means platforms like Facebook, which is no longer what it once was.
Once upon a time, you could watch your friends explore Europe by Interrail, dance to your favorite song in the club, paint a picture with watercolors, or adore a new partner. You always stayed in touch via direct message, whether with schoolmates or a holiday fling. Now, instead, the timeline fills with short videos, pictures, and memes from accounts you don’t actually follow. At best, an older relative posts a photo of her garden or a tacky calendar motto.
On both platforms, this is the result of the new algorithm Meta introduced around two years ago. It ensures that users are no longer shown primarily what they want to see, but what Instagram and Facebook believe is relevant to them.
This upsets quite a few people. A journalist at the online magazine Krautreporter says he mainly sees advertising and posts from influencers. “I often wonder whether real people still post on social networks,” he writes. In another article he declares: “Social media is dying out.” Is it really over?
A similar sentiment could recently be read in The Economist, which published an article titled “The end of the social network” to mark Facebook’s 20th anniversary in February. The cover image shows an emoji with an offended expression about to sink into the sea. People are posting less, the text says: the share of Americans who like to document their lives online has fallen from 40 percent to 28 percent since 2020. Can it really be that the era of social networks is over?
Anyone who reads the entire Economist article will quickly realize that it does not declare social media dead. Rather, it describes a transformation: the social aspect of social media is being lost. Originally these platforms combined personal and mass-media communication; now the two functions are gradually separating again. Status updates from friends have given way to videos from strangers, and personal posts migrate from Facebook and company to messaging services such as WhatsApp or Telegram.
The new logic
The Neue Zürcher Zeitung (NZZ) argues that all of this is the logical consequence of a development that has been underway for some time. In Facebook’s early days, users saw every post regardless of quality. It didn’t matter whether the posts actually interested them; the photo of a lunch, the selfie in the park, the crude thoughts about world events all simply appeared, in chronological order. Whoever had the most followers had the largest audience.
In 2009, Facebook began sorting posts by popularity: whatever was liked most was displayed at the top. Since then, the “closest circle” of friends and family has become less and less important, according to the NZZ analysis.
More recently, Meta has also increasingly oriented itself toward TikTok. The Chinese app is especially popular among younger people. Users upload short videos set to music, and anyone on TikTok constantly sees content from strangers’ accounts. Digital expert Ingrid Brodnig isn’t surprised: “Facebook and Instagram have probably noticed that this helps keep people hooked.” From the mass of posts, an artificial intelligence automatically selects those that the person in front of the screen might like. At the same time, fewer posts come from friends.
Let yourself be showered
Brodnig has also noticed what other observers describe: “The social aspect of social media has diminished.”
The trend, rather, is to let yourself be showered with content. “TikTok, for example, works more like RTL 2 than like a classic social medium built on interpersonal exchange.” On TikTok, a minority publishes the majority of posts. To put it another way: a few perform, many others watch them, unlike on the earlier social networks, where everyone revealed something of themselves.
The waning desire to expose oneself online is not limited to the US. In Austria, too, young people are currently using social media less. The disillusionment began years ago, when “people lost their jobs, for example, because they posted something on Facebook while drunk,” says Brodnig. False reports, hate comments, and ridicule did the rest. “If you get grief for posting your lunch, you might think: then I won’t post, or I’ll post less often.”
"If you get angry because you post your lunch, you might think: Then I won't post or I'll post less often." (Ingrid Brodnig, digital expert)
So who provides all the content? Influencers, politicians, and people “with a great need to communicate,” for example about a political issue, says Brodnig. But also media outlets and companies that want to advertise their products. Most others have grown tired to some degree and prefer to sit back and consume. “The funny thing is that it’s not that noticeable, because the feeds are of course designed so that we see the active ones. We don’t see that our friend Anna hasn’t posted anything for seven weeks; Facebook would gain nothing from showing us that.”
Shifting to private groups
So if social networks exist primarily for consumption, and the exchange shifts to private chats and group chats, what are the consequences?
One problem could be that messenger apps such as WhatsApp and Telegram are not moderated, which poses obvious risks. In India, politicians used WhatsApp to spread lies that, on a platform like Facebook, might quickly have been deleted, according to The Economist. “Especially during the pandemic, you could see that a lot of misinformation and conspiracy myths were being spread via messenger apps,” says Brodnig. Moreover, one should not assume that things are always peaceful and mindful in chat groups; debates escalate there too.
Another possible consequence is that “social media will become a main entertainment channel,” says Brodnig. That is a problem, for example, for protest movements, for which social media is an important place for networking. The expert also fears that an already very emotional public debate will become even more emotionalized: “TikTok is a system in which upsetting videos may have a good chance, because people stick around longer.”
Not all bad
But not all of this is bad. Brodnig suspects that people today think more consciously about what they do and don’t want to reveal about their private lives. “We learned not to have to post everything.” Enjoying the moment seems to be back in vogue, and the channels are now viewed more critically. “When social media was new and fresh, there was a lot of enthusiasm. People thought: Wow, I can connect with my high school friend who I haven’t seen in ten years, or message friends in Argentina.” Now many have recognized the dark side. “I believe the golden age of social media is over.”
Some, by the way, believe the future of the platforms lies in their past: unless they return to what they once were, the argument goes, they will surely perish. But is that right? Instagram’s user numbers have grown steadily and are expected to keep growing. So even though people are tired of posting, even though they’re annoyed, they stick with it.
When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model. Acknowledging that “we don’t currently have a strong reputation for building privacy protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”
Zuckerberg’s vision centered on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations. WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the U.S. Senate in 2018, “We don’t see any of the content in WhatsApp.”
WhatsApp emphasizes this point so consistently that a flag with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
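For readers unfamiliar with the mechanics, here is a minimal sketch of the public-key idea behind end-to-end encryption, in Python with the PyNaCl library. It illustrates the concept only; WhatsApp actually uses the more elaborate Signal protocol, and nothing below comes from its code:

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at noon")

# A relay server only ever handles ciphertext; without a private key,
# it cannot recover the plaintext. Only Bob's device can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at noon"
```

The point of the design is that the operator in the middle holds no key, which is exactly why the company can claim it sees nothing, and why what follows is surprising.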
Given those sweeping assurances, you might be surprised to learn that WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through millions of private messages, images and videos. They pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
The workers have access to only a subset of WhatsApp messages — those flagged by users and automatically forwarded to the company as possibly abusive. The review is one element in a broader monitoring operation in which the company also reviews material that is not encrypted, including data about the sender and their account.
Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.
WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments. “WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”
WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.
Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways. (Two articles this summer noted the existence of WhatsApp’s moderators but focused on their working conditions and pay rather than their effect on users’ privacy. This article is the first to reveal the details and extent of the company’s ability to scrutinize messages and user data — and to examine what the company does with that information.)
Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.
Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice. Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on their users’ privacy, and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)
WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.
Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse. WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.
The tension between privacy and disseminating information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny.
That conundrum has periodically led to moves that anger users, regulators or both. The goal of monetizing the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told European Union regulators was technologically impossible. The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.
WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app where the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.
The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”
I. “Content Moderation Associates”
In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.
The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements. Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined comment, referring all questions about content moderation to WhatsApp.)
When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team. Most of the WhatsApp team scattered to work from home during the pandemic. Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets,” organized by subject into “reactive” and “proactive” queues.
Collectively, the workers scrutinize millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.
Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
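Crucially, the reported messages are forwarded by the reporting user’s device, which already holds them in decrypted form. A hypothetical sketch of that client-side bundling logic follows; the names, types, and structure are illustrative, based only on the description above, not WhatsApp’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted locally; E2E protects messages in transit only

def build_report(chat_history: list[Message], reported_index: int) -> list[Message]:
    # Bundle the allegedly offending message plus up to four preceding ones,
    # matching the five-message window described by former engineers.
    start = max(0, reported_index - 4)
    return chat_history[start:reported_index + 1]

# The bundle is then sent to the company as a fresh, readable submission;
# the encryption of the original conversation is never broken in transit.
```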
Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
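A toy illustration of how such a “proactive” signal might work, using invented thresholds; the real systems and their features are not public:

```python
def flag_for_review(account_age_hours: float,
                    msgs_last_hour: int,
                    distinct_recipients: int) -> bool:
    # Pattern described in the article: a brand-new account rapidly sending
    # a high volume of chats to many recipients looks like spam.
    # All thresholds here are invented for illustration.
    return (account_age_hours < 24
            and msgs_last_hour > 100
            and distinct_recipients > 50)
```

Note that nothing in this check touches message content; it runs entirely on the unencrypted account data listed above, which is why encryption is no obstacle to it.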
The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: Do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators.”)
WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.”
Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on comparison of hip bones and pubic hair to a medical index chart. One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”
In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”
WhatsApp’s review system is hampered by impediments, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the U.S. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on site to assess abuse complaints. That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”
The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they’re selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult,” the seller can be labeled a forbidden “sexually oriented business.” And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.
The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”
Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.” Still, each case can reveal up to five unencrypted messages, which are then examined by moderators.
The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”
Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponized by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.” If a user appeals a ban that was prompted by a user report, according to one moderator, the appeal entails having a second moderator examine the user’s content.
II. “Industry Leaders” in Detecting Bad Behavior
In public statements and on the company’s websites, Facebook Inc. is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged. They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)
By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria its moderators use to police content, along with articles and videos about “the unrecognized heroes who keep Facebook safe” and announcements on new content-review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in this report.
When dealing with legislators, Facebook Inc. officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation. For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written follow-up questions in Jan. 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement. “Even within an encrypted system,” one response noted, “we will still be able to respond to lawful requests for metadata, including potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”
Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to its head, Cathcart. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behavior in an end-to-end encrypted service,” he said.
During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption. Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content. “There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”
In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.” The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”
III. “Deceiving Users” About Personal Privacy
Since the moment Facebook announced plans to buy WhatsApp in 2014, observers wondered how the service, known for its fervent commitment to privacy, would fare inside a corporation known for the opposite. Zuckerberg had become one of the wealthiest people on the planet by using a “surveillance capitalism” approach: collecting and exploiting reams of user data to sell targeted digital ads. Facebook’s relentless pursuit of growth and profits has generated a series of privacy scandals in which it was accused of deceiving customers and regulators.
By contrast, WhatsApp knew little about its users apart from their phone numbers and shared none of that information with third parties. WhatsApp ran no ads, and its co-founders, Jan Koum and Brian Acton, both former Yahoo engineers, were hostile to them. “At every company that sells ads,” they wrote in 2012, “a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packed and shipped out,” adding: “Remember, when advertising is involved you the user are the product.” At WhatsApp, they noted, “your data isn’t even in the picture. We are simply not interested in any of it.”
Zuckerberg publicly vowed in a 2014 keynote speech that he would keep WhatsApp “exactly the same.” He declared, “We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously.”
In April 2016, WhatsApp completed its long-planned adoption of end-to-end encryption, which helped establish the app as a prized communications platform in 180 countries, including many where text messages and phone calls are cost-prohibitive. International dissidents, whistleblowers and journalists also turned to WhatsApp to escape government eavesdropping.
Four months later, however, WhatsApp disclosed it would begin sharing user data with Facebook — precisely what Zuckerberg had said would not happen — a move that cleared the way for an array of future revenue-generating plans. The new WhatsApp terms of service said the app would share information such as users’ phone numbers, profile photos, status messages and IP addresses for the purposes of ad targeting, fighting spam and abuse and gathering metrics. “By connecting your phone number with Facebook’s systems,” WhatsApp explained, “Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”
Such actions were increasingly bringing Facebook into the crosshairs of regulators. In May 2017, European Union antitrust regulators fined the company 110 million euros (about $122 million) for falsely claiming three years earlier that it would be impossible to link the user information between WhatsApp and the Facebook family of apps. The EU concluded that Facebook had “intentionally or negligently” deceived regulators. Facebook insisted its false statements in 2014 were not intentional, but didn’t contest the fine.
By the spring of 2018, the WhatsApp co-founders, now both billionaires, were gone. Acton, in what he later described as an act of “penance” for the “crime” of selling WhatsApp to Facebook, gave $50 million to a foundation backing Signal, a free encrypted messaging app that would emerge as a WhatsApp rival. (Acton’s donor-advised fund has also given money to ProPublica.)
Meanwhile, Facebook was under fire for its security and privacy failures as never before. The pressure culminated in a landmark $5 billion fine by the Federal Trade Commission in July 2019 for violating a previous agreement to protect user privacy. The fine was almost 20 times greater than any previous privacy-related penalty, according to the FTC, and Facebook’s transgressions included “deceiving users about their ability to control the privacy of their personal information.”
The FTC announced that it was ordering Facebook to take steps to protect privacy going forward, including for WhatsApp users: “As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.” Compliance officers would be required to generate a “quarterly privacy review report” and share it with the company and, upon request, the FTC.
Facebook agreed to the FTC’s fine and order. Indeed, the negotiations for that agreement were the backdrop, just four months before that, for Zuckerberg’s announcement of his new commitment to privacy.
By that point, WhatsApp had begun using Accenture and other outside contractors to hire hundreds of content reviewers. But the company was eager not to step on its larger privacy message — or spook its global user base. It said nothing publicly about its hiring of contractors to review content.
IV. “We Kill People Based On Metadata”
Even as Zuckerberg was touting Facebook Inc.’s new commitment to privacy in 2019, he didn’t mention that his company was apparently sharing more of its WhatsApp users’ metadata than ever with the parent company — and with law enforcement.
To the lay ear, the term “metadata” can sound abstract, a word that evokes the intersection of literary criticism and statistics. To use an old, pre-digital analogy, metadata is the equivalent of what’s written on the outside of an envelope — the names and addresses of the sender and recipient and the postmark reflecting where and when it was mailed — while the “content” is what’s written on the letter sealed inside the envelope. So it is with WhatsApp messages: The content is protected, but the envelope reveals a multitude of telling details (as noted: time stamps, phone numbers and much more).
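The envelope analogy maps naturally onto a data structure. Here is a hedged sketch of what a message record might look like from the operator’s side; the fields are drawn from those listed in this article, while the layout itself is illustrative rather than WhatsApp’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class MessageEnvelope:
    # "Outside the envelope": metadata, visible to the operator unencrypted.
    sender_number: str
    recipient_number: str
    timestamp: float
    sender_ip: str
    device_id: str
    # "Inside the envelope": the content, end-to-end encrypted and opaque.
    ciphertext: bytes
```

Everything above the ciphertext field can be logged, aggregated, and handed over without touching the encryption at all.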
Those in the information and intelligence fields understand how crucial this information can be. It was metadata, after all, that the National Security Agency was gathering about millions of Americans not suspected of a crime, prompting a global outcry when it was exposed in 2013 by former NSA contractor Edward Snowden. “Metadata absolutely tells you everything about somebody’s life,” former NSA general counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.” In a symposium at Johns Hopkins University in 2014, Gen. Michael Hayden, former director of both the CIA and NSA, went even further: “We kill people based on metadata.”
U.S. law enforcement has used WhatsApp metadata to help put people in jail. ProPublica found more than a dozen instances in which the Justice Department sought court orders for the platform’s metadata since 2017. These represent a fraction of overall requests, known as pen register orders (a phrase borrowed from the technology used to track numbers dialed by landline telephones), as many more are kept from public view by court order. U.S. government requests for data on outgoing and incoming messages from all Facebook platforms increased by 276% from the first half of 2017 to the second half of 2020, according to Facebook Inc. statistics (which don’t break out the numbers by platform). The company’s rate of handing over at least some data in response to such requests has risen from 84% to 95% during that period.
It’s not clear exactly what government investigators have been able to gather from WhatsApp, as the results of those orders, too, are often kept from public view. Internally, WhatsApp calls such requests for information about users “prospective message pairs,” or PMPs. These provide data on a user’s messaging patterns in response to requests from U.S. law enforcement agencies, as well as those in at least three other countries — the United Kingdom, Brazil and India — according to a person familiar with the matter who shared this information on condition of anonymity. Law enforcement requests from other countries might only receive basic subscriber profile information.
WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp. “On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.
Edwards was sentenced on June 3 to six months in prison after pleading guilty to a conspiracy charge and reported to prison last week. Edwards’ attorney declined to comment, as did representatives from the FBI and the Justice Department.
WhatsApp has for years downplayed how much unencrypted information it shares with law enforcement, largely limiting mentions of the practice to boilerplate language buried deep in its terms of service. It does not routinely keep permanent logs of who users are communicating with and how often, but company officials confirmed they do turn on such tracking at their own discretion — even for internal Facebook leak investigations — or in response to law enforcement requests. The company declined to tell ProPublica how frequently it does so.
The privacy page for WhatsApp assures users that they have total control over their own metadata. It says users can “decide if only contacts, everyone, or nobody can see your profile photo” or when they last opened their status updates or when they last opened the app. Regardless of the settings a user chooses, WhatsApp collects and analyzes all of that data — a fact not mentioned anywhere on the page.
V. “Opening the Aperture to Encompass Business Objectives”
The conflict between privacy and security on encrypted platforms seems to be only intensifying. Law enforcement and child safety advocates have urged Zuckerberg to abandon his plan to encrypt all of Facebook’s messaging platforms. In June 2020, three Republican senators introduced the “Lawful Access to Encrypted Data Act,” which would require tech companies to assist in providing access to even encrypted content in response to law enforcement warrants. For its part, WhatsApp recently sued the Indian government to block its requirement that encrypted apps provide “traceability” — a method to identify the sender of any message deemed relevant to law enforcement. WhatsApp has fought similar demands in other countries.
Other encrypted platforms take a vastly different approach to monitoring their users than WhatsApp. Signal employs no content moderators, collects far less user and group data, allows no cloud backups and generally rejects the notion that it should be policing user activities. It submits no child exploitation reports to NCMEC.
Apple has touted its commitment to privacy as a selling point. Its iMessage system displays a “report” button only to alert the company to suspected spam, and the company has made just a few hundred annual reports to NCMEC, all of them originating from scanning outgoing email, which is unencrypted.
But Apple recently took a new tack, and appeared to stumble along the way. Amid intensifying pressure from Congress, in August the company announced a complex new system for identifying child-exploitative imagery on users’ iCloud backups. Apple insisted the new system poses no threat to private content, but privacy advocates accused the company of creating a backdoor that potentially allows authoritarian governments to demand broader content searches, which could result in the targeting of dissidents, journalists or other critics of the state. On Sept. 3, Apple announced it would delay implementation of the new system.
Still, it’s Facebook that seems to face the most constant skepticism among major tech platforms. It is using encryption to market itself as privacy-friendly, while saying little about the other ways it collects data, according to Lloyd Richardson, the director of IT at the Canadian Centre for Child Protection. “This whole idea that they’re doing it for personal protection of people is completely ludicrous,” Richardson said. “You’re trusting an app owned and written by Facebook to do exactly what they’re saying. Do you trust that entity to do that?” (On Sept. 2, Irish authorities announced that they are fining WhatsApp 225 million euros, about $267 million, for failing to properly disclose how the company shares user information with other Facebook platforms. WhatsApp is contesting the finding.)
Facebook’s emphasis on promoting WhatsApp as a paragon of privacy is evident in the December marketing document obtained by ProPublica. The “Brand Foundations” presentation says it was the product of a 21-member global team across all of Facebook, involving a half-dozen workshops, quantitative research, “stakeholder interviews” and “endless brainstorms.” Its aim: to offer “an emotional articulation” of WhatsApp’s benefits, “an inspirational toolkit that helps us tell our story,” and a “brand purpose to champion the deep human connection that leads to progress.” The marketing deck identifies a feeling of “closeness” as WhatsApp’s “ownable emotional territory,” saying the app delivers “the closest thing to an in-person conversation.”
WhatsApp should portray itself as “courageous,” according to another slide, because it’s “taking a strong, public stance that is not financially motivated on things we care about,” such as defending encryption and fighting misinformation. But the presentation also speaks of the need to “open the aperture of the brand to encompass our future business objectives. While privacy will remain important, we must accommodate for future innovations.”
WhatsApp is now in the midst of a major drive to make money. It has experienced a rocky start, in part because of broad suspicions of how WhatsApp will balance privacy and profits. An announced plan to begin running ads inside the app didn’t help; it was abandoned in late 2019, just days before it was set to launch. Early this January, WhatsApp unveiled a change in its privacy policy — accompanied by a one-month deadline to accept the policy or get cut off from the app. The move sparked a revolt, impelling tens of millions of users to flee to rivals such as Signal and Telegram.
The policy change focused on how messages and data would be handled when users communicate with a business in the ever-expanding array of WhatsApp Business offerings. Companies now could store their chats with users and use information about users for marketing purposes, including targeting them with ads on Facebook or Instagram.
Elon Musk tweeted “Use Signal,” and WhatsApp users rebelled. Facebook delayed for three months the requirement for users to approve the policy update. In the meantime, it struggled to convince users that the change would have no effect on the privacy protections for their personal communications, with a slightly modified version of its usual assurance: “WhatsApp cannot see your personal messages or hear your calls and neither can Facebook.” Just as when the company first bought WhatsApp years before, the message was the same: Trust us.
A report from the United States is currently making the rounds: the investigative outlet ProPublica has devoted a lengthy article to privacy at WhatsApp and concludes that parent company Facebook is undermining the privacy of its two billion users. As accurate as that conclusion is, the framing chosen by the authors, and by many German media outlets that picked up the story superficially, is problematic.
The main part of the article describes how Facebook employs an army of content moderators to review reported content from WhatsApp chats. That is not news, but ProPublica is able to report in more detail than before on how this work is done. The authors contrast the fact that potentially any WhatsApp message can be read by the company’s moderators with the messenger’s privacy promise: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
However, and this is where it gets problematic, the authors then adopt a framing that presents this content moderation (which WhatsApp refuses to call by that name) as a weakening of end-to-end encryption. One ProPublica author even called the moderation a “backdoor,” a term that ordinarily means a deliberately built-in way around encryption. Various security experts, among them Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, have criticized the coverage for this reason.
The encryption does what it is supposed to
So where is the problem? This much is clear: Mark Zuckerberg’s 2018 assurance that his company cannot read any communication content from WhatsApp chats is misleading. Every message, image, and video that chat participants report lands with WhatsApp and its contractors for review. According to ProPublica, around 1,000 people in Austin, Dublin, and Singapore work around the clock to screen reported content. Because the company needs its privacy promise for marketing, WhatsApp hides this information from its users.
Also clear: like every form of content moderation, this brings considerable problems with it. Drawing on conversations with various sources, the authors show, for instance, that moderators have little time for their weighty decisions and must work with guidelines that are sometimes ambiguous. As with moderation for Facebook and Instagram, they are assisted by an automated system that sometimes makes faulty suggestions. As a result, content that should not be blocked keeps being blocked, such as harmless photos or satire. WhatsApp has no proper appeals mechanism, and it is to the article’s credit that it brings these difficulties to light.
These problems, however, are not caused by deficient end-to-end encryption of WhatsApp messages. Technically, the encryption continues to work well. Messages are readable, at first, only on the devices of the people in the conversation (provided those devices have not been compromised by criminal or state hackers). It is the users reporting content from chats who forward it to WhatsApp. Anyone can do that, and it is not an encryption problem.
The real danger lies elsewhere
The option to report abusive content has existed on WhatsApp for quite some time. The reporting system is meant to help when, for example, content inciting hatred is shared, ex-partners are threatened, or groups call for violence against minorities. It is an intrusion into private communication, but one can argue that, weighed against the dangers, it is justified. Naturally, WhatsApp would be obliged to inform its users better about how the reporting system works and about the fact that their messages can be forwarded to moderators with a few taps.
The greater danger to privacy on WhatsApp, however, comes from somewhere else: the metadata, which reveals roughly as much about people as the content of their conversations. It includes the identities of sender and recipient, their phone numbers and linked Facebook accounts, profile photos, status messages, and even the phone’s battery level. It also covers communication behavior: Who communicates with whom? Who uses the app how often, and for how long?
How WhatsApp handed over a whistleblower
WhatsApp collects this data on a grand scale because it can be turned into money. The original ProPublica report does address this aspect; in many German news items it unfortunately gets lost. Indeed, the US outlet even recounts the case of a whistleblower who went to prison because WhatsApp passed her metadata to the FBI. Natalie Edwards worked at the US Treasury Department and passed information about suspicious transactions to BuzzFeed News. She was caught and convicted in part because prosecutors could prove she was in frequent WhatsApp contact with the BuzzFeed reporter.
According to the report, WhatsApp regularly hands such metadata to investigating authorities in the US. The same is likely true in Germany and Europe. What’s more, it is not only state agencies that receive this revealing information, but also Facebook, which uses it to refine users’ data profiles and, in much of the world, to target advertising more precisely. When the data giant bought the messenger in 2014, it promised the European competition authority that such linking was technically impossible. A brazen lie, for which the company had to pay a fine of more than 100 million euros.
Constantly posting content on social media can erode your privacy—and sense of self.
To be online is to be constantly exposed. While it may seem normal, it’s a level of exposure we’ve never dealt with before as human beings. We’re posting on Twitter, and people we’ve never met are responding with their thoughts and criticisms. People are looking at your latest Instagram selfie. They’re literally swiping on your face. Messages are piling up. It can sometimes feel like the whole world has its eyes on you.
Being observed by so many people appears to have significant psychological effects. There are, of course, good things about this ability to connect with others. It was crucial during the height of the pandemic when we couldn’t be close to our loved ones, for example. However, experts say there are also numerous downsides, and these may be more complex and persistent than we realize.
Studies have found that high levels of social media use are connected with an increased risk of symptoms of anxiety and depression. There appears to be substantial evidence connecting people’s mental health and their online habits. Furthermore, many psychologists believe people may be dealing with psychological effects that are pervasive but not always obvious.
“What we’re finding is people are spending way more time on screens than previously reported or than they believe they are,” says Larry Rosen, professor emeritus of psychology at California State University, Dominguez Hills. “It’s become somewhat of an epidemic.”
Rosen has been studying the psychological effects of technology since 1984, and he says he’s watched things “spiral out of control.” He says people are receiving dozens of notifications every day and that they often feel they can’t escape their online lives.
“Even when you’re not on the screens, the screens are in your head,” Rosen says.
One value of privacy is that it gives us space to operate without judgment. When we’re using social media, there are often a lot of strangers viewing our content, liking it, commenting on it, and sharing it with their own communities. Any time we post something online, thus exposing a part of who we are, we don’t fully know how we’re being received in the virtual world. Fallon Goodman, an assistant professor of psychology at George Washington University, says not knowing what kind of impression you’re making online can cause stress and anxiety.
“When you post a picture, the only real data you get are people’s likes and comments. That’s not necessarily a true indication of what the world feels about your picture or your post,” Goodman says. “Now you’ve put yourself out there—in a semi-permanent way—and you have limited information about how that was received, so you have limited information about the evaluations people are making about you.”
Anna Lembke, a professor of psychiatric and behavioral sciences at Stanford University, says we construct our identities through how we’re seen by others. Much of that identity is now formed on the internet, and that can be difficult to grapple with.
“This virtual identity is a composition of all of these online interactions that we have. It is a very vulnerable identity because it exists in cyberspace. In a weird kind of way we don’t have control over it,” Lembke says. “We’re very exposed.”
Without the ability to find out how their identity is ricocheting around the virtual world, people often feel a fight-or-flight response when they’ve been online for many hours—and even after they’ve logged off.
“It’s kind of an adapted hyper-vigilance. As soon as you send something out into the virtual world, you’re sort of sitting on pins and needles waiting for a response,” Lembke says. “That alone—that kind of expectancy—is a state of hyperarousal. How will people respond to this? When will they respond? What will they say?”
It would be one thing if only you saw any negative reactions, Lembke says, but they’re often available for everyone to see. She says this exacerbates feelings of shame and self-loathing that are already “endemic” in the modern world.
We are social creatures, and our brains evolved to form communities, communicate with each other, and work together. We have not evolved to expose ourselves to the judgment of the whole world on a daily basis. These things affect everyone differently, but it’s clear many people regularly feel overwhelmed by this exposure level.
If we’re not careful, our online lives can become a source of chronic stress that subtly seeps into everything. Everyone needs some privacy, but we often don’t provide it for ourselves and end up feeling like we’re constantly battling invisible enemies.
There are things you can do for yourself, however. You can turn off your notifications for social media apps, reduce how much time you spend on them, limit when you allow yourself to use them, and more. Goodman says it sometimes helps to keep your phone in the other room so you’re not so easily tempted to pick it up.
Lembke says we need to change how we think about social media and internet use as a society. She calls it a “collective” problem, not just an individual one.
“We need to come up with a kind of cultural etiquette around what appropriate and healthy consumption is, just like we have for other consumptive problems,” Lembke says. “We have nonsmoking areas. We don’t eat ice cream for breakfast. We have all kinds of laws around who can buy and consume alcohol, who can go into a casino. We need guardrails for these digital products, especially for minors.”
By Andrew Hutchinson, Content and Social Media Manager
Have you found yourself using Instagram way less of late? The once trendsetting social platform seems to have lost its luster, in large part due to Instagram’s insistence on pumping more content from accounts that you don’t follow into your main IG feed. The ‘inspiration’ for that approach is TikTok, which has seen great success by focusing on content, as opposed to creators, with the app opening to a ‘For You’ feed of algorithmically-selected clips, based on your viewing habits. Instagram, as usual, saw that as an opportunity, and it’s since been working to negate your direct input – i.e. the accounts that you’ve chosen to follow – by showing you more and more stuff that it thinks you’ll like. Which is annoying, and personally, I don’t find Instagram anywhere near as engaging as it once was.
And it seems many other users agree – according to a new report from The Wall Street Journal, Instagram engagement is declining, with Reels, in particular, seeing a significant drop-off in user engagement of late. As reported by WSJ, TikTok users are spending over 10x as many hours consuming content in that app as Instagram users currently spend viewing Reels. According to a leaked internal report, Reels engagement is also in decline, dropping 13.6% in recent months – while ‘most Reels users have no engagement whatsoever.’ Meta has lightly disputed the claims, stating that the usage data doesn’t provide the full picture, though it declined to add any more context – its usual approach when it can’t dispel a claim with data of its own. Take, for example, total time spent in its apps. Back in 2016, as part of its regular performance reporting, Meta noted that people were spending more than 50 minutes per day, on average, using Facebook, Instagram and Messenger.
It hasn’t reported any official stats on this ever since, which many believe is because that number has been in steady decline, and Meta sees no value in reporting that it’s losing ground, and has been for years now. Meta, instead, is keen to talk about daily and monthly active users, where its figures are solid. But this almost feels like misdirection – Facebook and Instagram, in particular, have traditionally been based on building your social graph, and establishing a digital connection with the people that you know and want to stay connected with, and informed about.
As such, it makes sense that a lot of people log onto these apps each day just to see if their friends and family have shared anything new. That doesn’t, however, mean that they’re spending a lot of time in these apps. Which is another reason why Meta’s trying to push more interesting content into your main feed, and in between updates from your connections – because if it can hook those people that are just checking in, then logging straight back out, that could be a key way to get its engagement stats back on track. But it’s not working.
Again, Facebook and Instagram have spent years pushing you to establish connections with the people that you care about, even introducing an algorithm to ensure that you see the most important updates from these users and Pages every day. At one point, Facebook noted that an average user was eligible to see over 1,500 posts every day, based on the people and Pages they were connected to – which is way more than they could ever view in a single day. So it brought in the algorithm to help maximize engagement – which also had the added benefit of squeezing Page reach, and forcing more brands to pay up. But now, Facebook is actively working to add in even more content, cluttering your feed beyond the posts that you could already be shown, and making it harder than ever to see posts from the people you actually want to stay updated on. Hard to see how that serves users’ interests.
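To illustrate the mechanics being described, here is a deliberately simplified toy ranking in Python; the weights, fields and boosts are invented for illustration and bear no relation to Meta's actual model:

```python
# Toy engagement ranking (invented weights, NOT Meta's algorithm): from the
# ~1,500 eligible posts, keep only the few dozen predicted to hold attention.
from dataclasses import dataclass

@dataclass
class Post:
    from_connection: bool          # posted by a friend or followed Page
    predicted_watch_seconds: float
    predicted_reactions: float
    organic_page_post: bool        # unpaid Page content

def score(post: Post) -> float:
    s = 0.7 * post.predicted_watch_seconds + 0.3 * post.predicted_reactions
    if post.from_connection:
        s *= 1.2                   # connections get a modest boost...
    if post.organic_page_post:
        s *= 0.8                   # ...while unpaid Page reach is squeezed
    return s

def build_feed(eligible: list[Post], slots: int = 50) -> list[Post]:
    return sorted(eligible, key=score, reverse=True)[:slots]
```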
And again, it seems that users are understandably frustrated by this, based on these latest engagement stats, and previously reported info from Facebook which showed that young users are spending less and less time in the app. Because it’s fundamentally going against its own ethos, purely for its own gain. Accept it or not, people go to different apps for different purposes, which is the whole point of differentiation and finding a niche in the industry. People go to TikTok for entertainment, not for connecting with friends (worth noting that TikTok has actually labeled itself an ‘entertainment app’, as opposed to a social network), while users go to Facebook and IG to see the latest updates from people they care about.
The focus is not the same, and in this new, more entertainment-aligned paradigm, Meta’s once all-powerful, unmatched social graph is no longer the market advantage that it once was. But Meta, desperately seeking to counter its engagement declines, keeps trying to get people to stick around, which is seemingly having the opposite effect. Of course, Meta needs to try, it needs to seek ways to negate user losses as best it can – it makes sense that it’s testing out these new approaches. But they’re not the solution. How, then, can Instagram and Facebook actually re-engage users and stem the tide of people drifting across to TikTok? There are no easy answers, but I’m tipping the next phase will involve exclusive contracts with popular creators, as they become the key pawns in the new platform wars. TikTok’s monetization systems are not as evolved, and YouTube and Meta could theoretically blow it out of the water if they could rope in the top stars from across the digital ecosphere. That could keep people coming to their apps instead, which could see TikTok engagement wither, like Vine before it.
But other than forcing people to spend more time on Facebook, by hijacking their favorite stars, there aren’t a lot of compelling reasons for people to spend more time in Meta’s apps. At least, not right now, as they increasingly dilute any form of differentiation.
But essentially, it comes down to a major shift in user behaviors, away from following your friends, and seeing all the random stuff that they post, to following trends, and engaging with the most popular, most engaging content from across the platform, as opposed to walling off your own little space.
At one stage, the allure of social media was that it gave everyone their own soapbox, a means to share their voice, their opinion, to be a celebrity in their own right, at least among their own networks. But over time, we’ve seen the negatives of that too. Over-sharing can lead to problems when it’s saved in the internet’s perfect memory for all time, while increasing division around political movements has also made people less inclined to share their own thoughts, for fear of unwanted criticism or misunderstanding. Which is why entertainment has now become the focus of the next generation – it’s less about personal insights and more about engaging in cultural trends. That’s why TikTok is winning, and why Facebook and Instagram are losing out, despite their frantic efforts.
Signal founder Moxie Marlinspike, whom MobileCoin previously described as a technical adviser, may have been more deeply involved in the cryptocurrency project.
An earlier, nearly identical white paper found online, which MobileCoin CEO Joshua Goldbard called “erroneous,” lists Marlinspike as the project’s original CTO.
Moxie Marlinspike, the founder and CEO of encrypted messaging app Signal, may have served as the original CTO of MobileCoin, a cryptocurrency that Signal recently integrated for in-app payments, early versions of MobileCoin’s technical documents suggest.
MobileCoin CEO Joshua Goldbard told CoinDesk this 2017 white paper is “not something [he] or anyone at MobileCoin wrote,” though it is very nearly a verbatim precursor to MobileCoin’s current white paper. Additionally, snapshots of MobileCoin’s homepage from Dec. 18, 2017, until April 2018, list Marlinspike as one of three members of “The Team,” though his title is not given there. He is not listed as an adviser until May 2018.
The team behind the self-described privacy coin has always acknowledged Marlinspike as an adviser to the project, but neither the team nor Marlinspike has ever disclosed direct involvement through an in-house role, much less one as senior as chief technology officer.
If Marlinspike actually was involved as CTO in MobileCoin’s early days, the recent Signal integration raises questions about MobileCoin’s motivation for associating itself with the renowned cryptographer, along with his own motive for aligning with the project, given that the MOB team has historically downplayed this involvement.
“Signal sold out their user base by creating and marketing a cryptocurrency based solely on their ability to sell the future tokens to a captive audience,” said Bitcoin Core developer Matt Corallo, who also used to contribute to Signal’s open-source software.
A screenshot of MobileCoin’s website frontpage on Dec. 18, 2017. Marlinspike is listed as a team member until May 2018. (Wayback Machine)
Goldbard shared another document, dated Nov. 13, 2017 (the same date as the other white paper), which does not list a team for the project. He claimed that this white paper was the authentic one and the other was not.
“Moxie was never CTO. A white paper we never wrote was erroneously linked to in our new book, ‘The Mechanics of MobileCoin.’ That erroneous white paper listed Moxie as CTO and, again, we never wrote that paper and Moxie was never CTO,” Goldbard told CoinDesk.
This book is actually the most recent “comprehensive, conceptual (and technical) exploration of the cryptocurrency MobileCoin” posted on the MobileCoin Foundation GitHub, which Goldbard describes as the project’s “source of truth” and which serves as the most up-to-date technical documentation for the project.
This “real” version of the paper is nearly identical to the “erroneous” white paper, except there is no mention of team members or MobileCoin’s presale details.
Goldbard said the “erroneous” white paper was accidentally added as a footnote to this latest collection of technical documents compiled by Koe, a pseudonymous cryptographer who recently joined MobileCoin’s team. That footnote also lists Marlinspike as a co-author of the paper along with Goldbard.
“He just googled it, like everyone on the internet seems to be doing today, and put [it in] as a footnote. It was an oversight. I did not notice it in my review of the book prior to publishing,” Goldbard told CoinDesk.
A metadata analysis of the papers run by CoinDesk shows that the “erroneous” paper was generated on Dec. 9, 2017, while the “real” paper was generated two days later.
A metadata analysis of MobileCoin’s disputed white paper. (Colin Harper)
A metadata analysis of MobileCoin’s “real” white paper. (Colin Harper)
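The timestamp check is straightforward to reproduce. A minimal sketch using the open-source pypdf library (the file names are placeholders for the two downloaded documents) reads the creation date embedded in each PDF's metadata:

```python
# Reading the embedded creation dates of the two white papers with pypdf.
# File names are placeholders; pypdf exposes the PDF's info dictionary.
from pypdf import PdfReader

for path in ("mobilecoin_disputed.pdf", "mobilecoin_real.pdf"):
    info = PdfReader(path).metadata
    print(path, info.creation_date)  # per CoinDesk: Dec. 9 vs. Dec. 11, 2017
```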
Marlinspike declined to comment on the record about his professional relationship with MobileCoin.
A tale of two papers
In a December 2017 Wired article titled “The Creator of Signal Has a Plan to Fix Cryptocurrency,” Marlinspike went on the record as a “technical adviser,” a title CoinDesk has also used to describe his relationship with MobileCoin in the past.
“There are lots of potential applications for MobileCoin, but Goldbard and Marlinspike envision it first as an integration in chat apps like Signal or WhatsApp,” the article reads.
It also states that “Marlinspike first experimented with [Software Guard Extensions (SGX)] for Signal.” These special (and expensive) Intel SGX chips create a “secure enclave” within a device to protect software, and MobileCoin validators require them to function (validators, as in other permissioned databases, are chosen by the foundation behind MobileCoin).
In the 2017 white paper that Goldbard disavows, Marlinspike is listed under the “team” section as CTO, with experience including being “the lead developer of Open Whisper Systems, [meaning] Moxie is responsible for the entirety of Signal,” which had just over 10 million users at the time. This same white paper describes MobileCoin’s Goldbard as a “high school dropout who thinks deeply about narratives and information systems.”
Signal’s code has historically been open source, though this changed about a year ago; code for the MobileCoin integration was added in Signal’s latest beta. The nonprofit, which has five full-time employees, subsists largely on donations and has no clear revenue model, though WhatsApp co-founder Brian Acton injected $50 million into the app in 2018. A 2018 tax filing shows revenue of just over $600,000 for the fiscal year, over $100,000,000 in assets and $105,000,000 in liabilities.
MobileCoin supply and other details
The disavowed white paper also shows details of MobileCoin’s proposed distribution, which the paper says included selling 37.5 million MOB tokens (out of a 250 million supply) in a private presale at a price of $0.80 each for a total of $30 million.
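The presale terms are internally consistent, as a quick back-of-envelope check shows:

```python
# Sanity check of the presale terms stated in the disavowed white paper.
tokens_sold = 37_500_000      # MOB offered in the private presale
price_usd = 0.80              # per token
total_supply = 250_000_000    # fixed maximum supply

print(tokens_sold * price_usd)     # 30000000.0 -> the $30 million raise
print(tokens_sold / total_supply)  # 0.15 -> 15% of all MOB was offered
```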
Indeed, in the spring of 2018, MOB raised $30 million from crypto exchange Binance and others in such a private presale, TechCrunch’s Taylor Hatmaker reported. Goldbard referred to the TechCrunch article when discussing MobileCoin’s financing with CoinDesk.
Asked about the coin’s supply, Koe responded: “Supply: 250mill MOB; Circulating supply: impossible to know (‘circulating’ is pretty hard to define anyway).” MobileCoin does not currently have online tools such as a blockchain explorer to search the network for data.
One user chimed in to say that because all 250 million MOB were generated from a “premine,” or creation of maximum supply before launch, there’s no way for users to earn them through staking or mining.
“I suppose you could request donations,” Koe replied.
MobileCoin’s consensus model copies Stellar’s, meaning only MobileCoin Foundation-approved nodes, which must run on a machine that uses the aforementioned Intel SGX chips, can partake in consensus. The white paper makes no references to rewards or payouts to validators from MOB supply.
MobileCoin Token Services, an affiliate of the MobileCoin Foundation, is currently selling MOB (presumably the remaining coins that did not sell in the presale) to non-U.S. investors by taking orders over email.
When the coin began trading in January, it first listed for around $5. Now, it’s worth about $55 (which, assuming a supply of 250 million MOB, gives the coin roughly the same market cap as Chainlink or Litecoin, the 10th and 9th most valuable cryptoassets by market cap). The coin clocked over $15 million in volume over the past 24 hours between FTX and Bitfinex, according to exchange data.
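That valuation follows directly from the fixed supply; it is a rough, fully diluted figure, since the circulating supply is unknown:

```python
# Implied fully diluted valuation at the quoted price. A rough figure only:
# MobileCoin's circulating supply cannot be independently verified.
total_supply = 250_000_000
price_usd = 55
print(f"${total_supply * price_usd / 1e9:.2f}B")  # -> $13.75B fully diluted
```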
Speaking to the coin’s design, Riccardo Spagni, a former maintainer of the privacy coin monero (XMR), claimed that MobileCoin uses the privacy building blocks of his project’s source code for its own design without giving credit.
Who is Moxie Marlinspike?
Something of a legend in cryptography circles, Marlinspike began working on Signal in 2014 after founding Open Whisper Systems in 2013. Before this, he served as Twitter’s head of security after his 2010 startup, Whisper Systems, was acquired by the social network in 2011.
His only on-the-record professional relationship with MobileCoin comes from his technical advisory role, which he took on in late 2017 at the height of bitcoin’s last bull market and its accompanying initial coin offering bubble.
Reporting on the project in 2019, the New York Times’ Nathaniel Popper and Mike Isaac originally wrote that “Signal … has its own coin in the works” before amending the article to clarify that “MobileCoin will work with Signal, but it is being developed independently of Signal.” The correction seems to typify the shifting narrative of Marlinspike’s and MOB’s relationship across various records. (Wired’s 2017 coverage, for example, says that “The Creator of Signal Has a Plan to Fix Cryptocurrency.”)
“I think usability is the biggest challenge with cryptocurrency today,” Marlinspike told Wired in the December 2017 article. “The innovations I want to see are ones that make cryptocurrency deployable in normal environments, without sacrificing the properties that distinguish cryptocurrency from existing payment mechanisms.”
Signal’s own users are less convinced.
The app’s Reddit page is plastered with submissions complaining about the decision to add MOB, with many confused as to why Signal would integrate a coin in the first place, let alone one that isn’t very well known (and which only went live this year).
“Using your messenger service to sit on the blockchain hype for no good reason, bloat a clean messenger app and introduce privacy concerns was more than unnecessary,” one post reads.
Perhaps summing up the sense of betrayal the Signal community feels, one post simply reads, “Et tu Signal?”
Speaking on Moxie’s involvement and the app’s decision to add MOB, Anderson Kill partner Stephen Palley said, “I can’t speak to the discrepancy between investor materials and what you’re being told, but I don’t necessarily judge them for wanting to make a buck after years of providing great open-source software basically for free.”
Signal first out the gate (but tripping)
Other messaging apps like Telegram and Kik have tried and failed to launch in-app cryptocurrency payments by rolling their own coins. Both attempts were promptly quashed by regulators. Encrypted messaging app Keybase was the first messaging app to add cryptocurrency payments when it integrated Stellar’s XLM in 2018.
Given Facebook’s ownership of WhatsApp, its involvement in the Libra coin project (now known as Diem) may be seen as a similar attempt.
Oddly, Signal’s addition of MobileCoin is the first instance of a messaging app actually pulling off an integration with a coin purpose-built for in-app payments.
The question now is how many of Signal’s 50 million users, many of whom aren’t crypto enthusiasts, will use it.
Signal Adds a Payments Feature—With a Privacy-Focused Cryptocurrency
The encrypted messaging app is integrating support for MobileCoin in a bid to keep up with the features offered by its more mainstream rivals.
MobileCoin will bring payments to Signal, but also added complexity and potential regulation. Illustration: Elena Lacey
When the encrypted communications app Signal launched nearly seven years ago, it brought the promise of the strongest available encryption to a dead-simple interface for calling and texting. Now, Signal is incorporating what it describes as a way to bring that same ease of use and security to a third, fundamentally distinct feature: payments.
Signal today plans to announce that it’s rolling out the ability for some of its users to send money to one another within its fast-growing encrypted communications network. To do so, it has integrated support for the cryptocurrency MobileCoin, a form of digital cash designed to work efficiently on mobile devices while protecting users’ privacy and even their anonymity. For now, the payment feature will be available only to users in the UK, and only on iOS and Android, not the desktop. But the new feature nonetheless represents an experiment in bringing privacy-focused cryptocurrency to millions of users, one that Signal hopes to eventually expand around the world.
Moxie Marlinspike, the creator of Signal and CEO of the nonprofit that runs it, describes the new payments feature as an attempt to extend Signal’s privacy protections to payments with the same seamless experience that Signal has offered for encrypted conversations. “There’s a palpable difference in the feeling of what it’s like to communicate over Signal, knowing you’re not being watched or listened to, versus other communication platforms,” Marlinspike told WIRED in an interview. “I would like to get to a world where not only can you feel that when you talk to your therapist over Signal, but also when you pay your therapist for the session over Signal.”
Unlike payment features integrated into other messaging apps like WhatsApp or iMessage, which typically link a user’s bank account, Signal wants to provide a way to send money that no one other than the sender and recipient can observe or track. Financial institutions routinely sell their users’ private transaction data to marketing firms and advertisers or hand it over to law enforcement. Bitcoin wouldn’t do the trick, either. As with many cryptocurrencies, its protections against fraud and counterfeiting are based on a public, distributed accounting ledger—a blockchain—that can in many cases reveal who sent money to whom.
So Signal looked to privacy-preserving cryptocurrencies, or “privacy coins,” that both circumvent banks and are specially designed to protect users’ identities and the details of their payments on a blockchain. While more established privacy-focused cryptocurrencies like Zcash and Monero have been more widely used and arguably better tested, Marlinspike says Signal chose to integrate MobileCoin because it has the most seamless user experience on mobile devices, requiring little storage space on the phone and needing only seconds for transactions to be confirmed. Zcash or Monero payments, by contrast, take minutes to complete. “You’re using a cryptocurrency with state-of-the-art encryption, but from your perspective, it feels like Venmo,” says MobileCoin’s founder Josh Goldbard.
Signal’s choice of MobileCoin is no surprise for anyone watching the cryptocurrency’s development since it launched in late 2017. Marlinspike has served as a paid technical adviser for the project since its inception, and he’s worked with Goldbard to design MobileCoin’s mechanics with a possible future integration into apps like Signal in mind. (Marlinspike notes, however, that neither he nor Signal owns any MobileCoins.)
MobileCoin only began trading as an actual currency with real value in December of last year—until then, it was running as a valueless “testnet”—and its 250 million coins, at around $69 each, are currently worth almost $17 billion in total. For now it’s listed for sale on just one cryptocurrency exchange, FTX, which doesn’t allow trades by US users, though Goldbard says there’s no reason that US exchanges couldn’t also list the coin for trade. Signal chose to roll out its MobileCoin integration in the UK in part because the cryptocurrency can’t yet be bought by users in the US, Marlinspike says, but also because it represents a smaller, English-speaking user base to test out the new payments feature, which he hopes will make diagnosing issues easier.
Payments present a tough dilemma for Signal: To keep pace with the features on other messaging apps, it needs to let users send money. But to do so without compromising its sterling privacy assurances poses a unique challenge.
Despite Marlinspike’s and MobileCoin’s intentions, using any cryptocurrency today remains much more complex than Signal’s other features. Even if users can send MobileCoin back and forth, they’ll still likely need to cash them out into traditional currency to spend them, given that MobileCoin isn’t widely accepted for real-world goods and services. And aside from that need for exchanges and the lack of availability in the US, MobileCoin also remains even more volatile than older cryptocurrencies, with constant price swings that will significantly change the balances in a user’s Signal wallet over the course of days or even hours—hardly the sort of issue that Venmo users have to deal with. (Since March 27, MobileCoin’s value has shot up nearly 600 percent, possibly due to rumors of the impending Signal integration or possibly the result of a “short-squeeze.”)
To try to tame that volatility problem, Marlinspike and Goldbard say they imagine adding a feature in the future that will automatically exchange users’ payments in dollars or another more stable currency for MobileCoin only when they make a payment, and then exchange it back on the recipient’s side—though it’s not yet clear if those trades could be made without leaving a trail that might identify the user. “There’s a world where maybe when you receive money, it can optionally just automatically settle into a pegged thing,” Marlinspike says. “And then when you send money it converts back out.”
The mechanics of how MobileCoin works to ensure its transactions’ privacy and anonymity are—even for the world of cryptocurrency—practically a Rube Goldberg machine in their complexity. Like Monero, MobileCoin uses a protocol called CryptoNote and a technique known as Ring Confidential Transactions to mix up users’ transactions, which makes tracing them vastly more difficult and also hides transaction amounts. But like Zcash, it also uses a technique called zero-knowledge proofs—specifically a form of those mathematical proofs known as Bulletproofs—that can guarantee a transaction has occurred without revealing its value. On top of all those techniques, MobileCoin takes advantage of the SGX feature of Intel processors, which is designed to allow a server to run code that even the server’s operator can’t alter.
MobileCoin uses that feature to ensure that servers in its network delete all lingering information about the transactions they carry out after the fact, leaving only a kind of cryptographic receipt that proves the transaction occurred. Goldbard compares the entire process of a MobileCoin transaction to depositing a check at a bank, but one in which the check’s amount is obscured and it’s mixed up in a bag with nine other checks before it’s handed to a robotic bank teller. After handing back a deposit slip that proves the check was received, the robot shreds all 10 checks. “As long as SGX is working as promised, you can prove every robot cashier is working the same way and shredding every check,” Goldbard says. And even if Intel’s SGX fails—security researchers have found numerous vulnerabilities in the feature over the last several years—Goldbard says that MobileCoin’s other privacy features still reduce any ability to identify users’ transactions to low-probability guesses.
If MobileCoin’s privacy promises hold true, Marlinspike says he hopes the cryptocurrency can help Signal reverse a troubling trend toward financial surveillance. If successful, Signal’s use of MobileCoin will also face the same hurdles and critiques that surround all privacy-preserving cryptocurrencies. Any technology that offers a way to anonymously spend money raises the specter of black-market uses—from drug sales to money laundering to the evasion of international sanctions—along with the accompanying crush of financial regulations. And that means integrating MobileCoin could expose Signal to new regulatory risks that don’t apply to mere encrypted communications.
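To make the hidden-amounts idea above concrete, here is a toy Pedersen commitment, the kind of building block that Ring Confidential Transactions rely on. The sketch works in a multiplicative group modulo a prime purely for readability; real systems, MobileCoin included, use elliptic-curve groups, and every parameter here is illustrative only.

```python
# Toy Pedersen commitment: C = g^v * h^r (mod p) hides value v behind random
# blinding r, yet commitments multiply the way the hidden values add.
# Illustration only; not MobileCoin's actual construction or parameters.
import secrets

p = 2**127 - 1  # a Mersenne prime; large enough for a demo
g = 3
h = pow(g, secrets.randbelow(p - 1), p)  # in practice log_g(h) must be unknown

def commit(value: int, blinding: int) -> int:
    return (pow(g, value, p) * pow(h, blinding, p)) % p

# Two hidden inputs (say 7 and 5 units) and one hidden output (12 units).
r1, r2 = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
c_in1, c_in2 = commit(7, r1), commit(5, r2)
c_out = commit(12, r1 + r2)

# A verifier can check that inputs equal outputs without learning any amount:
assert (c_in1 * c_in2) % p == c_out
```

Bulletproofs then add a compact range proof that each hidden value is non-negative, so no one can sneak a negative amount into a transaction to mint coins out of thin air.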
“I think it’s phenomenal from a civil liberties perspective,” says Marta Belcher, a privacy-focused cryptocurrency lawyer who serves as special counsel at the Electronic Frontier Foundation. But Belcher points to a coming wave of regulation to control exactly the sort of anonymous cryptocurrency transactions Signal hopes to enable, including a new “enforcement framework” the Justice Department published last fall and new regulations from FinCEN that could force more players in the cryptocurrency industry to collect identification details of users. “Anyone who’s dealing with cryptocurrency transactions, especially private cryptocurrency transactions, should be really concerned about all of these proposals and the government pushing financial surveillance to cryptocurrency,” Belcher says.
Matt Green, a cryptographer at Johns Hopkins University, puts it in starker terms.
“I’m terrified for Signal,” says Green, who helped develop an early version of Zcash and now sits on the Zcash Foundation board as an unpaid member. “Signal as an encrypted messaging product is really valuable. Speaking solely as a person who is really into encrypted messaging, it terrifies me that they’re going to take this really clean story of an encrypted messenger and mix it up with the nightmare of laws and regulations and vulnerability that is cryptocurrency.”
But Marlinspike and Goldbard counter that Signal’s new features won’t give it any control of MobileCoin or turn it into a MobileCoin exchange, which might lead to more regulatory scrutiny. Instead, it will merely add support for spending and receiving it. “The regulatory landscape is complicated, but there are ways to do privacy-protecting payments safely,” says Goldbard. “To be frank, there’s a moral imperative to do so, because Signal has to offer payments in order to remain competitive with the world’s top messaging apps.”
As for the possibility of enabling dangerous criminals and money launderers, Marlinspike offers an answer that mirrors one he’s long given for encrypted communications. Just as criminals used encryption for decades before Signal, they’ve used anonymous cryptocurrencies for years before Signal added MobileCoin payments as a feature.
For those criminals, the threat of law enforcement made using even clunky, tough-to-use tools necessary. By making those secure communications and payments easier, Marlinspike argues, Signal didn’t enable those criminals, but instead simply made their tools available to more casual, non-criminal users.
“With Signal, we didn’t invent cryptography. We’re just making it accessible to people who didn’t want to cut and paste a lot of gobbledegook every time they sent a message,” Marlinspike says. “I see a lot of parallels with this. We’re not inventing private payments…Privacy-preserving cryptocurrencies have existed for years and will continue to exist. What we’re doing is just, again, a part of trying to make that accessible to ordinary people.”
Community groups, charities, sport clubs, arts centres, unions and emergency services all rely on the social media giant.
Its platform plays the role of an important public messaging board.
But in a country with so little civil society infrastructure, our heavy reliance on a corporation to provide such a fundamental public service is deeply problematic.
Facebook, Inc. doesn’t care about your fundraiser or political protest.
It couldn’t care less about your art exhibition.
What it cares about is your personal data, which it harvests in unimaginable quantities.
And the methods it uses to keep its 2.7 billion monthly active users “engaged” on its website (so it can keep learning more about them) are also deeply problematic.
Jaron Lanier, one of the founders of the field of virtual reality, has been warning about social media and tech giants for years.
“Everyone has been placed under a level of surveillance straight out of a dystopian science fiction novel,” he wrote in 2018 about the technological architecture created by these companies.
“Spying is accomplished mostly through connected personal devices — especially, for now, smartphones — that people keep practically glued to their bodies.
“Data is gathered about each person’s communications, interests, movements, contact with others, emotional reactions to circumstances, facial expressions, purchases, vital signs: an ever-growing, boundless variety of data.”
Mr Lanier says the ocean of personal data these companies extract from the internet is turned into behavioural data that allows them to predict and manipulate our behaviour.
“[These] platforms have proudly reported on experimenting with making people sad, changing voter turnout, and reinforcing brand loyalty,” he said.
Facebook is a member of a group of companies that are engaged in something called “surveillance capitalism”.
According to Professor Shoshana Zuboff, the author who coined the term, surveillance capitalism refers to the “new economic order” that has emerged in the age of the internet and smartphone.
She says the companies that practice it lay claim to our personal information, our “data”, as “free raw material” to be aggressively harvested.
Some of the data they collect are used for product or service improvement, but the rest is treated as a proprietary “behavioural surplus”.
That surplus data is then fed into machine intelligence which turns the data into “prediction products” that “anticipate what you will do now, soon and later”.
According to Professor Zuboff, social media companies trade those “prediction products” in a new kind of marketplace for behavioural predictions which she calls “behavioural futures markets”.
“Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behaviour,” she wrote in her 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.
“The competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioural surplus: our voices, personalities, and emotions.
“Surveillance capitalists discovered that the most-predictive behavioural data come from intervening in the state of play in order to nudge, coax, tune, and herd behaviour towards profitable outcomes.
“It has become difficult to escape this bold market project, whose tentacles reach from the gentle herding of innocent Pokémon Go players to eat, drink, and purchase in the restaurants, bars, fast-food joints, and shops that pay to play in its behavioural futures markets to the ruthless expropriation of surplus from Facebook profiles for the purposes of shaping individual behaviour, whether it’s buying pimple cream at 5:45pm on a Friday, clicking ‘yes’ on an offer of new running shoes as the endorphins race through your brain after your long Sunday morning run, or voting next week.
“Just as industrial capitalism was driven to the continuous intensification of the means of production, so surveillance capitalists and their market players are now locked into the continuous intensification of the means of behavioural modification and the gathering might of instrumentarian power.”
Mark Zuckerberg’s Facebook is a member of a group of companies engaged in “surveillance capitalism”. (AP: Trent Nelson via The Salt Lake Tribune)
Google invented surveillance capitalism
Professor Zuboff says Google invented and perfected surveillance capitalism in the early 2000s “in much the same way that a century ago General Motors invented and perfected managerial capitalism”.
“Google was the pioneer of surveillance capitalism in thought and practice, the deep pocket for research and development, and the trailblazer in experimentation and implementation, but it is no longer the only actor on this path,” she wrote.
“Surveillance capitalism quickly spread to Facebook and later to Microsoft. Evidence suggests that Amazon has veered in this direction, and it is a constant challenge to Apple, both as an external threat and as a source of internal debate and conflict.”
She published those words in 2019.
A little later that year, the Guardian described the book as an “epoch-defining international bestseller, drawing comparisons to Rachel Carson’s Silent Spring”.
The mass surveillance of society has made companies extremely wealthy
One of the points Professor Zuboff has repeatedly made about surveillance capitalism is how profitable it is for the companies that practice it.
The ocean of personal data they hoover up is turned into unimaginable wealth and power, making the companies more powerful than nation-states.
It helps to explain why those tech companies have come to dominate stock markets.
News organisations including the ABC have been impacted, along with community groups, charities, sport clubs, arts centres, unions, emergency services and more.(Supplied)
Last year, when researchers at the International Monetary Fund tried to figure out why there seemed to be a large disconnect between stock markets and the real world during one of the worst global recessions in memory, one thesis they considered was that the outsize influence of the big five tech companies — Google, Facebook, Microsoft, Amazon and Apple, which accounted for 22 per cent of the market capitalisation on US stock markets — was making US financial markets appear healthier than they were.
At any rate, it comes back to the question of what type of organisation should be running a country’s quasi-public messaging board.
Are we happy to leave it to surveillance capitalists to run a “public good” of that kind?
Facebook’s decision to ban legitimate news from being shared in the middle of a global pandemic is a breathtaking display of defiance. It is also entirely consistent with the social media behemoth’s belligerent corporate character.
The move – which inadvertently resulted in Facebook pages of health departments in Queensland, WA and ACT being wiped just before a critical vaccine rollout begins – shocked the Australian media and political establishment. But, in hindsight, nobody should have been surprised. This was vintage Zuckerberg. You don’t blitzscale your way from Harvard dorm room to trillion-dollar titan in the space of a few years without putting lots of noses out of joint.
Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of Congress. Credit: AP
The Australian government’s media bargaining code, which is at the centre of the dispute, has been endlessly debated over the past year. Media companies say they should be paid for producing journalism that benefits the platforms, but they lack the bargaining power to extract any value for it. Tech giants claim they do not really benefit from the existence of news, that news represents a small part of the overall activity on their platforms, and since they actually send these news organisations free traffic they shouldn’t be paying them anything.
There are merits to both sides of the argument.
Yet there is little doubt stronger regulation of Google and Facebook is urgently needed. The two companies have scarily dominant positions in their respective markets of search and social media, and also an entrenched duopoly in digital advertising. Meanwhile, their ascent has coincided with a host of societal problems ranging from rising misinformation and fake news, to a troubling surge in online conspiracy theories and growing internet addiction.
The media bargaining code attempts to address the digital duopoly’s market dominance by using the threat of arbitration to force Google and Facebook to strike commercial deals with media companies. Could there have been a more straightforward solution? A digital platform tax or levy may have been cleaner and simpler, and it has existing parallels elsewhere in the economy.
There are already taxes on addictive and harmful products (think cigarette excise), and levies on disruptive new market entrants that are used to compensate legacy incumbents also exist (for example, the levies on Uber rides that are distributed to taxi licence holders).
Regardless, the debate about the merits of the media bargaining code in Australia has now become moot. The bill to bring the code into law has sailed through the lower house of Parliament and is all but certain to be passed by the Senate. Facebook is effectively saying that the overwhelming majority of elected officials in a sovereign parliament are wrong.
It is possible that a news-free Facebook could be positive for society and the media industry in the medium term. But at this fragile moment in history – a once in a century health crisis coupled with a fake news epidemic – for the primary gateway to information for millions of people to block critical information from being shared was chillingly irresponsible.
Throughout its relatively short history, Facebook has pursued a win at all costs, take no prisoners approach to business. It has also shown little regard for the wreckage it has left behind. For many years its official corporate mantra was “move fast and break things”.
When a potential competitor emerges, Facebook either buys it (as it did with WhatsApp and Instagram) or copies its key features (as it has done with Snapchat and TikTok).
Facebook has pursued a win at all costs, take no prisoners approach to business. Credit: Bloomberg
It has repeatedly abused the privacy of its users and demonstrated a shocking ineptitude at thwarting the misinformation and conspiracy theories that have flourished on its platform, which are now demonstrably weakening democracies.
The spat over the media bargaining code highlights the fiendishly complex task governments face in regulating digital giants with operations that span the globe, billions of users and perhaps unrivalled power.
Tech proponents argue Australia’s regulation is deeply flawed – and to an extent they may have a point. But there is flawed regulation all across the economy. Most wildly profitable and dominant companies (even Google) begrudgingly accept these kinds of impositions as part of their social licence to operate, a cost of doing business. Not Facebook.
Mark Zuckerberg’s middle finger to the Australian government has been noticed all around the world. Already Canada is signalling it will copy the media code, while Europe (which has tried repeatedly to force the digital giants to pay news organisations, with much less success than Australia) is likely to follow.
Facebook has repeatedly shown it does not mind a scrap. But this may be its biggest fight yet, and it is only just beginning.