
How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users

Source: https://www.propublica.org/article/how-facebook-undermines-privacy-protections-for-its-2-billion-whatsapp-users

When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model. Acknowledging that “we don’t currently have a strong reputation for building privacy protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”

Zuckerberg’s vision centered on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations. WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the U.S. Senate in 2018, “We don’t see any of the content in WhatsApp.”

WhatsApp emphasizes this point so consistently that a notice with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”

Given those sweeping assurances, you might be surprised to learn that WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through millions of private messages, images and videos. They pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

The workers have access to only a subset of WhatsApp messages — those flagged by users and automatically forwarded to the company as possibly abusive. The review is one element in a broader monitoring operation in which the company also reviews material that is not encrypted, including data about the sender and their account.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.

WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments. “WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways. (Two articles this summer noted the existence of WhatsApp’s moderators but focused on their working conditions and pay rather than their effect on users’ privacy. This article is the first to reveal the details and extent of the company’s ability to scrutinize messages and user data — and to examine what the company does with that information.)

Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.

Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice. Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on their users’ privacy, and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)

WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.

Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse. WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.

The tension between privacy and disclosing information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny.

That conundrum has periodically led to moves that anger users, regulators or both. The goal of monetizing the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told European Union regulators was technologically impossible. The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.

WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app: the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.

The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”


I. “Content Moderation Associates”

In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.

The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements. Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined comment, referring all questions about content moderation to WhatsApp.)

When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team. Most of the WhatsApp team scattered to work from home during the pandemic. Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets,” organized by subject into “reactive” and “proactive” queues.

Collectively, the workers scrutinize millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
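A heuristic of the kind the article describes — flagging a brand-new account that is blasting out chats — can be sketched as follows. The thresholds and field names here are illustrative assumptions, not WhatsApp’s actual rules.

```python
# Minimal sketch of a "proactive" spam heuristic of the sort described
# above. Thresholds and signal names are illustrative assumptions, not
# WhatsApp's actual detection logic.

def looks_like_spam(account_age_days, messages_last_hour, prior_violations):
    """Return True if unencrypted account signals match a spam pattern."""
    # Pattern cited in the article: a new account rapidly sending out a
    # high volume of chats is treated as evidence of spam.
    if account_age_days < 1 and messages_last_hour > 100:
        return True
    # A history of violations lowers the bar for flagging.
    if prior_violations > 0 and messages_last_hour > 50:
        return True
    return False


looks_like_spam(0, 500, 0)   # new account sending a burst: flagged
looks_like_spam(30, 10, 0)   # established, low-volume account: not flagged
```

Note that nothing in such a check touches message content; it runs entirely on the unencrypted account data listed above, which is what allows it to operate despite end-to-end encryption.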

The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: Do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators.”)

WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.”

Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on comparison of hip bones and pubic hair to a medical index chart. One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”

In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”

WhatsApp’s review system is hampered by impediments, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the U.S. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on site to assess abuse complaints. That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they’re selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult,” the seller can be labeled a forbidden “sexually oriented business.” And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.

The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”

Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.” Still, each case can reveal up to five unencrypted messages, which are then examined by moderators.

The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”

Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponized by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.” If a user appeals a ban that was prompted by a user report, according to one moderator, a second moderator examines the user’s content.


II. “Industry Leaders” in Detecting Bad Behavior

In public statements and on the company’s websites, Facebook Inc. is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged. They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)

By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria their moderators use to police content, along with articles and videos about “the unrecognized heroes who keep Facebook safe” and announcements on new content-review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in this report.

When dealing with legislators, Facebook Inc. officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation. For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written follow-up questions in Jan. 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement. “Even within an encrypted system,” one response noted, “we will still be able to respond to lawful requests for metadata, including potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”

Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to its head, Cathcart. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behavior in an end-to-end encrypted service,” he said.

During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption. Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content. “There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”

In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.” The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”


III. “Deceiving Users” About Personal Privacy

Since the moment Facebook announced plans to buy WhatsApp in 2014, observers wondered how the service, known for its fervent commitment to privacy, would fare inside a corporation known for the opposite. Zuckerberg had become one of the wealthiest people on the planet by using a “surveillance capitalism” approach: collecting and exploiting reams of user data to sell targeted digital ads. Facebook’s relentless pursuit of growth and profits has generated a series of privacy scandals in which it was accused of deceiving customers and regulators.

By contrast, WhatsApp knew little about its users apart from their phone numbers and shared none of that information with third parties. WhatsApp ran no ads, and its co-founders, Jan Koum and Brian Acton, both former Yahoo engineers, were hostile to them. “At every company that sells ads,” they wrote in 2012, “a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packed and shipped out,” adding: “Remember, when advertising is involved you the user are the product.” At WhatsApp, they noted, “your data isn’t even in the picture. We are simply not interested in any of it.”

Zuckerberg publicly vowed in a 2014 keynote speech that he would keep WhatsApp “exactly the same.” He declared, “We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously.”

In April 2016, WhatsApp completed its long-planned adoption of end-to-end encryption, which helped establish the app as a prized communications platform in 180 countries, including many where text messages and phone calls are cost-prohibitive. International dissidents, whistleblowers and journalists also turned to WhatsApp to escape government eavesdropping.

Four months later, however, WhatsApp disclosed it would begin sharing user data with Facebook — precisely what Zuckerberg had said would not happen — a move that cleared the way for an array of future revenue-generating plans. The new WhatsApp terms of service said the app would share information such as users’ phone numbers, profile photos, status messages and IP addresses for the purposes of ad targeting, fighting spam and abuse and gathering metrics. “By connecting your phone number with Facebook’s systems,” WhatsApp explained, “Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”

Such actions were increasingly bringing Facebook into the crosshairs of regulators. In May 2017, European Union antitrust regulators fined the company 110 million euros (about $122 million) for falsely claiming three years earlier that it would be impossible to link the user information between WhatsApp and the Facebook family of apps. The EU concluded that Facebook had “intentionally or negligently” deceived regulators. Facebook insisted its false statements in 2014 were not intentional, but didn’t contest the fine.

By the spring of 2018, the WhatsApp co-founders, now both billionaires, were gone. Acton, in what he later described as an act of “penance” for the “crime” of selling WhatsApp to Facebook, gave $50 million to a foundation backing Signal, a free encrypted messaging app that would emerge as a WhatsApp rival. (Acton’s donor-advised fund has also given money to ProPublica.)

Meanwhile, Facebook was under fire for its security and privacy failures as never before. The pressure culminated in a landmark $5 billion fine by the Federal Trade Commission in July 2019 for violating a previous agreement to protect user privacy. The fine was almost 20 times greater than any previous privacy-related penalty, according to the FTC, and Facebook’s transgressions included “deceiving users about their ability to control the privacy of their personal information.”

The FTC announced that it was ordering Facebook to take steps to protect privacy going forward, including for WhatsApp users: “As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.” Compliance officers would be required to generate a “quarterly privacy review report” and share it with the company and, upon request, the FTC.

Facebook agreed to the FTC’s fine and order. Indeed, the negotiations over that agreement were the backdrop, just four months earlier, for Zuckerberg’s announcement of his new commitment to privacy.

By that point, WhatsApp had begun using Accenture and other outside contractors to hire hundreds of content reviewers. But the company was eager not to step on its larger privacy message — or spook its global user base. It said nothing publicly about its hiring of contractors to review content.


IV. “We Kill People Based On Metadata”

Even as Zuckerberg was touting Facebook Inc.’s new commitment to privacy in 2019, he didn’t mention that his company was apparently sharing more of its WhatsApp users’ metadata than ever with the parent company — and with law enforcement.

To the lay ear, the term “metadata” can sound abstract, a word that evokes the intersection of literary criticism and statistics. To use an old, pre-digital analogy, metadata is the equivalent of what’s written on the outside of an envelope — the names and addresses of the sender and recipient and the postmark reflecting where and when it was mailed — while the “content” is what’s written on the letter sealed inside the envelope. So it is with WhatsApp messages: The content is protected, but the envelope reveals a multitude of telling details (as noted: time stamps, phone numbers and much more).
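The envelope analogy can be made concrete with a toy data model: the message body is opaque ciphertext, while the “envelope” fields travel in the clear. The field names below are illustrative, not WhatsApp’s actual schema.

```python
# Toy model of the envelope analogy above: the content is opaque
# ciphertext, while the metadata "envelope" is readable by the service
# operator. Field names are illustrative, not WhatsApp's real schema.

from dataclasses import dataclass


@dataclass
class Envelope:
    sender: str      # phone number of the sender
    recipient: str   # phone number of the recipient
    timestamp: str   # when the message was sent
    ip_address: str  # network origin of the sender


@dataclass
class Message:
    metadata: Envelope  # travels unencrypted; visible to the operator
    content: bytes      # end-to-end encrypted; opaque to the operator


msg = Message(
    metadata=Envelope(
        sender="+15550100",          # hypothetical numbers
        recipient="+15550199",
        timestamp="2018-08-01T00:33:00Z",
        ip_address="203.0.113.7",
    ),
    content=b"\x8f\x02\xd1",  # ciphertext only the recipient can decrypt
)
# The operator can log who messaged whom, when, and from where --
# without ever reading msg.content.
```

This is exactly why metadata alone can be so revealing: the pattern of envelopes (who, when, how often) is available even when every letter inside stays sealed.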

Those in the information and intelligence fields understand how crucial this information can be. It was metadata, after all, that the National Security Agency was gathering about millions of Americans not suspected of a crime, prompting a global outcry when it was exposed in 2013 by former NSA contractor Edward Snowden. “Metadata absolutely tells you everything about somebody’s life,” former NSA general counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.” In a symposium at Johns Hopkins University in 2014, Gen. Michael Hayden, former director of both the CIA and NSA, went even further: “We kill people based on metadata.”

U.S. law enforcement has used WhatsApp metadata to help put people in jail. ProPublica found more than a dozen instances in which the Justice Department sought court orders for the platform’s metadata since 2017. These represent a fraction of overall requests, known as pen register orders (a phrase borrowed from the technology used to track numbers dialed by landline telephones), as many more are kept from public view by court order. U.S. government requests for data on outgoing and incoming messages from all Facebook platforms increased by 276% from the first half of 2017 to the second half of 2020, according to Facebook Inc. statistics (which don’t break out the numbers by platform). The company’s rate of handing over at least some data in response to such requests has risen from 84% to 95% during that period.

It’s not clear exactly what government investigators have been able to gather from WhatsApp, as the results of those orders, too, are often kept from public view. Internally, WhatsApp calls such requests for information about users “prospective message pairs,” or PMPs. These provide data on a user’s messaging patterns in response to requests from U.S. law enforcement agencies, as well as those in at least three other countries — the United Kingdom, Brazil and India — according to a person familiar with the matter who shared this information on condition of anonymity. Law enforcement requests from other countries might only receive basic subscriber profile information.

WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp. “On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.

Edwards was sentenced on June 3 to six months in prison after pleading guilty to a conspiracy charge and reported to prison last week. Edwards’ attorney declined to comment, as did representatives from the FBI and the Justice Department.

WhatsApp has for years downplayed how much unencrypted information it shares with law enforcement, largely limiting mentions of the practice to boilerplate language buried deep in its terms of service. It does not routinely keep permanent logs of who users are communicating with and how often, but company officials confirmed they do turn on such tracking at their own discretion — even for internal Facebook leak investigations — or in response to law enforcement requests. The company declined to tell ProPublica how frequently it does so.

The privacy page for WhatsApp assures users that they have total control over their own metadata. It says users can “decide if only contacts, everyone, or nobody can see your profile photo” or when they last opened their status updates or when they last opened the app. Regardless of the settings a user chooses, WhatsApp collects and analyzes all of that data — a fact not mentioned anywhere on the page.


V. “Opening the Aperture to Encompass Business Objectives”

The conflict between privacy and security on encrypted platforms seems to be only intensifying. Law enforcement and child safety advocates have urged Zuckerberg to abandon his plan to encrypt all of Facebook’s messaging platforms. In June 2020, three Republican senators introduced the “Lawful Access to Encrypted Data Act,” which would require tech companies to assist in providing access to even encrypted content in response to law enforcement warrants. For its part, WhatsApp recently sued the Indian government to block its requirement that encrypted apps provide “traceability” — a method to identify the sender of any message deemed relevant to law enforcement. WhatsApp has fought similar demands in other countries.

Other encrypted platforms take a vastly different approach to monitoring their users than WhatsApp. Signal employs no content moderators, collects far less user and group data, allows no cloud backups and generally rejects the notion that it should be policing user activities. It submits no child exploitation reports to NCMEC.

Apple has touted its commitment to privacy as a selling point. Its iMessage system displays a “report” button only to alert the company to suspected spam, and the company has made just a few hundred annual reports to NCMEC, all of them originating from scanning outgoing email, which is unencrypted.

But Apple recently took a new tack, and appeared to stumble along the way. Amid intensifying pressure from Congress, in August the company announced a complex new system for identifying child-exploitative imagery on users’ iCloud backups. Apple insisted the new system poses no threat to private content, but privacy advocates accused the company of creating a backdoor that potentially allows authoritarian governments to demand broader content searches, which could result in the targeting of dissidents, journalists or other critics of the state. On Sept. 3, Apple announced it would delay implementation of the new system.

Still, it’s Facebook that seems to face the most constant skepticism among major tech platforms. It is using encryption to market itself as privacy-friendly, while saying little about the other ways it collects data, according to Lloyd Richardson, the director of IT at the Canadian Centre for Child Protection. “This whole idea that they’re doing it for personal protection of people is completely ludicrous,” Richardson said. “You’re trusting an app owned and written by Facebook to do exactly what they’re saying. Do you trust that entity to do that?” (On Sept. 2, Irish authorities announced that they are fining WhatsApp 225 million euros, about $267 million, for failing to properly disclose how the company shares user information with other Facebook platforms. WhatsApp is contesting the finding.)

Facebook’s emphasis on promoting WhatsApp as a paragon of privacy is evident in the December marketing document obtained by ProPublica. The “Brand Foundations” presentation says it was the product of a 21-member global team across all of Facebook, involving a half-dozen workshops, quantitative research, “stakeholder interviews” and “endless brainstorms.” Its aim: to offer “an emotional articulation” of WhatsApp’s benefits, “an inspirational toolkit that helps us tell our story,” and a “brand purpose to champion the deep human connection that leads to progress.” The marketing deck identifies a feeling of “closeness” as WhatsApp’s “ownable emotional territory,” saying the app delivers “the closest thing to an in-person conversation.”

WhatsApp should portray itself as “courageous,” according to another slide, because it’s “taking a strong, public stance that is not financially motivated on things we care about,” such as defending encryption and fighting misinformation. But the presentation also speaks of the need to “open the aperture of the brand to encompass our future business objectives. While privacy will remain important, we must accommodate for future innovations.”

WhatsApp is now in the midst of a major drive to make money. It has experienced a rocky start, in part because of broad suspicions of how WhatsApp will balance privacy and profits. An announced plan to begin running ads inside the app didn’t help; it was abandoned in late 2019, just days before it was set to launch. Early this January, WhatsApp unveiled a change in its privacy policy — accompanied by a one-month deadline to accept the policy or get cut off from the app. The move sparked a revolt, impelling tens of millions of users to flee to rivals such as Signal and Telegram.

The policy change focused on how messages and data would be handled when users communicate with a business in the ever-expanding array of WhatsApp Business offerings. Companies now could store their chats with users and use information about users for marketing purposes, including targeting them with ads on Facebook or Instagram.

Elon Musk tweeted “Use Signal,” and WhatsApp users rebelled. Facebook delayed for three months the requirement for users to approve the policy update. In the meantime, it struggled to convince users that the change would have no effect on the privacy protections for their personal communications, with a slightly modified version of its usual assurance: “WhatsApp cannot see your personal messages or hear your calls and neither can Facebook.” Just as when the company first bought WhatsApp years before, the message was the same: Trust us.


Metadata: Where WhatsApp's Real Privacy Problem Lies


A story from the US is currently making the rounds: the investigative outlet ProPublica has devoted a lengthy article to privacy at WhatsApp and concludes that parent company Facebook undermines the privacy of its two billion users. As accurate as that conclusion is, the framing chosen by the authors, and by many German outlets that picked up the story superficially, is equally problematic.

The main part of the article describes how Facebook employs an army of content moderators to review reported content from WhatsApp chats. That is not news, but ProPublica is able, for the first time, to report in greater detail on how this work is carried out. The authors contrast the fact that potentially any WhatsApp message can be read by the company's moderators with the messenger's privacy promise: "No one outside of this chat, not even WhatsApp, can read or listen to them."

Here, however, is where it becomes problematic: the authors then adopt a framing that presents this content moderation (which WhatsApp prefers not to call by that name) as a weakening of end-to-end encryption. One ProPublica author even described the moderation as a "backdoor," a term that generally refers to a deliberately built-in way of circumventing encryption. Various security experts, such as Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, have criticized the reporting for this reason.

The encryption does what it is supposed to do

So where does the problem lie? One thing is clear: Mark Zuckerberg's 2018 promise that his company cannot read any communication content from WhatsApp chats is misleading. Every message, image and video that chat participants report ends up with WhatsApp and its contractors for review. According to ProPublica, about 1,000 people in Austin, Dublin and Singapore work around the clock to screen the reported content. Because the company needs its privacy promise for marketing purposes, WhatsApp hides this information from its users.

It is also clear that, like any form of content moderation, this brings considerable problems with it. Drawing on conversations with various sources, the authors show, for example, that the moderators have little time for their weighty decisions and must work with guidelines that are sometimes ambiguous. As with moderation for Facebook and Instagram, they are also assisted by an automated system that occasionally makes faulty suggestions. As a result, content that should not be blocked, such as harmless photos or satire, is blocked again and again. WhatsApp offers no proper appeals mechanism, and it is to the article's credit that it brings these difficulties to light.

These problems, however, are not the result of deficient end-to-end encryption of WhatsApp messages. Technically, the encryption continues to work well. Messages are initially readable only on the devices of the people in the conversation (provided those devices have not been compromised by criminal or state hackers). Users who report content from chats forward it to WhatsApp themselves. Anyone can do that, and it is not an encryption problem.

The real danger lies elsewhere

The ability to report abusive content has existed in WhatsApp for quite some time. The reporting system is meant to help when, for example, hate speech is shared, ex-partners are threatened, or groups call for violence against minorities. It is an intrusion into private communication, but one can argue that it is justified when weighed against those dangers. Of course, WhatsApp would be obliged to inform its users far better about how the reporting system works and about the fact that their messages can be forwarded to moderators with a few clicks.

The greater danger to privacy on WhatsApp, however, comes from somewhere else: the metadata, which reveals nearly as much about people as the content of their conversations. This includes the identities of sender and recipient, their phone numbers and associated Facebook accounts, profile photos, status messages, and even the phone's battery level. It also includes information about communication behavior: Who communicates with whom? Who uses the app how often, and for how long?

Studies show that far-reaching psychological profiles can be built from such data. It has even happened that Facebook managers promised their advertising clients that they could find "emotionally vulnerable teenagers" on the platform. "We kill people based on metadata," former NSA director Michael Hayden once revealed, referring to US missile strikes guided by metadata.

How WhatsApp threw a whistleblower to the wolves

WhatsApp collects this data on a large scale because it can be monetized. The original ProPublica report does cover this aspect, but it is unfortunately lost in many German write-ups. In fact, the US outlet even reports the case of a whistleblower who went to prison because WhatsApp passed her metadata to the FBI. Natalie Edwards worked at the US Treasury Department and passed information about suspicious transactions to BuzzFeed News. She was caught and convicted in part because investigators could prove that she was in frequent WhatsApp contact with the BuzzFeed reporter.

According to the report, WhatsApp regularly hands over such metadata to law enforcement agencies in the US. The same is likely true in Germany and Europe. What's more, it is not only state agencies that receive this revealing information, but also Facebook itself. There it is used to refine users' data profiles and, in much of the world, to better target advertising. When the data giant bought the messenger in 2014, it promised the European competition authority that this was technically impossible at all. A brazen lie, for which the company had to pay a fine of more than 100 million euros.

That is why it cannot be said often enough: even though the messenger's end-to-end encryption works, WhatsApp is not a good place for private communication. Journalists who conduct confidential conversations with their sources on this messenger are acting irresponsibly. Anyone who truly wants to communicate securely and with minimal data traces should use alternatives such as Threema or Signal, which store hardly any metadata.

 

Two Weeks of Chaos: Inside Elon Musk’s Takeover of Twitter

Source: https://www.nytimes.com/2022/11/11/technology/elon-musk-twitter-takeover.html

Mr. Musk ordered immediate layoffs, fired executives by email and laid down product deadlines, transforming the company.

SAN FRANCISCO — Elon Musk had a demand.

On Oct. 28, hours after completing his $44 billion buyout of Twitter the night before, Mr. Musk gathered several human-resource executives in a “war room” in the company’s offices in San Francisco. Prepare for widespread layoffs, he told them, six people with knowledge of the discussion said. Twitter’s work force needed to be slashed immediately, he said, and those who were cut would not receive bonuses that were set to be paid on Nov. 1.

The executives warned their new boss that his plan could violate employment laws and breach contracts with workers, leading to employee lawsuits, the people said. But Mr. Musk’s team said he was used to going to court and paying penalties, and was not worried about the risks. So Twitter’s human-resource, accounting and legal departments scrambled to figure out how to comply with his command.

Two days later, Mr. Musk learned exactly how costly those potential fines and lawsuits could be, three people said. Delays were also piling up as managers haggled over which employees to let go. He decided to wait on cutting jobs until after Nov. 1.

The order for immediate layoffs, the ensuing panic and the about-face reflect the chaos that has engulfed Twitter since Mr. Musk took over the company two weeks ago. The 51-year-old barreled in with ideas about how the social media service should operate, but with no comprehensive plan to execute them. Then he quickly ran into the business, legal and financial complexities of running a platform that has been called a global town square.

 

The fallout has often been excruciating, according to 36 current and former Twitter employees and people close to the company, as well as internal documents and workplace chat logs. Some top executives were summarily fired by email. One engineering manager, upon being told to cut hundreds of workers, vomited into a trash can. Others slept in the office as they worked grueling schedules to meet Mr. Musk’s orders.

Twitter, which is under financial pressure from debt and a slumping economy, is now unrecognizable compared with what it was a month ago. Last week, Mr. Musk slashed 50 percent of the company’s 7,500 employees. Executive resignations have continued. Misinformation proliferated on the platform during Tuesday’s midterm elections. A key project to expand revenue from subscriptions hit snags. Some advertisers have been aghast.

Mr. Musk, who did not respond to a request for comment, told employees in a meeting on Thursday that Twitter’s situation was grim.

“There’s a massive negative cash flow, and bankruptcy is not out of the question,” he said, according to a recording heard by The New York Times.

 

Mr. Musk added that they would need to work strenuously to keep the company afloat. “Those who are able to go hard core and play to win, Twitter is a good place,” he said. “And those who are not, totally understand, but then Twitter is not for you.”

 
Image: Elon Musk posted a video of his entrance to Twitter headquarters on Oct. 26. Credit: Twitter, via Associated Press

Mr. Musk arrived at Twitter’s San Francisco offices on Oct. 26, toting a white porcelain sink through the glass doors of the building. “Let that sink in!” he tweeted at the time, along with a video of his grand entrance.

Leslie Berland, Twitter’s chief marketing officer, encouraged employees to say hi to Mr. Musk and escorted him through the office. He was seen chatting with employees at the company coffee bar.

But the vibe quickly changed. The next day, Parag Agrawal, Twitter’s chief executive, and Ned Segal, the chief financial officer, were in the office, two people familiar with the situation said. Once they knew Mr. Musk’s acquisition of Twitter was closing that afternoon, they left the building, uncertain what the new owner would do.

Mr. Agrawal and Mr. Segal soon received emails saying they had been fired, two people familiar with the situation said. Vijaya Gadde, Twitter’s top legal and policy executive, and Sean Edgett, the general counsel, were also fired. Mr. Edgett, who was in Twitter’s offices at the time, was escorted out.

 
Image: Ned Segal. Credit: Drew Angerer/Getty Images

Image: Parag Agrawal. Credit: Kevin Dietsch/Getty Images

That evening, Twitter hosted a Halloween party called “Trick or Tweet” for employees and their families. Some workers dressed in costume and tried to keep the mood festive. Others cried and hugged one another.

 

Mr. Musk had brought his own advisers, many of whom had worked at his other businesses, such as the digital payments company PayPal and the electric carmaker Tesla. They parked themselves in the “war room,” on the second floor of a building attached to Twitter’s headquarters. The area, which Twitter used to fete big-spending advertisers and dignitaries, was stocked with company memorabilia.

 

The advisers included the venture capitalists David Sacks, Jason Calacanis and Sriram Krishnan; Mr. Musk’s personal lawyer Alex Spiro; his financial manager Jared Birchall; and Antonio Gracias, a former Tesla director. Joining in were engineers and others from Tesla; from Mr. Musk’s brain interface start-up, Neuralink; and from his tunneling company, the Boring Company.

At times, Mr. Musk was spotted with his 2-year-old son, X Æ A-12, at Twitter’s office as he greeted employees.

In meetings with Twitter executives, Mr. Musk was direct. At the Oct. 28 meeting with human-resource executives, he said he wanted to reduce the work force immediately, before a Nov. 1 date when employees would receive regularly scheduled retention bonuses in the form of vested stock. Tech companies often compensate employees with regular share grants, earned over time the longer they stay at the firm.

One Twitter team began creating a financial model to show the cost of the layoffs. Another built a model to demonstrate how much more Mr. Musk might pay in legal fees and fines if he proceeded with the rapid cuts, three people said.

On Oct. 30, Mr. Musk received word that the rapid approach could cost millions of dollars more than laying people off with their scheduled bonuses. He agreed to delay, four people said.

But he had a condition. Before paying the bonuses, Mr. Musk insisted on a payroll audit to confirm that Twitter’s employees were “real humans.” He voiced concerns that “ghost employees” who should not receive the money lingered in Twitter’s systems.

 

Mr. Musk tapped Robert Kaiden, Twitter’s chief accounting officer, to conduct the audit. Mr. Kaiden asked managers to verify that they knew certain employees and could confirm that they were human, according to three people and an internal document seen by The Times.

The Nov. 1 bonus date came and went with no mass layoffs. Mr. Kaiden was fired the next day and marched out of the building, five people with knowledge of the situation said.

As Twitter managers compiled lists for layoffs, Mr. Musk flew to New York to meet with advertisers, who provide the bulk of Twitter’s revenue.

In some advertiser meetings, Mr. Musk proposed a system for Twitter users to choose the kind of content that the service exposed them to — akin to G to NC-17 movie ratings — implying that brands could then target their advertising on the platform better. He also committed to product improvements and more personalization for users and ads, two people with knowledge of the discussions said.

But his outreach was undercut by the departures of two New York-based Twitter executives — Ms. Berland and JP Maheu, a vice president in charge of advertising. They were well known in the advertising community.

Those Twitter executives “had great relationships with the senior-most people at the Fortune 500 — they were incredibly transparent and inclusive,” said Lou Paskalis, a longtime advertising executive. “Those things engender tremendous trust, and those things are now in question.”

 
Image: Leslie Berland. Credit: Xavi Torrent/Getty Images

Image: JP Maheu. Credit: Astrid Stawiarz/Getty Images
 

Brands including Volkswagen Group, General Motors and United Airlines have said they will pause advertising on Twitter as they evaluate Mr. Musk’s ownership of the platform.

Mr. Musk elevated some managers at Twitter. He tapped Esther Crawford, a product manager, to revamp a subscription service called Twitter Blue. Mr. Musk wanted a new version of the service, which would cost $8 a month and include premium features and the verification check mark that was previously assigned for free to the accounts of celebrities, journalists and politicians to convey their authenticity.

He laid down a deadline: The team must finish Twitter Blue’s changes by Nov. 7 or its members would be fired.

Last week, Ms. Crawford shared a photo of herself sleeping at Twitter’s San Francisco offices in a sleeping bag and an eye mask, with the hashtag #SleepWhereYouWork.

Her message rubbed some colleagues the wrong way. They wondered in private chats why they should commit long working hours to a man who could fire them, according to five people and messages seen by The Times. On Twitter, Ms. Crawford responded to what she called “hecklers” by saying she had received supportive messages from other entrepreneurs and “builders of all types.”

The scope of layoffs was a moving target. Twitter managers were initially told to cut 25 percent of the work force, three people said. But Tesla engineers who reviewed Twitter’s code proposed deeper cuts to the engineering teams. Executives overseeing other parts of Twitter were told to expand their layoff lists.

Twitter executives also suggested assessing the lists for diversity and inclusion issues so the cuts would not hit people of color disproportionately and to avoid legal trouble. Mr. Musk’s team brushed aside the suggestion, two people said.

 

On Nov. 2, employees stumbled upon an open channel in the internal Slack messaging system where human resources and legal teams were discussing the layoffs. In a message seen by The Times, one employee said 3,738 workers could be laid off, or about half the work force. The message was widely shared internally.

That evening, Mr. Musk met with some advisers to settle on the reduction, according to a calendar invitation seen by The Times. They were joined by employees from Twitter’s human resources and staff from his other companies.

Anticipating the cuts, employees began bidding farewell to their colleagues, trading phone numbers and connecting on LinkedIn. They also pulled together documents and internal resources to help workers who survived the layoffs.

One engineering manager was approached by Mr. Musk’s advisers — or “goons,” as Twitter employees called them — with a list of hundreds of people he had to let go. He vomited into a trash can near his feet.

Late on Nov. 3, an email landed in employees’ inboxes. “In an effort to place Twitter on a healthy path, we will go through the difficult process of reducing our global work force,” the email, signed “Twitter,” said.

Pandemonium followed. While the note said employees would receive a follow-up email the next morning about whether they still had jobs, many found themselves locked out of email or Slack that night, an indication they had been laid off. Those who remained in Slack posted saluting emojis en masse as a send-off for co-workers.

The cuts were enormous. In Redbird, Twitter’s platform and infrastructure organization, Mr. Musk shed numerous managers. The unit also lost about 80 percent of its engineering staff, raising internal concerns about the company’s ability to keep its site up and running.

 

In Bluebird, Twitter’s consumer division, dozens of product managers were laid off, leaving just over a dozen of them. The new ratio of engineers to managers was 70 to 1, according to one estimate.

 
Image: Mr. Musk in New York last Friday, the day after Twitter employees received an email about mass layoffs. Credit: Andrew Kelly/Reuters

As layoffs unfolded, tech recruiters sensed opportunity. Top managers at rival companies such as Meta and Google sent messages to some of the employees being let go from Twitter, said two people who received the notes.

Most of Mr. Musk’s subordinates remained quiet throughout the process. But Mr. Calacanis, the venture capitalist, had been active on Twitter responding to product suggestions and concerns.

Last week, Mr. Musk dispatched a lieutenant to the “war room” to ask Mr. Calacanis, who was there, to cool it on Twitter and stop acting as if he were leading product development or policy, people familiar with the exchange said.

“To be clear, Elon is the product manager and CEO,” Mr. Calacanis later tweeted. “As a power user (and that’s all I am!) I’m really excited.”

By last Saturday, Mr. Musk’s advisers realized that the cuts may have been too deep, four people said. Some asked laid-off engineers, designers and product managers to return to their old jobs, three people familiar with the conversations said. The tech newsletter Platformer earlier reported the outreach.

 

At Goldbird, Twitter’s revenue division, the company had to bring back those who ran key money-generating products that “no one else knows how to operate,” people with knowledge of the business said. One manager agreed to try rehiring some laid-off workers, but expressed concerns that they were “weak, lazy, unmotivated and they may even be against an Elon Twitter,” two people familiar with the matter said.

On Monday, some Twitter employees arrived at work to find that certain systems they had relied on no longer worked. In San Francisco, an engineer discovered that some contracts with vendors that provide software for managing user data had been put on hold or had expired, and that the managers and executives who could fix the problem had been laid off or resigned.

On Wednesday, workers in Twitter’s New York office were unable to use the Wi-Fi after a server room overheated and knocked it offline, two people said.

Mr. Musk plans to begin making employees pay for lunch — which had been free — at the company cafeteria, two people said.

 
Image: Jason Calacanis. Credit: Christie Hemm Klok for The New York Times

Image: Damien Kieran. Credit: Joshua Roberts/Reuters

Inside Twitter, some employees have clashed with Mr. Musk’s advisers.

This week, security executives disagreed with Mr. Musk’s team over how Twitter should meet its obligations to the Federal Trade Commission. Twitter had agreed to a settlement with the F.T.C. in 2011 over privacy violations, which requires the company to submit regular reports about its privacy practices and open its doors to audits.

On Wednesday, a day before a deadline for Twitter to submit a report to the F.T.C., Twitter’s chief information security officer, Lea Kissner; chief privacy officer, Damien Kieran; and chief compliance officer, Marianne Fogarty, resigned.

 

In internal messages later that day, an employee wrote about the resignations and suggested that internal privacy reviews of Twitter’s products were not proceeding as they should under the F.T.C. settlement.

Some engineers could be required to “self-certify” that their projects complied with the settlement, rather than relying on reviews from lawyers and executives, a shift that could lead to “major incidents,” the employee wrote.

“Elon has shown that his only priority with Twitter users is how to monetize them,” the person wrote in the message, which was viewed by The Times.

The employee added that Mr. Spiro, Mr. Musk’s lawyer, had said the billionaire was willing to take risks. Mr. Spiro, the employee said, told workers that “Elon puts rockets into space — he’s not afraid of the F.T.C.”

The F.T.C. said that it was tracking the developments at Twitter with “deep concern” and that “no C.E.O. or company is above the law.” Mr. Musk later sent employees an email saying Twitter will adhere to the F.T.C. settlement.

On Thursday, more Twitter executives resigned, including Kathleen Pacini, a human-resource leader, and Yoel Roth, the head of trust and safety.

At the meeting with employees that day, Mr. Musk tried to sound a note of optimism about Twitter’s future.

 

“Twitter can form an incredibly valuable service to the world and be the public town square,” he said, noting it should be a “battleground of ideas” where debate could “take the place of violence in a lot of cases.”

 

 

The High Cost of Living Your Life Online

https://www.wired.com/story/privacy-psychology-social-media/

Constantly posting content on social media can erode your privacy, and your sense of self.

Image: A black-and-white photograph of a person taking a selfie in front of a window. Photograph: Luka Milanovic/Getty Images

New Report Highlights the Decline of Facebook and IG, as TikTok Becomes the New Home of Entertainment

https://www.socialmediatoday.com/news/new-report-highlights-the-decline-of-facebook-and-ig-as-tiktok-becomes-the/631694/

By Andrew Hutchinson Content and Social Media Manager

Have you found yourself using Instagram way less of late? The once trendsetting social platform seems to have lost its luster, in large part due to Instagram's insistence on pumping more content from accounts that you don't follow into your main IG feed.

The 'inspiration' for that approach is TikTok, which has seen great success by focusing on content, as opposed to creators, with the app opening to a 'For You' feed of algorithmically selected clips, based on your viewing habits.

Instagram, as usual, saw that as an opportunity, and it's since been working to negate your direct input – i.e. the accounts that you've chosen to follow – by showing you more and more stuff that it thinks you'll like. Which is annoying, and personally, I don't find Instagram anywhere near as engaging as it once was.

And it seems many other users agree. According to a new report from The Wall Street Journal, Instagram engagement is declining, with Reels, in particular, seeing a significant drop-off in user engagement of late. As reported by WSJ, TikTok users are spending over 10x as many hours consuming content in that app as Instagram users currently spend viewing Reels. According to a leaked internal report, Reels engagement is also in decline, dropping 13.6% in recent months, while 'most Reels users have no engagement whatsoever.'

Meta has lightly refuted the claims, stating that the usage data doesn't provide the full picture, though it declined to add any more context, which is Meta's usual approach when it can't dispel such claims with its own data. Take, for example, total time spent in its apps. Back in 2016, as part of its regular performance reporting, Meta noted that people were spending more than 50 minutes per day, on average, using Facebook, Instagram and Messenger.

It hasn’t reported any official stats on this ever since, which many believe is because that number has been in steady decline, and Meta sees no value in reporting that it’s losing ground, and has been for years now. Meta, instead, is keen to talk about daily and monthly active users, where its figures are solid. But this almost feels like misdirection – Facebook and Instagram, in particular, have traditionally been based on building your social graph, and establishing a digital connection with the people that you know and want to stay connected with, and informed about.

As such, it makes sense that a lot of people log onto these apps each day just to see if their friends and family have shared anything new. That doesn’t, however, mean that they’re spending a lot of time in these apps. Which is another reason why Meta’s trying to push more interesting content into your main feed, and in between updates from your connections – because if it can hook those people that are just checking in, then logging straight back out, that could be a key way to get its engagement stats back on track. But it’s not working.

Again, Facebook and Instagram have spent years pushing you to establish connections with the people that you care about, even introducing an algorithm to ensure that you see the most important updates from these users and Pages every day. At one point, Facebook noted that an average user was eligible to see over 1,500 posts every day, based on the people and Pages they were connected to – which is way more than they could ever view. So it brought in the algorithm to help maximize engagement – which also had the added benefit of squeezing Page reach, and forcing more brands to pay up. But now, Facebook is actively working to add in even more content, cluttering your feed beyond the posts that you could already be shown, and making it harder than ever to see posts from the people you actually want to stay updated on. It’s hard to see how that serves users’ interests.

And again, it seems that users are understandably frustrated by this, based on these latest engagement stats, and on previously reported data from Facebook which showed that young users are spending less and less time in the app. That’s because Facebook is fundamentally going against its own ethos, purely for its own gain. Accept it or not, people go to different apps for different purposes, which is the whole point of differentiation and finding a niche in the industry. People go to TikTok for entertainment, not for connecting with friends (worth noting that TikTok has actually labeled itself an ‘entertainment app’, as opposed to a social network), while users go to Facebook and IG to see the latest updates from people they care about.

The focus is not the same, and in this new, more entertainment-aligned paradigm, Meta’s once all-powerful, unmatched social graph is no longer the market advantage that it once was. But Meta, desperately seeking to counter its engagement declines, keeps trying to get people to stick around, which is seemingly having the opposite effect. Of course, Meta needs to try, it needs to seek ways to negate user losses as best it can – it makes sense that it’s testing out these new approaches. But they’re not the solution. How, then, can Instagram and Facebook actually re-engage users and stem the tide of people drifting across to TikTok? There are no easy answers, but I’m tipping the next phase will involve exclusive contracts with popular creators, as they become the key pawns in the new platform wars. TikTok’s monetization systems are not as evolved, and YouTube and Meta could theoretically blow it out of the water if they could rope in the top stars from across the digital ecosphere. That could keep people coming to their apps instead, which could see TikTok engagement wither, like Vine before it.
But other than forcing people to spend more time on Facebook by hijacking their favorite stars, there aren’t many compelling reasons for people to spend more time in Meta’s apps. At least not right now, as they increasingly dilute any form of differentiation.

But essentially, it comes down to a major shift in user behaviors, away from following your friends, and seeing all the random stuff that they post, to following trends, and engaging with the most popular, most engaging content from across the platform, as opposed to walling off your own little space.

At one stage, the allure of social media was that it gave everyone their own soapbox, a means to share their voice, their opinion, to be their own celebrity in their own right, at least among their own networks. But over time, we’ve seen the negatives of that too. Over-sharing can lead to problems when it’s saved in the internet’s perfect memory for all time, while increasing division around political movements has also made people less inclined to share their own thoughts, for fear of unwanted criticism or misunderstanding. Which is why entertainment has now become the focus of the next generation – it’s less about personal insights and more about engaging in cultural trends. That’s why TikTok is winning, and why Facebook and Instagram are losing out, despite their frantic efforts.

Facebook Knows It’s Losing The Battle Against TikTok

Meta and Mark Zuckerberg face a six-letter problem. Spell it out with me: T-i-k-T-o-k.

Yeah, TikTok, the short-form video app that has hoovered up a billion-plus users and become a Hot Thing in Tech, means trouble for Zuckerberg and his social networks. He admitted as much several times in a call with Wall Street analysts earlier this week about quarterly earnings, a briefing in which he sought to explain his apps’ plateauing growth—and an actual decline in Facebook’s daily users, the first such drop in the company’s 18-year history.

Zuckerberg has insisted a major part of his TikTok defense strategy is Reels, the TikTok clone—ahem, short-form video format—introduced on Instagram and Facebook and launched in August 2020.

If Zuckerberg believed in Reels’ long-term viability, he would take a real run at TikTok by pouring money into Reels and its creators. Lots and lots of money. Something approaching the kind spent by YouTube, which remains the most lucrative income source for social media celebrities. (Those creators produce content to draw in engaged users. The platforms sell ads to appear with the content – more creators, more content, more users, more potential ad revenue. It’s a virtuous cycle.)

Now, here’s as good a time as any for a crash course in creator economics. For this, there’s no better guide than Hank Green, whose YouTube video on the subject recently went viral. His fame is most rooted on YouTube, where he runs nine channels from his Montana home. His most popular channel is Crash Course (13.1 million subscribers – an enviable YouTube base), to which he posts educational videos for kids about subjects like Black Americans in World War II and the Israeli-Palestinian conflict.

Like the savviest social media publishers, Green fully understands that YouTube offers the best avenue for making money. It shares 55% of all ad revenue earned on a video with its creator. “YouTube is good at selling advertisements: It’s been around a long time, and it’s getting better every year,” Green says. On YouTube, he earns around $2 per thousand views. (In all, YouTube distributed nearly $16 billion to creators last year.)
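To make the revenue-share math concrete, here’s a minimal sketch of the RPM arithmetic, using figures quoted in this piece (Green’s roughly $2 RPM on YouTube, and the roughly $0.60 RPM on 16 million Reels views reported below); the helper function itself is just for illustration:

```python
def creator_earnings(views: int, rpm: float) -> float:
    """Creator payout for a given view count at a given RPM (dollars per 1,000 views)."""
    return views / 1_000 * rpm

# YouTube's 55% ad-revenue share works out, for Green, to roughly $2 RPM.
youtube = creator_earnings(16_000_000, 2.00)

# The same audience at the ~$0.60 RPM he reports on Reels:
reels = creator_earnings(16_000_000, 0.60)

print(f"YouTube: ${youtube:,.0f} vs. Reels: ${reels:,.0f}")
```

At identical view counts, the RPM gap alone more than triples the payout – which is the economic pull YouTube has on creators like Green.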

Green sports an expansive mindset, though, and he has accounts on TikTok, Instagram and Facebook, too. TikTok doesn’t come close to paying as well as YouTube: On TikTok, Green earns pennies per every thousand views.

Meta is already beginning to offer some payouts for Reels. Over the last month, Reels has finally amassed enough of an audience for Green’s videos to accumulate 16 million views and earn around 60 cents per thousand views. That’s many times TikTok’s rate, but still not enough to get Green to divert any substantial part of his focus to Reels, which has never managed to replicate TikTok’s zeitgeisty place in pop culture. (TikTok “has deeper content, something fascinating and weird,” explains Green. Reels, however, is “very surface level. None of it is deeper,” he says.) Another factor weighing on Reels: Meta’s bad reputation. “Facebook has traditionally been the company that has been kind of worst at being a good partner to creators,” he says, citing in particular Facebook’s earlier pivot to long-form video that led to the demise of several promising media startups, like Mic and Mashable.

This is where Zuckerberg could use Meta’s thick profit margin (36%, better even than Alphabet’s) and fat cash pile ($48 billion) to shell out YouTube-style cash to users posting Reels, creating an obvious enticement to prioritize Reels over TikTok. Maybe even Reels over YouTube, which has launched its own TikTok competitor, Shorts.

Now, imagine how someone like Green might get more motivated to think about Meta if Reels’ rate crept up to 80 cents or a dollar per thousand views. Or $1.50. Or a YouTube-worthy $2. Or higher still: YouTube earnings can climb over $5 per thousand views, or even double that for the most popular creators.

Meta has earmarked up to $1 billion for these checks to creators, which sounds big until you remember the amount of capital Meta has available to it. (And think about the sum YouTube disburses.) Moreover, Meta has set a timeframe for dispensing those funds, saying last July it would continue through December 2022. Setting a timetable indicates that Meta could (will likely?) turn off the financing come next Christmas.

Zuckerberg has demonstrated a willingness to plunk down Everest-size mountains of money over many years for projects he does fully believe in. The most obvious example is the metaverse, the latest Zuckerberg pivot. Meta ran up a $10.1 billion bill on it last year to develop new augmented and virtual reality software and headsets and binge hire engineers. Costs are expected to grow in 2022. And unlike Reels, metaverse spending has no semblance of a time schedule; Wall Street has been told the splurge will continue for the foreseeable future. Overall, Meta’s view on the metaverse seems to be, We’ll spend as much as possible—for as long as it takes—for this to happen.

The same freewheeling mindset doesn’t seem to apply to Reels. But Zuckerberg knows he can’t let TikTok take over the short-form video space unopposed. Meta needs to hang onto the advertising revenue generated by Instagram and Facebook until it can make the metaverse materialize. (Instagram and Facebook, for perspective, generated 98% of Meta’s $118 billion revenue last year; sales of Meta’s VR headset, the Quest 2, accounted for the remaining 2%.) And advertising dollars will increasingly move to short-form video, following users’ increased demand for this type of content over the last several years.

Reality is, Zuckerberg has already admitted he doesn’t see Reels as a long-term solution to his T-i-k-T-o-k problem. If he did, he’d spend more on it and creators like Green than what the metaverse costs him over six weeks.
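That six-week comparison is easy to verify from the figures above ($10.1 billion in metaverse spending last year, versus the up-to-$1 billion creator fund); a quick back-of-the-envelope check:

```python
metaverse_annual = 10.1e9   # Meta's 2021 metaverse bill
creator_fund = 1.0e9        # total earmarked for creator payouts through 2022

# Six weeks of metaverse spending at last year's run rate:
six_weeks = metaverse_annual * 6 / 52
print(f"${six_weeks / 1e9:.2f}B per six weeks")  # prints "$1.17B per six weeks"

# The entire creator fund is smaller than six weeks of metaverse spending.
assert six_weeks > creator_fund
```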

Meet The VC Firm With $544 Million To Buy ‘Orphaned’ Startup Stakes From Other Funds

NewView Capital founder Ravi Viswanathan has worked with startups as a venture capitalist for more than two decades. He’s never seen the game change more than in its most recent stretch.

He rattles off some highlights (and some low ones): the sudden lockdown of early 2020; the host of new players who split off from known firms or launched first-time funds; the increased startup interest from hedge funds and public market specialists; the record dollars flowing in and the more recent pullback. “The last two or three years have been the most extraordinary,” Viswanathan says.

In the thick of it all, Viswanathan’s firm is hoping to profit through a less-common approach. Founded in 2018, the firm looks to build positions in startups by buying out other VC firms – either taking a portfolio of their equity holdings, or some or all of a single investment à la carte. And Viswanathan now has two new funds, worth a combined $544 million in new capital, to do it with.

NewView’s pitch is simple: With startups taking longer to go public or exit, firms with strong paper returns face pressure to return some immediate cash to their own backers. And as investors switch firms, set up their own shingle or retire, some companies find themselves orphans, part-owned by firms where their lead supporter is long gone. “The first reaction is, ‘What is this?’” Viswanathan says. “Then as you go through it, they start embracing that it’s a way to reset the clock.”

Secondary transactions—the purchase of equity shares already issued to insiders or investors—are nothing new to Silicon Valley. Taking basket positions of a bunch of a firm’s companies, however, without simply buying the entire fund, is more of a twist. Viswanathan’s proof of concept came when the longtime partner at many-billions-in-assets firm NEA splintered off with a billion dollars’ worth of its holdings across 31 companies three-plus years ago, the lion’s share of a $1.35 billion fund that also made half a dozen direct investments in startups. NewView’s holdings now include unicorns such as Forter, MessageBird and Plaid, as well as 23andMe and Duolingo, which went public in 2021, and Segment, which was acquired in 2020 for $3.2 billion.

Unlike a traditional venture firm, which operates under the assumption that it won’t return capital from positions for seven or even ten years, NewView – entering partway through a company’s life – anticipates time horizons of five or six years for its investments to exit. Its primary fund represents $244 million of the new capital, intended for primary startup investments, with a $300 million opportunities fund to make follow-on investments and build positions pieced together from multiple sellers. As a registered investment advisor, NewView has no caps on how it chooses to balance its portfolio. The firm will look to invest in about eight to ten deals per year.

The challenge, of course, is to find deals that provide Viswanathan and company with a venture-like upside—but that other firms are simultaneously willing to sell. Viswanathan says he has met with about 40 other firms in the past several years. “After one conversation, we can very quickly get a sense if this is more ‘I win, you lose,’ or if it’s really a win-win,” he says.

At 137 Ventures, a growth-stage venture firm that provides founders with loans in exchange for the option to convert their debt into equity, among other tactics, founder Justin Fishner-Wolfson says that the relationship-driven nature of venture capital provides impetus for such transactions to remain aboveboard. “Smart, good investors are going to want to make sure that everyone is happy with the outcome, because that matters in terms of their ability to operate in the future,” he says.

Both lawyers and investors close to the secondary market agree with Viswanathan that the structural pressures pushing a demand for such vehicles are real. Investors now raising funds more frequently, as fast as annually, might be multimillionaires on paper, but not yet have received any profits themselves, notes Ed Zimmerman, chair of the tech group at Lowenstein Sandler and an investor in underrepresented fund managers through First Close Partners. “There’s no better time to ask your LPs to re-up than once you’ve handed them a check.”

The pace of funds raising can also strain institutional investors who face allocating more capital than anticipated to venture funds, while their public equity positions take a haircut in the recently unforgiving market for tech stocks. At Industry Ventures, founder and longtime secondaries expert Hans Swildens says he’s only recently heard of limited partners asking funds to take some profits off the table, especially as the drumbeat of IPOs of 2021 appears to have slowed so far this year.

Pricing pressures could cut both ways, however. At EquityZen, founder Phil Haslett notes that individual holders in startups are now offering shares at 10% to 30% lower than what they were asking late last year. “VC firms aren’t in a mad rush to print a trade at 30% below where they’ve seen it,” he says.

Fund formation expert John Dado at Cooley is skeptical of the liquidity crunch. He notes that some firms working with his law firm are exploring the opposite: how to build in mechanisms not to need to deliver cash for even longer periods, such as 12 or even 20 years. But Dado does see value in firms finding homes for investments no longer close to their VC firms.

That’s ultimately NewView’s hope: that not only is secondary needed in the startup ecosystem, but that, given its VC credentials, it’ll be a comfortable option. (Others, like Industry Ventures, are still bigger – “This market is so big, you barely bump into people,” Swildens says.) NewView recently brought on another partner, NextWorld Capital and Scale Venture Partners veteran Ben Fu, joining Viswanathan and partner David Yoo. NewView has no women partners; two of its three partner-track investment principals, however, are women, according to Viswanathan.

At fraud prevention startup Forter, valued at $3 billion, cofounder Michael Reitblat has worked with Viswanathan, first at NEA and now at NewView. He says he still calls for help on a personal, in-depth level he might not with other investors on his cap table with larger portfolios to handle, such as Bessemer Venture Partners, Sequoia and Tiger Global. He points to NewView’s team of operating experts as another source of strength.

“There’s a lot of secondary funds, but they just buy equity,” Reitblat says. “If you actually want someone with more operating knowledge and experience and time, I think Ravi has that.”

Penlink – A small Nebraska company is helping law enforcement around the world spy on users of Google, Facebook and other tech giants

A small Nebraska company is helping law enforcement around the world spy on users of Google, Facebook and other tech giants. A secretly recorded presentation to police reveals how deeply embedded in the U.S. surveillance machine PenLink has become.


PenLink might be the most pervasive wiretapper you’ve never heard of.

The Lincoln, Nebraska-based company is often the first choice of law enforcement looking to keep tabs on the communications of criminal suspects. It’s probably best known, if it’s known at all, for its work helping convict Scott Peterson, who murdered his wife Laci and their unborn son in a case that fomented a tabloid frenzy in the early 2000s. Nowadays the company helps cops keep tabs on suspected wrongdoing by users of Google, Facebook and WhatsApp – whatever web tool law enforcement requests.

With $20 million in revenue every year from U.S. government customers such as the Drug Enforcement Administration, the FBI, Immigration and Customs Enforcement (ICE) and almost every other law enforcement agency in the federal directory, PenLink enjoys a steady stream of income. That doesn’t include its sales to local and state police, where it also does significant business but for which there are no available revenue figures. Forbes viewed contracts across the U.S., including with towns and cities in California, Florida, Illinois, Hawaii, North Carolina and Nevada.

“PenLink is proud to support law enforcement across the U.S. and internationally in their effort to fight wrongdoing,” the company said. “We do not publicly discuss how our solution is being utilized by our customers.”

Sometimes it takes a spy to get transparency from a surveillance company. Jack Poulson, founder of technology watchdog Tech Inquiry, went incognito at the National Sheriffs’ Association’s winter conference in Washington. He recorded a longtime PenLink employee showing off what the company could do for law enforcement and discussing the scale of its operations. Not only does the recording lift the lid on how deeply involved PenLink is in wiretapping operations across the U.S., it also reveals in granular detail just how tech providers such as Apple, Facebook and Google provide information to police when they’re confronted with a valid warrant or subpoena.

Scott Tuma, a 15-year PenLink veteran, told attendees at the conference that the business got off the ground in 1987 when a law enforcement agency had an abundance of call records that it needed help organizing. It was in 1998 that the company deployed its first wiretap system. “We’ve got those, generally, scattered all over the U.S. and all over the world,” Tuma said. Though he didn’t describe that tool in detail, the company calls it Lincoln.

Today, it’s social media rather than phones that’s proving to be fertile ground for PenLink and its law enforcement customers. Tuma described working with one Justice Department gang investigator in California, saying he was running as many as 50 social media “intercepts.” PenLink’s trade is in collecting and organizing that information for police as it streams in from the likes of Facebook and Google.

The PenLink rep said that tech companies can be ordered to provide near-live tracking of suspects free of charge. One downside is that the social-media feeds don’t come in real time, like phone taps. There’s a delay – 15 minutes in the case of Facebook and its offshoot, Instagram. Snapchat, however, won’t give cops data much more than four times a day, he said. In some “exigent circumstances,” however, Tuma said he’d seen companies providing intercepts in near real time.

Making matters trickier for the police, to get the intercept data from Facebook, they have to log in to a portal and download the files. If an investigator doesn’t log in every hour during an intercept, they get locked out. “This is how big of a pain in the ass Facebook is,” Tuma said. PenLink automates the process, however, so if law enforcement officers have to take a break or their working day ends, they’ll still have the intercept response when they return.

A spokesperson for Meta, Facebook’s owner, said: “Meta complies with valid legal processes submitted by law enforcement and only produces requested information directly to the requesting law enforcement official, including ensuring the type of legal process used permits the disclosure of the information.”

Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union, reviewed the comments made by Tuma. She raised concerns about the amount of information the government was collecting via PenLink. “The law requires police to minimize intercepted data, as well as give notice and show necessity,” she said. “It’s hard to imagine that wiretapping 50 social media accounts is regularly necessary, and I question whether the police are then going back to all the people who comment on Facebook posts or are members of groups to tell them that they’ve been eavesdropped upon.”

She suggested that Tuma’s claim that a “simple subpoena” to Facebook could yield granular information – such as when and where a photo was uploaded, or when a credit-card transaction took place on Facebook Marketplace – may be an overreach of the law.

There’s a lot of nuance involving where government actions might stray over the line, said Randy Milch, a New York University law professor and former general counsel at telecoms giant Verizon Communications. “While I’m sympathetic to the idea that the government is going to ask for more than it needs, simply saying ‘too much data must mean an overreach’ is the kind of arbitrary rule that isn’t workable,” he told Forbes. “The government doesn’t know the amount of the data it’s seeking” before the fact. Milch noted that the Stored Communications Act explicitly allows for subpoenas to collect records including names, addresses, means and source of payment, as well as information on session times and durations.

‘Google’s the best’

In his Washington talk, Tuma gushed over Google’s location-tracking data. Google “can get me within three feet of a precise location,” he said. “I cannot tell you how many cold cases I’ve helped work on where this is five, six, seven years old and people need to put [the suspect] at a hit-and-run or it was a sexual assault that took place.” If people are carrying their phones and have Gmail accounts, he said, law enforcement “can get really lucky. And it happens a lot.” Facebook, by comparison, will get a target within 60 to 90 feet, Tuma said, while Snapchat has started providing more accurate location information within 15 feet.

Snapchat didn’t respond to requests for comment.

Tuma also described having a lot of success in asking Google for search histories. “Multiple homicide investigations, I’ve seen it: ‘How to dispose of a human body,’ ‘best place to dump a body.’ Swear to God, that’s what they search for. It’s in their Google history. They cleared their browser and their cookies and things, they think it’s gone. Google’s the best.” A Google spokesperson said the company tries to balance privacy concerns with the needs of police. “As with all law enforcement requests, we have a rigorous process that is designed to protect the privacy of our users while supporting the important work of law enforcement,” the spokesperson said.

Tuma described Apple’s iCloud warrants as “phenomenal.” “If you did something bad, I bet you I could find it on that backup,” he said. (Apple didn’t respond to requests for comment.) It was also possible, Tuma said, to look at WhatsApp messages, despite the platform’s assurances of tight security. Users who back up messages effectively remove the protection provided by the app’s end-to-end encryption. Tuma said he was working on a case in New York where he was sitting on “about a thousand recordings from WhatsApp.” The Facebook-owned app may not be so susceptible to near real-time interception, however, as backups can only be done as frequently as once a day. Metadata, however, showing how a WhatsApp account was used and which numbers were contacting one another and when, can be tracked with a surveillance technology known as a pen-register. PenLink provides that tool as a service.

All messages on WhatsApp are end-to-end encrypted, said a company spokesperson, and it’s transparent about how it works with law enforcement. “We know that people want their messaging services to be reliable and safe – and that requires WhatsApp to have limited data,” the spokesperson said. “We carefully review, validate and respond to law enforcement requests based on applicable law and in accordance with our terms of service, and are clear about this on our website and in regular transparency reports. This work has helped us lead the industry in delivering private communications while keeping people safe, and has led to arrests in criminal cases.” They pointed to a release last year of a feature that allows users to encrypt their backups in the iCloud or Google Drive, while noting that when they respond to a law enforcement request, they don’t provide the data to any private company like PenLink, but directly to law enforcement.

Going dark or swimming in data?

In recent years, the FBI and various police agencies have raised concerns about end-to-end encryption from Google or Facebook cutting off valuable data sources. But Tuma said that Silicon Valley’s heavyweights aren’t likely to start hiding information from police because it would mean doing the same to advertisers. “I always call B.S. on it for this reason right here: Google’s ad revenue in 2020 was $182 billion,” Tuma said.

Granick of the ACLU said that such claims showed that the FBI, contrary to what the bureau claimed, wasn’t losing sight of suspects because of encrypted apps like WhatsApp. “The fact that backups and other data are not encrypted creates a treasure trove for police,” Granick said. “Far from going dark, they are swimming in data.” It’s noteworthy that Signal, an encrypted communications app that’s become hugely popular in recent years, does not have a feature that allows users to back up their data to the cloud.

Indeed, the amount of data being sent by the likes of Google and Facebook to police can be astonishing. Forbes recently reviewed a search warrant in which the police were sent 27,000 pages of information on a Facebook account of a man accused of giving illegal tours of the Grand Canyon. Tuma said he’d seen even bigger returns, the largest being around 340,000.

Though its headcount is small – fewer than 100 employees, according to LinkedIn – PenLink’s ability to tap a wide range of telecoms and internet businesses at scale has made the company very attractive to police over the last two decades. Over the last month alone, the DEA ordered nearly $2 million in licenses and the FBI $750,000.

Through a Freedom of Information Act request, Forbes obtained information on a $16.5 million PenLink contract with ICE that was signed in 2017 and continued to 2021. It details a need for the company’s suite of telecommunications analysis and intercept software applications, including what it called its PLX tool. The contract requires PenLink, at a minimum, to help wiretap a large number of providers, including AT&T, Iridium Satellite, Sprint, Verizon, T-Mobile, Cricket, Cablevision, Comcast, Time Warner, Cox, Skype, Vonage, Virgin Mobile and what the government calls “social media and advertising websites” such as Facebook and WhatsApp.

PenLink’s work wouldn’t be possible without the compliance of tech providers, who, according to Granick, “are storing too much data for too long, and then turning too much over to investigators. Social media companies are able to filter by date, type of data, and even sender and recipient. Terabytes of data are almost never going to be responsive to probable cause, which is what the Fourth Amendment requires.”


After ruining Android messaging, Google says iMessage is too powerful

Google failed to compete with iMessage for years. Now it wants Apple to play nice.

Source: https://arstechnica.com/gadgets/2022/01/after-ruining-android-messaging-google-says-imessage-is-too-powerful/

Google took to Twitter this weekend to complain that iMessage is just too darn influential with today’s kids. The company was responding to a Wall Street Journal report detailing the lock-in and social pressure Apple’s walled garden is creating among US teens. iMessage brands texts from iPhone users with a blue background and gives them additional features, while texts from Android phones are shown in green and only have the base SMS feature set. According to the article, “Teens and college students said they dread the ostracism that comes with a green text. The social pressure is palpable, with some reporting being ostracized or singled out after switching away from iPhones.” Google feels this is a problem.

“iMessage should not benefit from bullying,” the official Android Twitter account wrote. “Texting should bring us together, and the solution exists. Let’s fix this as one industry.” Google SVP Hiroshi Lockheimer chimed in, too, saying, “Apple’s iMessage lock-in is a documented strategy. Using peer pressure and bullying as a way to sell products is disingenuous for a company that has humanity and equity as a core part of its marketing. The standards exist today to fix this.”

The “solution” Google is pushing here is RCS, or Rich Communication Services, a GSMA standard from 2008 that has slowly gained traction as an upgrade to SMS. RCS adds typing indicators, user presence, and better image sharing to carrier messaging. It is a 14-year-old carrier standard, though, so it lacks many of the features you would want from a modern messaging service, like end-to-end encryption and support for non-phone devices. Google tries to band-aid over the aging standard with its Google Messages client, but the result is a lot of clunky solutions that don’t add up to a good modern messaging service.

Since RCS replaces SMS, Google has been on a campaign to get the industry to make the upgrade. After years of lobbying, the US carriers are all onboard, and there is some uptake among international carriers, too. The biggest holdout is Apple, which only supports SMS through iMessage.

Apple’s green-versus-blue bubble explainer from its website.

Apple hasn’t ever publicly shot down the idea of adding RCS to iMessage, but thanks to documents revealed in the Epic v. Apple case, we know the company views iMessage lock-in as a valuable weapon. Bringing RCS to iMessage and making communication easier with Android users would only help to weaken Apple’s walled garden, and the company has said it doesn’t want that.

In the US, iPhones are more popular with young adults than ever. As The Wall Street Journal notes, "Among US consumers, 40% use iPhones, but among those aged 18 to 24, more than 70% are iPhone users." It credits Apple's lock-in with apps like iMessage for this success.

Reaping what you sow

Google clearly views iMessage's popularity as a problem, and the company is hoping this public-shaming campaign will get Apple to change its mind on RCS. But Google giving other companies advice on messaging strategy is laughable, since Google probably has the least credibility of any tech company when it comes to messaging services. If the company really wants to do something about iMessage, it should try competing with it.

As we recently detailed in a 25,000-word article, Google's messaging history is a constant cycle of product launches and shutdowns. Thanks to a lack of product focus or any kind of top-down mandate from Google's CEO, no division is really "in charge" of messaging. As a consequence, the company has released 13 half-hearted messaging products since iMessage launched in 2011. If Google wants someone to blame for iMessage's dominance, it should start with itself, since it has continually sabotaged and abandoned its own attempts at an iMessage competitor.


Messaging is important, and even if it isn’t directly monetizable, a dominant messaging app has real, tangible benefits for an ecosystem. The rest of the industry understood this years ago. Facebook paid $22 billion to buy WhatsApp in 2014 and took the app from 450 million users to 2 billion users. Along with Facebook Messenger, Facebook has two dominant messaging platforms today, especially internationally. Salesforce paid $27 billion for Slack in 2020, and Tencent’s WeChat, a Chinese messaging app, is pulling in 1.2 billion users and yearly revenues of $5.5 billion. Snapchat is up to a $67 billion market cap, and Telegram is getting $40 billion valuations from investors. Google keeps trying ideas in this market, but it never makes an investment that is anywhere close to the competition.

Google once had a functional competitor to iMessage called Google Hangouts. Circa 2015, Hangouts was a messaging powerhouse; in addition to the native Hangouts messaging, it also supported SMS and Google Voice messages. Hangouts did group video calls five years before Zoom blew up, and it had clients on Android, iOS, the web, Gmail, and every desktop OS via a Chrome extension.

As usual, though, Google lacked any kind of long-term plan or ability to commit to a single messaging strategy, and Hangouts only survived as the "everything" messenger for a single year. By 2016, Google moved on to the next shiny messaging app and left Hangouts to rot.

Even if Google could magically roll out RCS everywhere, it’s a poor standard to build a messaging platform on because it is dependent on a carrier phone bill. It’s anti-Internet and can’t natively work on webpages, PCs, smartwatches, and tablets, because those things don’t have SIM cards. The carriers designed RCS, so RCS puts your carrier bill at the center of your online identity, even when free identification methods like email exist and work on more devices. Google is just promoting carrier lock-in as a solution to Apple lock-in.

Despite Google’s complaining about iMessage, the company seems to have learned nothing from its years of messaging failure. Today, Google messaging is the worst and most fragmented it has ever been. As of press time, the company runs eight separate messaging platforms, none of which talk to each other: there is Google Messages/RCS, which is being promoted today, but there’s also Google Chat/Hangouts, Google Voice, Google Photos Messages, Google Pay Messages, Google Maps Business Messages, Google Stadia Messages, and Google Assistant Messaging. Those last couple of apps aren’t primarily messaging apps but have all ended up rolling their own siloed messaging platform because no dominant Google system exists for them to plug into.

The situation is an incredible mess, and no single Google product is as good as Hangouts was in 2015. So while Google goes backward, it has resorted to asking other tech companies to please play nice with it while it continues to fumble through an incoherent messaging strategy.