Category archive: Privacy

The Privacy-Friendly Tech to Replace Your US-Based Email, Browser, and Search

Thanks to drastic policy changes in the US and Big Tech’s embrace of the second Trump administration, many people are moving their digital lives abroad. Here are a few options to get you started.


From your email to your web browsing, it’s highly likely that your daily online life is dominated by a small number of tech giants—namely Google, Microsoft, and Apple. But since Big Tech has been cozying up to the second Trump administration, which has taken an aggressive stance on foreign policy, and Elon Musk’s so-called Department of Government Efficiency (DOGE) has torn through the government, some attitudes toward using US-based digital services have been changing.

While movements to shift from US digital services aren’t new, they’ve intensified in recent months. Companies in Europe have started moving away from some US cloud giants in favor of services that handle data locally, and there have been efforts from officials in Europe to shift to homegrown tech that has fewer perceived risks. For example, the French and German governments have created their own Docs word processor to rival Google Docs.

Meanwhile, one consumer poll released in March had 62 percent of people from nine European countries saying that large US tech companies were a threat to the continent’s sovereignty. At the same time, lists of non-US tech alternatives and European-based tech options have seen a surge in visitors in recent months.

For three of the most widely used tech services—email, web browsers, and search engines—we’ve been through some of the alternatives that are privacy-focused and picked some options you may want to consider. Other options are available, but these organizations and companies aim to minimize data they collect and often put privacy first.

There are caveats, though. While many of the services on this list are based outside of the US, there’s still the potential that some of them rely upon Big Tech services themselves—for instance, some search engines can use results or indexes provided by Big Tech, while companies may use software or services, such as cloud hosting, that are created by US tech firms. So trying to distance yourself entirely may not be as straightforward as it first looks.

Web Browsers

Mullvad

Based in Sweden, Mullvad is perhaps best known for its VPN, but in 2023 the organization teamed up with digital anonymity service Tor to create the Mullvad Browser. The open source browser, which is available only on desktop, collects no user data, the company says, and is focused on privacy. The browser has been designed to stop people from tracking you via browser fingerprinting as you move around the web, and a “private mode” that isolates tracking cookies is enabled by default. “The underlying policy of Mullvad is that we never store any activity logs of any kind,” its privacy policy says. The browser is designed to work with Mullvad’s VPN but is also compatible with any VPN you might use.

Vivaldi

WIRED’s Scott Gilbertson swears by Vivaldi and has called it the web’s best browser. Available on desktop and mobile, the Norwegian-headquartered browser says it doesn’t profile your behavior. “The sites you visit, what you type in the browser, your downloads, we have no access to that data,” the company says. “It either stays on your local machine or gets encrypted.” It also blocks trackers and hosts data in Iceland, which has strong data protection laws. Its privacy policy says it anonymizes IP addresses and doesn’t share browsing data.

Search Engines

Qwant

French search engine Qwant has built its own search index, crawling more than 20 billion pages to create its own records of the web. Creating a search index is a hugely costly, laborious process, and as a result, many alternative search engines will not create an extensive index and instead use search results from Google or Microsoft’s Bing—enhancing them with their own data and algorithms. Qwant says it uses Bing to “supplement” search results that it hasn’t indexed. Beyond this, Qwant says it does not use targeted advertising or store people’s search history. “Your data remains confidential, and the processing of your data remains the same,” the company says in its privacy policy.

Mojeek

Mojeek, based out of the United Kingdom, has built its own web crawler and index, saying that its search results are “100% independent.” The search engine does not track you, it says in its privacy policy, and only keeps some specific logs of information. “Mojeek removes any possibility of tracking or identifying any particular user,” its privacy policy says. It uses its own algorithms to rank search results, not using click or personalization data to create ranks, and says that this can mean two people searching for the same thing while in different countries can receive the same search results.

Startpage

Based in the Netherlands, Startpage says that when you make a search request, the first thing that happens is it removes your IP address and personal data—it doesn’t use any tracking cookies, it says. The company uses Google and Bing to provide its search results but says it acts as an “intermediary” between you and the providers. “Startpage submits your query to Google and Bing anonymously on your behalf, then returns the results to you, privately,” it says on its website. “Google and Microsoft do not know who made the search request—instead, they only see Startpage.”

Ecosia

Nonprofit search engine Ecosia uses the money it makes to help plant trees. The company also offers various privacy promises when you search with it. Based in Germany, the company says it doesn’t collect excessive data and doesn’t use search data to personalize ads. Like other search alternatives, Ecosia uses Google’s and Bing’s search results (you can pick which one in the settings). “We only collect and process data that is necessary to provide you with the best search results (which includes your IP address, search terms and session behavioral data),” the company says on its website. The information it collects is gathered to provide search results from its Big Tech partners and to detect fraud, it says. (At the end of 2024, Ecosia partnered with Qwant to build more search engine infrastructure in Europe.)

Email Providers

Proton Mail

Based in Switzerland, Proton started with a privacy-focused email service and has built out a series of apps, including cloud storage, docs, and a VPN, to rival Google. The company says it cannot read any messages in people’s inboxes, and it offers end-to-end encryption for emails sent to other Proton Mail addresses, as well as a way to send password-protected emails to non-Proton accounts. It blocks trackers in emails and has multiple account options, including both free and paid choices. Its privacy policy describes what information the company has access to, which includes sender and recipient email addresses, the IP addresses messages arrive from, message subject lines, and when emails are sent. (Despite Switzerland’s strong privacy laws, the government has recently announced it may require encrypted services to keep users’ data, something that Proton has pushed back on.)

Tuta

Tuta, which used to be called Tutanota and is based in Germany, says it encrypts email content, subject lines, calendars, address books, and other data in your inbox. “The only unencrypted data are mail addresses of users as well as senders and recipients of emails,” it says on its website, adding that users’ encryption keys cannot be accessed by developers. Like Proton, emails sent between Tuta accounts are end-to-end encrypted, and you can send password-protected emails when messaging an account from another email provider. The company also has an end-to-end encrypted calendar and offers both free and paid plans.

Source: https://www.wired.com/story/the-privacy-friendly-tech-to-replace-your-us-based-email-browser-and-search/

Kids in China Are Using Bots and Engagement Hacks to Look More Popular on Their Smartwatches

 
 
In China, parents are buying smartwatches for children as young as 5, connecting them to a digital world that blends socializing with fierce competition.
 
 

At what age should a kid ideally get a smartwatch? In China, parents are buying them for children as young as five. Adults want to be able to call their kids and track their location down to a specific building floor. But that’s not why children are clamoring for the devices, specifically ones made by a company called Xiaotiancai, which translates to Little Genius in English.

The watches, which launched in 2015 and cost up to $330, are a portal into an elaborate world that blends social engagement with relentless competition. Kids can use the watches to buy snacks at local shops, chat and share videos with friends, play games, and, sure, stay in touch with their families. But the main activity is accumulating as many “likes” as possible on their watch’s profile page. On the extreme end, Chinese media outlets have reported on kids who buy bots to juice their numbers, hack the watches to dox their enemies, and sometimes even find romantic partners. According to tech research firm Counterpoint Research, Little Genius accounts for nearly half of global market share for kids’ smartwatches.

Status Games

Over the past decade, Little Genius has found ways to gamify nearly every measurable activity in the life of a child—playing ping pong, posting updates, the list goes on. Earning more experience points boosts kids to a higher level, which increases the number of likes they can send to friends. It’s a game of reciprocity—you send me likes, and I’ll return the favor. One 18-year-old recently told Chinese media that she had struggled to make friends until four years ago when a classmate invited her into a Little Genius social circle. She racked up more than one million likes and became a mini-celebrity on the platform. She said she met all three of her boyfriends through the watch, two of whom she broke up with because they asked her to send erotic photos.

 

High like counts have become a sort of status symbol. Some enthusiastic Little Genius users have taken to RedNote (or Xiaohongshu), a prominent Chinese social media app, to hunt for new friends so as to collect more likes and badges. As video tutorials on the app explain, low-level users can only give out five likes a day to any one friend; higher-ranking users can give out 20. Because the watch limits its owner to a total of 150 friends, kids are therefore incentivized to maximize their number of high-level friends. Lower-status kids, in turn, are compelled to engage in competitive antics so they don’t get dumped by higher-ranking friends.

“They feel this sense of camaraderie and community,” said Ivy Yang, founder of New York-based consultancy Wavelet Strategy, who has studied Little Genius. “They have a whole world.” But Yang expressed reservations about the way the watch seems to commodify friendship. “It’s just very transactional,” she added.

Engagement Hacks

On RedNote/Xiaohongshu, people post videos on circumventing Little Genius’s daily like limits, with titles such as “First in the world! Unlimited likes on Little Genius new homepage!” The competitive pressure has also spawned businesses that promise to help kids boost their metrics. Some high-ranking users sell their old accounts. Others sell bots that send likes or offer to help keep accounts active while the owner of a watch is in class.

Get enough likes—say, 800,000—and you become a “big shot” in the Little Genius community. Last month, a Chinese media outlet reported that a 17-year-old with more than 2 million likes used her online clout to sell bots and old accounts, earning her more than $8,000 in a year. Though she enjoyed the fame that the smartwatch brought her, she said she left the platform after getting into fights with other Little Genius “big shots” and facing cyberbullying.

 

In September, a Beijing-based organization called China’s Child Safety Emergency Response warned parents that children with Little Genius watches were at risk of developing dangerous relationships or falling victim to scams. Officials have also raised alarms about these hidden corners of the Little Genius universe. The Chinese government has begun drafting national safety standards for children’s watches, following growing concerns over internet addiction, content unfit for children, and overspending via the watch payment function. The company did not respond to requests for comment.

I talked to one parent who had been reluctant to buy the watch. Lin Hong, a 48-year-old mom in Beijing, worried that her nearsighted daughter, Yuanyuan, would become obsessed with its tiny screen. But once Yuanyuan turned 8, Lin relented and splurged on the device. Lin’s fears quickly materialized.

 

Yuanyuan loved starting her day by customizing her avatar’s appearance. She regularly sent likes to her friends and made an effort to run and jump rope to earn more points. “She would look for her smartwatch first thing every morning,” Lin said. “It was like adults, actually, they’re all a bit addicted.”

 

To curb her daughter’s obsession, Lin limited Yuanyuan’s time on the watch. Now she’s noticing that her daughter, who turns 9 soon, chafes at her mother’s digital supervision. “If I call her three times, she’ll finally pick up to say, ‘I’m still out, stop calling. I’m not done playing yet,’ and hang up,” Lin said. “If it’s like this, she probably won’t want to keep wearing the watch for much longer.”


This is an edition of Zeyi Yang and Louise Matsakis’s Made in China newsletter. Read previous newsletters here.

 

Anyone Can Buy Data Tracking US Soldiers and Spies to Nuclear Vaults and Brothels in Germany

Source: https://www.wired.com/story/phone-data-us-soldiers-spies-nuclear-germany/

by Dhruv Mehrotra and Dell Cameron

Nearly every weekday morning, a device leaves a two-story home near Wiesbaden, Germany, and makes a 15-minute commute along a major autobahn. By around 7 am, it arrives at Lucius D. Clay Kaserne—the US Army’s European headquarters and a key hub for US intelligence operations.

The device stops near a restaurant before heading to an office near the base that belongs to a major government contractor responsible for outfitting and securing some of the nation’s most sensitive facilities.

For roughly two months in 2023, this device followed a predictable routine: stops at the contractor’s office, visits to a discreet hangar on base, and lunchtime trips to the base’s dining facility. Twice in November of last year, it made a 30-minute drive to the Dagger Complex, a former intelligence and NSA signals processing facility. On weekends, the device could be traced to restaurants and shops in Wiesbaden.

The individual carrying this device likely isn’t a spy or high-ranking intelligence official. Instead, experts believe, they’re a contractor who works on critical systems—HVAC, computing infrastructure, or possibly securing the newly built Consolidated Intelligence Center, a state-of-the-art facility suspected to be used by the National Security Agency.

Whoever they are, the device they’re carrying with them everywhere is putting US national security at risk.

A joint investigation by WIRED, Bayerischer Rundfunk (BR), and Netzpolitik.org reveals that US companies legally collecting digital advertising data are also providing the world a cheap and reliable way to track the movements of American military and intelligence personnel overseas, from their homes and their children’s schools to hardened aircraft shelters within an airbase where US nuclear weapons are believed to be stored.

A collaborative analysis of billions of location coordinates obtained from a US-based data broker provides extraordinary insight into the daily routines of US service members. The findings also provide a vivid example of the significant risks the unregulated sale of mobile location data poses to the integrity of the US military and the safety of its service members and their families overseas.

We tracked hundreds of thousands of signals from devices inside sensitive US installations in Germany. That includes scores of devices within suspected NSA monitoring or signals-analysis facilities, more than a thousand devices at a sprawling US compound where Ukrainian troops were being trained in 2023, and nearly 2,000 others at an air force base that has crucially supported American drone operations.

A device likely tied to an NSA or intelligence employee broadcast coordinates from inside a windowless building with a metal exterior known as the “Tin Can,” which is reportedly used for NSA surveillance, according to agency documents leaked by Edward Snowden. Another device transmitted signals from within a restricted weapons testing facility, revealing its zig-zagging movements across a high-security zone used for tank maneuvers and live munitions drills.

We traced these devices from barracks to work buildings, Italian restaurants, Aldi grocery stores, and bars. As many as four devices that regularly pinged from Ramstein Air Base were later tracked to nearby brothels off base, including a multistory facility called SexWorld.

Experts caution that foreign governments could use this data to identify individuals with access to sensitive areas; terrorists or criminals could decipher when US nuclear weapons are least guarded; or spies and other nefarious actors could leverage embarrassing information for blackmail.

“The unregulated data broker industry poses a clear threat to national security,” says Ron Wyden, a US senator from Oregon with more than 20 years overseeing intelligence work. “It is outrageous that American data brokers are selling location data collected from thousands of brave members of the armed forces who serve in harm’s way around the world.”

Wyden approached the US Defense Department in September after initial reporting by BR and Netzpolitik.org raised concerns about the tracking of potential US service members. The DOD failed to respond. Likewise, Wyden’s office has yet to hear back from members of US president Joe Biden’s National Security Council, despite repeated inquiries. The NSC did not immediately respond to a request for comment.

“There is ample blame to go around,” says Wyden, “but unless the incoming administration and Congress act, these kinds of abuses will keep happening, and they’ll cost service members’ lives.”

The Oregon senator also raised the issue earlier this year with the Federal Trade Commission, following an FTC order that imposed unprecedented restrictions against a US company it accused of gathering data around “sensitive locations.” Douglas Farrar, the FTC’s director of public affairs, declined a request to comment.

WIRED can now exclusively report, however, that the FTC is on the verge of fulfilling Wyden’s request. An FTC source, granted anonymity to discuss internal matters, says the agency is planning to file multiple lawsuits soon that will formally recognize US military installations as protected sites. The source adds that the lawsuits are in keeping with years’ worth of work by FTC Chair Lina Khan aimed at shielding US consumers—including service members—from harmful surveillance practices.

Before a targeted ad appears in an app or on a website, third-party software embedded in apps, known as software development kits (SDKs), transmits information about users to data brokers, real-time bidding platforms, and ad exchanges, often including location data. Data brokers will often collect that data, analyze it, repackage it, and sell it.

In February of 2024, reporters from BR and Netzpolitik.org obtained a free sample of this kind of data from Datastream Group, a Florida-based data broker. The dataset contains 3.6 billion coordinates—some recorded at millisecond intervals—from up to 11 million mobile advertising IDs in Germany over what the company says is a 59-day span from October through December 2023.

Mobile advertising IDs are unique identifiers used by the advertising industry to serve personalized ads to smartphones. These strings of letters and numbers allow companies to track user behavior and target ads effectively. However, mobile advertising IDs can also reveal much more sensitive information, particularly when combined with precise geolocation data.
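To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of record an in-app SDK or bidstream intermediary might emit and a broker might resell: a mobile advertising ID paired with a timestamped coordinate. The field names, app identifier, and values are illustrative assumptions, not Datastream Group’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LocationRecord:
    """One hypothetical bidstream/broker record: ad ID plus a timestamped position."""
    maid: str      # mobile advertising ID (a resettable identifier assigned by the OS)
    lat: float     # latitude in decimal degrees
    lon: float     # longitude in decimal degrees
    ts: datetime   # when the coordinate was observed
    app: str       # bundle ID of the app whose SDK emitted the signal (hypothetical)

# A single record like this looks nearly anonymous; millions of them keyed to the
# same ad ID reconstruct a commute, a workplace, and a home address.
record = LocationRecord(
    maid="38400000-8cf0-11bd-b23e-10b96e40000d",           # illustrative value
    lat=50.0500, lon=8.3250,                                # illustrative point near Wiesbaden
    ts=datetime(2023, 11, 7, 6, 58, tzinfo=timezone.utc),
    app="com.example.weather",                              # hypothetical app
)
print(record)
```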

In total, our analysis revealed granular location data from up to 12,313 devices that appeared to spend time at or near at least 11 military and intelligence sites, potentially exposing crucial details like entry points, security practices, and guard schedules—information that, in the hands of hostile foreign governments or terrorists, could be deadly.
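The reporting does not publish its exact methodology, but the core of such an analysis can be approximated with a simple geofence count: for each site, tally the signals and the distinct advertising IDs whose coordinates fall inside the site’s boundary. Below is a minimal sketch under stated assumptions: the dataset is reduced to (ad_id, lat, lon) tuples, and each site is treated as a rough rectangular bounding box rather than a precise perimeter polygon; the coordinates shown are approximate placeholders.

```python
from collections import defaultdict

# Rough illustrative bounding boxes (min_lat, min_lon, max_lat, max_lon); not real perimeters.
SITES = {
    "Buechel Air Base": (50.170, 7.050, 50.185, 7.075),
    "Ramstein Air Base": (49.425, 7.580, 49.460, 7.640),
}

def count_signals(points, sites=SITES):
    """Count location signals and distinct ad IDs inside each site's bounding box."""
    signals = defaultdict(int)
    devices = defaultdict(set)
    for ad_id, lat, lon in points:
        for name, (lat0, lon0, lat1, lon1) in sites.items():
            if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                signals[name] += 1
                devices[name].add(ad_id)
    return {name: (signals[name], len(devices[name])) for name in sites}

# Example usage with a toy list; the real dataset holds billions of rows:
# points = [("38400000-8cf0-11bd-b23e-10b96e40000d", 50.176, 7.061), ...]
# print(count_signals(points))
```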

Our investigation uncovered 38,474 location signals from up to 189 devices inside Büchel Air Base, a high-security German installation where as many as 15 US nuclear weapons are reportedly stored in underground bunkers. At Grafenwöhr Training Area, where thousands of US troops are stationed and have trained Ukrainian soldiers on Abrams tanks, we tracked 191,415 signals from up to 1,257 devices.


In Wiesbaden, home to the US Army’s European headquarters at Lucius D. Clay Kaserne, 74,968 location signals from as many as 799 devices were detected—some originating from sensitive intelligence facilities like the European Technical Center, once the NSA’s communication hub in Europe, and newly built intelligence operations centers.

At Ramstein Air Base, which supports some US drone operations, 164,223 signals from nearly 2,000 devices were tracked. That included devices tracked to Ramstein Elementary and High School, base schools for the children of military personnel.

Of these devices, 1,326 appeared at more than one of these highly sensitive military sites, potentially mapping the movements of US service members across Europe’s most secure locations.

The data is not infallible. Mobile ad IDs can be reset, meaning multiple IDs can be assigned to the same device. Our analysis found that, in some instances, devices were assigned more than 10 mobile ad IDs.

The location data’s precision at the individual device level can also be inconsistent. By contacting several people whose movements were revealed in the dataset, the reporting collective confirmed that much of the data was highly accurate—identifying work commutes and dog walks of individuals contacted. However, this wasn’t always the case. One reporter whose ID appears in the dataset found that it often placed him a block away from his apartment, including during times when he was out of town. A study from the NATO Strategic Communications Center of Excellence found that “quantity overshadows quality” in the data broker industry and that, on average, only up to 60 percent of the data surveyed can be considered precise.

According to its website, Datastream Group appears to offer “internet advertising data coupled with hashed emails, cookies, and mobile location data.” Its listed datasets include niche categories like boat owners, mortgage seekers, and cigarette smokers. The company, one of many in a multibillion-dollar location-data industry, did not respond to our request for comment about the data it provided on US military and intelligence personnel in Germany, where the US maintains a force of at least 35,000 troops, according to the most recent estimates.

Defense Department officials have known about the threat that commercial data brokers pose to national security since at least 2016, when Mike Yeagley, a government contractor and technologist, delivered a briefing to senior military officials at the Joint Special Operations Command compound in Fort Liberty (formerly Fort Bragg), North Carolina, about the issue. Yeagley’s presentation aimed to show how commercially available mobile data—already pervasive in conflict zones like Syria—could be weaponized for pattern of life analysis.

Midway through the presentation, Yeagley decided to raise the stakes. “Well, here’s the behavior of an ISIS operator,” he tells WIRED, recalling his presentation. “Let me turn the mirror around—let me show you how it works for your own personnel.” He then displayed data revealing phones as they moved from Fort Bragg in North Carolina and MacDill Air Force Base in Florida—critical hubs for elite US special operations units. The devices traveled through transit points like Turkey before clustering in northern Syria at a seemingly abandoned cement factory near Kobane, a known ISIS stronghold. The location he pinpointed was a covert forward operating base.

Yeagley says he was quickly escorted to a secured room to continue his presentation behind closed doors. There, officials questioned him on how he had obtained the data, concerned that his stunt had involved hacking personnel or unauthorized intercepts.

The data wasn’t sourced from espionage but from unregulated commercial brokers, he explained to the concerned DOD officials. “I didn’t hack, intercept, or engineer this data,” he told them. “I bought it.”

Now, years later, Yeagley remains deeply frustrated with the DOD’s inability to control the situation. What WIRED, BR, and Netzpolitik.org are now reporting is “very similar to the alarms we raised almost 10 years ago,” he says, shaking his head. “And it doesn’t seem like anything’s changed.”

US law requires the director of national intelligence to provide “protection support” for the personal devices of “at risk” intelligence personnel who are deemed susceptible to “hostile information collection activities.” But which personnel meet these criteria is unclear, as is the extent of the protections beyond periodic training and advice. The location data we acquired demonstrates, regardless, that commercial surveillance is far too pervasive and complex to be reduced to individual responsibility.

Biden’s outgoing director of national intelligence, Avril Haines, did not respond to a request for comment.

A report declassified by Haines last summer acknowledges that US intelligence agencies had purchased a “large amount” of “sensitive and intimate information” about US citizens from commercial data brokers, adding that “in the wrong hands,” the data could “facilitate blackmail, stalking, harassment, and public shaming.” The report, which contains numerous redactions, notes that, while the US government “would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times,” smartphones, connected cars, and web tracking have all made this possible “without government participation.”

Mike Rogers, the Republican chair of the House Armed Services Committee, did not respond to multiple requests for comment. A spokesperson for Adam Smith, the committee’s ranking Democrat, said Smith was unavailable to discuss the matter, busy negotiating a must-pass bill to fund the Pentagon’s policy priorities next year.

Jack Reed and Roger Wicker, the leading Democrat and Republican on the Senate Armed Services Committee, respectively, did not respond to multiple requests for comment. Inquiries placed with House and Senate leaders and top lawmakers on both congressional intelligence committees have gone unanswered.

The DOD and the NSA declined to answer specific questions related to our investigation. However, DOD spokesperson Javan Rasnake says that the Pentagon is aware that geolocation services could put personnel at risk and urged service members to remember their training and adhere strictly to operational security protocols. “Within the USEUCOM region, members are reminded of the need to execute proper OPSEC when conducting mission activities inside operational areas,” Rasnake says, using the shorthand for operational security.

An internal Pentagon presentation obtained by the reporting collective, though, claims that not only is the domestic data collection likely capable of revealing military secrets, it is essentially unavoidable at the personal level, because service members’ lives are simply too intertwined with the technology that permits it. This conclusion closely mirrors the observations of Chief Justice John Roberts of the US Supreme Court, who in landmark privacy cases within the past decade described cell phones as “a pervasive and insistent part of daily life” and said that owning one was “indispensable to participation in modern society.”

The presentation, which a source says was delivered to high-ranking general officers, including the US Army’s chief information officer, warns that despite promises from major ad tech companies, “de-anonymization” is all but trivial given the widespread availability of commercial data collected on Pentagon employees. The document emphasizes that the caches of location data on US individuals are a “force protection issue,” likely capable of revealing troop movements and other highly guarded military secrets.

While instances of blackmail inside the Pentagon have seen a sharp decline since the Cold War, many of the structural barriers to persistently surveilling Americans have also vanished. In recent decades, US courts have repeatedly found that new technologies pose a threat to privacy by enabling surveillance that, “in earlier times, would have been prohibitively expensive,” as the 7th Circuit Court of Appeals noted in 2007.

In an August 2024 ruling, another US appeals court disregarded claims by tech companies that users who “opt in” to surveillance were actually “informed” and doing so “voluntarily,” declaring the opposite is clear to “anyone with a smartphone.” The internal presentation for military staff presses that adversarial nations can gain access to advertising data with ease, using it to exploit, manipulate, and coerce military personnel for purposes of espionage.

Patronizing sex workers, whether legal in a foreign country or not, is a violation of the Uniform Code of Military Justice. The penalties can be severe, including forfeiture of pay, dishonorable discharge, and up to one year of imprisonment. But the ban on solicitation is not imposed on principle alone, says Michael Waddington, a criminal defense attorney who specializes in court-martial cases. “There’s a genuine danger of encountering foreign agents in these establishments, which can lead to blackmail or exploitation,” he says.

“This issue is particularly concerning given the current geopolitical climate. Many US servicemembers in Europe are involved in supporting Ukraine in its defense against the Russian invasion,” Waddington says. “Any compromise of their integrity could have serious implications for our operations and national security.”

When it comes to jeopardizing national security, even data on low-level personnel can pose a risk, says Vivek Chilukuri, senior fellow and program director of the Technology and National Security Program at the Center for a New American Security (CNAS). Before joining CNAS, Chilukuri served in part as legislative director and tech policy advisor to US senator Michael Bennet on the Senate Intelligence Committee and previously worked at the US State Department, specializing in countering violent extremism.

“Low-value targets can lead to high-value compromises,” Chilukuri says. “Even if someone isn’t senior in an organization, they may have access to highly sensitive infrastructure. A system is only as secure as its weakest link.” He points out that if adversaries can target someone with access to a crucial server or database, they could exploit that vulnerability to cause serious damage. “It just takes one USB stick plugged into the right device to compromise an organization.”

It’s not just individual service members who are at risk—entire security protocols and operational routines can be exposed through location data. At Büchel Air Base, where the US is believed to have stored an estimated 10 to 15 B61 nuclear weapons, the data reveals the daily activity patterns of devices on the base, including when personnel are most active and, more concerningly, potentially when the base is least populated.

Overview of the Air Mobility Command ramp at Ramstein Air Base, Germany. Photograph: Timm Ziegenthaler/Stocktrek Images; Getty Images

Büchel has 11 protective aircraft shelters equipped with hardened vaults for nuclear weapons storage. Each vault, which is located in a so-called WS3, or Weapons Storage and Security System, can hold up to four warheads. Our investigation traced precise location data for as many as 40 cellular devices that were present in or near these bunkers.

The patterns we could observe from devices at Büchel go far beyond just understanding the working hours of people on base. In aggregate, it’s possible to map key entry and exit points, pinpoint frequently visited areas, and even trace personnel to their off-base routines. For a terrorist, this information could be a gold mine—an opportunity to identify weak points, plan an attack, or target individuals with access to sensitive areas.

This month, German authorities arrested a former civilian contractor employed by the US military on allegations of offering to pass sensitive information about American military operations in Germany to Chinese intelligence agencies.

In April, German authorities arrested two German-Russian nationals accused of scouting US military sites for potential sabotage, including alleged arson. One of the targeted locations was the US Army’s Grafenwöhr Training Area in Bavaria, a critical hub for US military operations in Europe that spans 233 square kilometers.

At Grafenwöhr, WIRED, BR, and Netzpolitik.org could track the precise movements of up to 1,257 devices. Some devices could even be observed zigzagging through Range 301, an armored vehicle course, before returning to nearby barracks.

Our investigation found 38,474 location signals from up to 189 devices inside Büchel Air Base, where around a dozen US nuclear weapons are reportedly stored. Courtesy of OpenMapTiles

A senior fellow at Duke University’s Sanford School of Public Policy and head of its data brokerage research project, Justin Sherman also leads Global Cyber Strategies, a firm specializing in cybersecurity and tech policy. In 2023, he and his coauthors at Duke secured $250,000 in funding from the United States Military Academy to investigate how easy it is to purchase sensitive data about military personnel from data brokers. The results were alarming: They were able to buy highly sensitive, nonpublic, individually identifiable health and financial data on active-duty service members, without any vetting.

“It shows you how bad the situation is,” Sherman says, explaining how they geofenced requests to specific special operations bases. “We didn’t pretend to be a marketing firm in LA. We just wanted to see what the data brokers would ask.” Most brokers didn’t question their requests, and one even offered to bypass an ID verification check if they paid by wire.

During the study, Sherman helped draft an amendment to the National Defense Authorization Act that requires the Defense Department to ensure that highly identifiable individual data shared with contractors cannot be resold. He found the overall impact of the study underwhelming, however. “The scope of the industry is the problem,” he says. “It’s great to pass focused controls on parts of the ecosystem, but if you don’t address the rest of the industry, you leave the door wide open for anyone wanting location data on intelligence officers.”

Efforts by the US Congress to pass comprehensive privacy legislation have been stalled for the better part of a decade. The latest effort, known as the American Privacy Rights Act, failed to advance in June after GOP leaders threatened to scuttle the bill, which was significantly weakened before being shelved.

Another current privacy bill, the Fourth Amendment Is Not For Sale Act, seeks to ban the US government from purchasing data on Americans that it would normally need a warrant to obtain. While the bill would not prohibit the sale of commercial location data altogether, it would bar federal agencies from using those purchases to circumvent constitutional protections upheld by the Supreme Court. Its fate rests in the hands of House and Senate leaders, whose negotiations are private.

“The government needs to stop subsidizing what is now for good reason one of the world’s least popular industries,” says Sean Vitka, policy director at the nonprofit Demand Progress. “There are a lot of members of Congress who take seriously the severe threats to privacy and national security posed by data brokers, but we’ve seen many actions by congressional leaders that only further the problem. There shouldn’t need to be a body count for these people to take action.”

Open letter on the feasibility of “Chat Control”: Assessments from a scientific point of view

Source: https://www.ins.jku.at/chatcontrol/


Update: A parallel initiative, the CSA Academia Open Letter, is aimed at the EU institutions and is available in English. Since very similar arguments were formulated in parallel, the two letters reinforce each other.

The EU Commission initiative discussed under the name “Chat Control”, i.e. the indiscriminate monitoring of various communication channels to detect child pornography, terrorist, or other “undesirable” material (including attempts at early detection, e.g. the “grooming” of minors through trust-building text messages), made mandatory for mobile devices and communication services, has recently been expanded to include the monitoring of direct audio communications. Some states, including Austria and Germany, have already publicly declared that they will not support this initiative for monitoring without cause. Child protection and children’s rights organizations have likewise rejected the approach as excessive and at the same time ineffective. Recently, even the legal service of the EU Council of Ministers found it incompatible with European fundamental rights. Regardless of this, the draft is being tightened further and extended to additional channels: in its latest version, even to audio messages and conversations. The approach appears to be coordinated with corresponding attempts in the US (the “EARN IT” and “STOP CSAM” Acts) and the UK (the “Online Safety Bill”).

As scientists actively researching various areas of this topic, we therefore state with all clarity: this proposal cannot be implemented safely and effectively. No foreseeable development of the relevant technologies would make such an implementation technically possible. In addition, in our assessment, the hoped-for effects of these monitoring measures cannot be expected. This legislative initiative therefore misses its target, is socio-politically dangerous, and would permanently damage the security of our communication channels for the majority of the population.

The main arguments against the feasibility of “Chat Control” have already been made several times. In the following, we discuss them specifically at the interdisciplinary intersection of artificial intelligence (AI), security (information security and technical data protection), and law.

Our concerns are:

  1. Security: a) Encryption is the best method for internet security; successful attacks are almost always due to faulty software. b) Systematic and automated monitoring (i.e., “scanning”) of encrypted content is technically possible only if the security achievable through encryption is massively undermined, which brings considerable additional risks. c) A legal obligation to integrate such scanners would make secure digital communication in the EU unavailable to the majority of the population while having little impact on criminal communications.
  2. AI: a) Automated classification of content, including with methods based on machine learning, is always subject to errors, which in this case will lead to a high number of false positives. b) Monitoring methods that run on end devices open up additional attack possibilities, up to and including the extraction of potentially illegal training material.
  3. Law: a) A sensible demarcation from explicitly permitted uses of such content, for example in education or for criticism and parody, does not appear to be possible automatically. b) The massive encroachment on fundamental rights by such an instrument of mass surveillance is not proportionate and would cause great collateral damage in society.

In detail, these concerns are based on the following scientifically recognized facts:

  1. Security
    1. Encryption with modern methods is an indispensable basis for practically all technical mechanisms that maintain security and data protection on the internet. It currently protects communication on the internet as the cornerstone of today’s services, up to and including critical infrastructure such as telephone, electricity, and water networks, hospitals, and so on. Experts place significantly more trust in good encryption methods than in other security mechanisms; above all, the generally poor quality of software is the reason for the many publicly known security incidents. Improving this situation therefore relies primarily on encryption.
    2. Automatic monitoring (“scanning”) of correctly encrypted content is not effectively possible according to the current state of knowledge. Techniques such as Fully Homomorphic Encryption (FHE) are currently not suitable for this application: the technique itself is not capable of it, nor is the necessary computing power realistically available. Rapid improvement is not foreseeable here either.
    3. For these reasons, earlier attempts to ban or restrict end-to-end encryption were mostly abandoned quickly at the international level. The current Chat Control push instead aims to have monitoring functionality built into end devices in the form of scanning modules (“client-side scanning,” CSS), which scan the plain-text content before encryption or after decryption. Providers of communication services would have to be legally obliged to implement this for all content. Since doing so is not in the core interest of such organizations and requires implementation and operational effort as well as increased technical complexity, it cannot be assumed that such scanners would be introduced voluntarily, in contrast to scanning on the server side.
    4. Secure messengers such as Signal, Threema, and WhatsApp have already publicly announced that they will not implement such client-side scanners but will instead withdraw from the affected regions. This has different implications depending on the use case: (i) (Adult) criminals will simply communicate with each other via “non-compliant” messenger services to keep benefiting from secure encryption; the extra effort, for example installing apps on Android via sideloading when they are not available in the usual app stores of the respective country, is not a significant hurdle for criminals. (ii) Criminals communicate with potential future victims via popular platforms, which would be the target of the mandatory surveillance measures under discussion. In this case it can be assumed that informed criminals will quickly lure their victims to alternative but still internationally accepted channels such as Signal, which are not covered by the monitoring. (iii) Participants exchange problematic material without being aware that they are committing a crime; this case would be reported automatically and could also lead to the criminalization of minors acting without intent. The restrictions would therefore primarily affect the broad, and irreproachable, mass of the population. It would be completely illusory to believe that secure encryption without built-in monitoring could somehow be rolled back: tools like Signal, Tor, Cwtch, Briar, and many others are widely available as open source and largely elude central control. Knowledge of secure encryption is by now common knowledge and can no longer be censored. There is no effective way to technically block the use of strong encryption without client-side scanning (CSS). If surveillance measures are mandated in messengers, only criminals whose actual crimes outweigh the violation of the surveillance obligation will retain their privacy.
    5. Furthermore, the complex implementation required by the proposed scanner modules creates additional security problems that do not exist today. On the one hand, these are new software components, which will in turn be vulnerable. On the other hand, the Chat Control proposals consistently assume that the scanner modules themselves remain confidential, both because they would be trained on content whose mere possession is already punishable (and would be built into the messenger app), and because they could otherwise be used simply to test evasion methods. It is an illusion, however, that such machine-learning models or other scanner modules, distributed to billions of devices under the control of end users, can ever be kept secret. A prominent example is Apple’s “NeuralHash” module for CSAM detection, which was extracted almost immediately from the corresponding iOS versions and is thus openly available. The assumption in the Chat Control proposals that these scanner modules could be kept confidential is therefore completely unfounded and incorrect; corresponding data leaks are almost unavoidable.
  2. Artificial Intelligence
    1. We have to assume that machine-learning (ML) models on end devices cannot, in principle, be kept completely secret. This is in contrast to server-side scanning, which is currently legally possible and actively practiced by various providers to scan content that has not been end-to-end encrypted. ML models on the server side can be reasonably well protected from being read out with the current state of the art and are less the focus of this discussion.
    2. A general problem with all ML-based filters is misclassification. Known “undesirable” material may not be recognized as such after small changes (a “false negative” or “false non-match”). For parts of the proposal it is currently unknown how ML models are supposed to recognize complex, unfamiliar material in changing contexts (e.g., “grooming” in text chats) with even approximate accuracy, so high false-negative rates are likely. In terms of risk, however, it is significantly more serious when harmless material is classified as “undesirable” (a “false positive,” “false match,” or “collision”). Such errors can be reduced but cannot, in principle, be ruled out. Besides falsely accusing uninvolved persons, false positives also produce (possibly very) many false reports for investigative authorities, which already have too few resources to follow up on reports; a worked numerical example follows after this list.
    3. The assumed open availability of the ML models also creates various new attack possibilities. In the case of Apple’s NeuralHash, random collisions were found very quickly, and programs to generate arbitrary collisions between images were freely released. This method, also known as “malicious collisions,” uses so-called adversarial attacks against the neural network and enables attackers to deliberately have harmless material classified as a “match” by the ML model and thus labeled “undesirable.” In this way, innocent people can be deliberately harmed by automatic false reports and brought under suspicion, without any illegal action on the part of either the attacked or the attacker.
    4. The open availability of the models can also be exploited for so-called “training input recovery” in order to extract, at least partially, the content used for training from the ML model. In the case of prohibited content (e.g., child pornography), this poses another massive problem and can further increase the harm to those affected, because their sensitive data (e.g., images of abuse used for training) can be disseminated further. Because of these and other problems, Apple, for example, withdrew its proposal. We note that this latter danger does not arise with server-side scanning by ML models but is newly introduced by the Chat Control proposal’s client-side scanner.
  3. Legal Aspects
    1. The right to privacy is a fundamental right that may be interfered with only under very strict conditions. Whoever makes use of this fundamental right must not be suspected from the outset of wanting to hide something criminal. The often-used phrase “If you have nothing to hide, you have nothing to fear!” denies people the exercise of their fundamental rights and promotes totalitarian surveillance tendencies. The use of Chat Control would fuel this.
    2. The area of terrorism in particular overlaps, in its breadth, with political activity and freedom of expression. It is precisely against this background that the “preliminary criminalization” that has increasingly taken place in recent years under the guise of fighting terrorism is viewed particularly critically. Chat Control measures go in the same direction. They can severely curtail this fundamental right and put politically critical people in the focus of criminal prosecution. The resulting severe curtailment of politically critical activity hinders the further development of democracy and harbors the danger of promoting radicalized underground movements.
    3. The field of law and social sciences includes researching criminal phenomena and questioning regulatory mechanisms. From this point of view, scientific discourse also runs the risk of being identified as “suspicious” by chat control and thus indirectly restricted. The possible stigmatization of critical legal and social sciences is in tension with the freedom of science, which also requires “research independent of the mainstream” for further development.
    4. In education, there is a need to educate young people to be critically aware. This also includes passing on facts about terrorism. With Chat Control in use, teachers who provide such teaching material could come into the focus of criminal prosecution. The same applies to addressing sexual abuse, so that control measures could make this sensitive subject even more taboo, even though “self-empowerment mechanisms” are supposed to be promoted.
    5. Interventions in fundamental rights must always be appropriate and proportionate, even if they are made in the context of criminal prosecution. The technical considerations presented show that these requirements are not met with Chat Control. Such measures thus lack any legal or ethical legitimacy.
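The false-positive problem described under Artificial Intelligence above can be illustrated with a short base-rate calculation. The figures below are assumptions chosen only for illustration, not measurements of any real classifier: because nearly all scanned content is harmless, even a seemingly excellent filter produces an overwhelming number of false reports.

```python
# Assumed illustrative figures, not measurements.
messages_per_day = 10_000_000_000   # messages scanned per day (assumption)
prevalence = 1e-6                   # fraction of messages that actually contain illegal material
false_positive_rate = 0.001         # classifier flags 0.1% of harmless messages
true_positive_rate = 0.90           # classifier catches 90% of illegal material

illegal = messages_per_day * prevalence
harmless = messages_per_day - illegal

true_alarms = illegal * true_positive_rate      # ~9,000 per day
false_alarms = harmless * false_positive_rate   # ~10,000,000 per day

print(f"true alarms per day:  {true_alarms:,.0f}")
print(f"false alarms per day: {false_alarms:,.0f}")
print(f"share of alarms that are correct: {true_alarms / (true_alarms + false_alarms):.2%}")
```

Under these assumed rates, well under one percent of all reports would concern genuinely illegal material, while investigators would face millions of false reports every day.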

In summary, the current proposal for chat control legislation is not technically sound from either a security or AI point of view and is highly problematic and excessive from a legal point of view. The chat control push brings significantly greater dangers for the general public than a possible improvement for those affected and should therefore be rejected.

Instead, existing options for human-driven reporting of potentially problematic material by recipients, as is already possible in various messenger services, should be strengthened and made even more easily accessible. It should be considered whether anonymous reporting channels for such illegal material could be created and made easily reachable from within messengers. Existing criminal prosecution options, such as the monitoring of social media or open chat groups by police officers, as well as the legally required analysis of suspects’ smartphones, can continue to be used.

For more detailed information and further details please contact:

Security issues:
Univ.-Prof. Dr. René Mayrhofer

+43 732 2468-4121

rm@ins.jku.at

AI questions:
DI Dr. Bernhard Nessler

+43 732 2468-4489

nessler@ml.jku.at

Questions of law:
Univ.-Prof. Dr. Alois Birklbauer

+43 732 2468-7447

alois.birklbauer@jku.at

Signatories:

  • AI Austria,
    association for the promotion of artificial intelligence in Austria, Wollzeile 24/12, 1010 Vienna
  • Austrian Society for Artificial Intelligence (ASAI),
    association for the promotion of scientific research in the field of AI in Austria
  • Univ.-Prof. Dr. Alois Birklbauer, JKU Linz
    (Head of the practice department for criminal law and medical criminal law)
  • Ass.-Prof. Dr. Maria Eichlseder, Graz University of Technology
  • Univ.-Prof. Dr. Sepp Hochreiter, JKU Linz
    (Head of the Institute for Machine Learning, Head of the LIT AI Lab)
  • Dr. Tobias Höller, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • FH-Prof. Peter Kieseberg, St. Pölten University of Applied Sciences
    (Head of the Institute for IT Security Research)
  • Dr. Brigitte Krenn, Austrian Research Institute for Artificial Intelligence
    (Board Member, Austrian Society for Artificial Intelligence)
  • Univ.-Prof. Dr. Matteo Maffei, TU Vienna
    (Head of the Security and Privacy Research Department, Co-Head of the TU Vienna Cyber Security Center)
  • Univ.-Prof. Dr. Stefan Mangard, TU Graz
    (Head of the Institute for Applied Information Processing and Communication Technology)
  • Univ.-Prof. Dr. René Mayrhofer, JKU Linz
    (Head of the Institute for Networks and Security, Co-Head of the LIT Secure and Correct Systems Lab)
  • DI Dr. Bernhard Nessler, JKU Linz/SCCH
    (Vice President of the Austrian Society for Artificial Intelligence)
  • Univ.-Prof. Dr. Christian Rechberger, Graz University of Technology
  • Dr. Michael Roland, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • a.Univ.-Prof. Dr. Johannes Sametinger, JKU Linz
    (Institute for Business Informatics – Software Engineering, LIT Secure and Correct Systems Lab)
  • Univ.-Prof. DI Georg Weissenbacher, DPhil (Oxon), TU Vienna
    (Professor of Rigorous Systems Engineering)

Published on 07/04/2023

How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users

Source: https://www.propublica.org/article/how-facebook-undermines-privacy-protections-for-its-2-billion-whatsapp-users

When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model. Acknowledging that “we don’t currently have a strong reputation for building privacy protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”

Zuckerberg’s vision centered on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations. WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the U.S. Senate in 2018, “We don’t see any of the content in WhatsApp.”
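As a rough illustration of what “end-to-end” means here (only the endpoints hold the keys, so a relaying server sees nothing but ciphertext), the sketch below uses PyNaCl’s public-key Box. This is a generic demonstration of the concept under simplified assumptions, not WhatsApp’s actual protocol; WhatsApp uses an implementation of the Signal protocol, which adds forward secrecy and other properties.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key. A server relaying this blob cannot read it.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# Only Bob, holding his private key, can decrypt the message.
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at 6pm"
```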

WhatsApp emphasizes this point so consistently that a flag with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”

Given those sweeping assurances, you might be surprised to learn that WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through millions of private messages, images and videos. They pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

The workers have access to only a subset of WhatsApp messages — those flagged by users and automatically forwarded to the company as possibly abusive. The review is one element in a broader monitoring operation in which the company also reviews material that is not encrypted, including data about the sender and their account.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.

WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments. “WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways. (Two articles this summer noted the existence of WhatsApp’s moderators but focused on their working conditions and pay rather than their effect on users’ privacy. This article is the first to reveal the details and extent of the company’s ability to scrutinize messages and user data — and to examine what the company does with that information.)

Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.

Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice. Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on their users’ privacy, and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)

WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.

Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse. WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.

The tension between privacy and disseminating information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny.

That conundrum has periodically led to moves that anger users, regulators or both. The goal of monetizing the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told European Union regulators was technologically impossible. The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.

WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app where the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.

The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”


I. “Content Moderation Associates”

In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.

The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements. Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined comment, referring all questions about content moderation to WhatsApp.)

When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team. Most of the WhatsApp team scattered to work from home during the pandemic. Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets,” organized by subject into “reactive” and “proactive” queues.

Collectively, the workers scrutinize millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
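
As a rough illustration of why this reporting path coexists with end-to-end encryption, consider the following sketch (all names and types are hypothetical): the reporting user's own device already holds the decrypted chat, so it can simply upload the flagged message and the four that preceded it.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Message:
        sender: str
        plaintext: str  # already decrypted on the reporter's own device

    def build_report(chat: List[Message], flagged_index: int) -> List[Message]:
        """Bundle the flagged message plus up to four preceding ones."""
        start = max(0, flagged_index - 4)
        return chat[start:flagged_index + 1]

    # The bundle travels to the "reactive" review queue in readable form;
    # the transport encryption between devices is never weakened to do this.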

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
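
A hypothetical sketch of the kind of rule such a "proactive" scan might apply, using only unencrypted account signals of the sort listed above (field names and thresholds are invented for illustration, not WhatsApp's actual criteria):

    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        account_age_days: int
        messages_last_hour: int
        distinct_recipients_last_hour: int
        prior_violations: int

    def enqueue_for_proactive_review(s: AccountSignals) -> bool:
        # A brand-new account blasting out chats resembles the spam pattern
        # described above; prior violations lower the bar for review.
        new_and_noisy = s.account_age_days < 2 and s.messages_last_hour > 200
        wide_fanout = s.distinct_recipients_last_hour > 50
        return new_and_noisy or (wide_fanout and s.prior_violations > 0)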

The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: Do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators.”)

WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.”

Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on comparison of hip bones and pubic hair to a medical index chart. One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”

In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”

WhatsApp’s review system is hampered by impediments, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the U.S. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on site to assess abuse complaints. That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they’re selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult,” the seller can be labeled a forbidden “sexually oriented business.” And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.

The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”

Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.” Still, each case can reveal up to five unencrypted messages, which are then examined by moderators.

The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”

Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponized by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.” Of course, if a user appeals a ban that was prompted by a user report, according to one moderator, it entails having a second moderator examine the user’s content.


II. “Industry Leaders” in Detecting Bad Behavior

In public statements and on the company’s websites, Facebook Inc. is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged. They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)

By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria its moderators use to police content, along with articles and videos about “the unrecognized heroes who keep Facebook safe” and announcements on new content-review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in this report.

When dealing with legislators, Facebook Inc. officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation. For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written follow-up questions in Jan. 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement. “Even within an encrypted system,” one response noted, “we will still be able to respond to lawful requests for metadata, including potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”

Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to its head, Cathcart. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behavior in an end-to-end encrypted service,” he said.

During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption. Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content. “There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”

In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.” The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”


III. “Deceiving Users” About Personal Privacy

Since the moment Facebook announced plans to buy WhatsApp in 2014, observers wondered how the service, known for its fervent commitment to privacy, would fare inside a corporation known for the opposite. Zuckerberg had become one of the wealthiest people on the planet by using a “surveillance capitalism” approach: collecting and exploiting reams of user data to sell targeted digital ads. Facebook’s relentless pursuit of growth and profits has generated a series of privacy scandals in which it was accused of deceiving customers and regulators.

By contrast, WhatsApp knew little about its users apart from their phone numbers and shared none of that information with third parties. WhatsApp ran no ads, and its co-founders, Jan Koum and Brian Acton, both former Yahoo engineers, were hostile to them. “At every company that sells ads,” they wrote in 2012, “a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packed and shipped out,” adding: “Remember, when advertising is involved you the user are the product.” At WhatsApp, they noted, “your data isn’t even in the picture. We are simply not interested in any of it.”

Zuckerberg publicly vowed in a 2014 keynote speech that he would keep WhatsApp “exactly the same.” He declared, “We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously.”

In April 2016, WhatsApp completed its long-planned adoption of end-to-end encryption, which helped establish the app as a prized communications platform in 180 countries, including many where text messages and phone calls are cost-prohibitive. International dissidents, whistleblowers and journalists also turned to WhatsApp to escape government eavesdropping.

Four months later, however, WhatsApp disclosed it would begin sharing user data with Facebook — precisely what Zuckerberg had said would not happen — a move that cleared the way for an array of future revenue-generating plans. The new WhatsApp terms of service said the app would share information such as users’ phone numbers, profile photos, status messages and IP addresses for the purposes of ad targeting, fighting spam and abuse and gathering metrics. “By connecting your phone number with Facebook’s systems,” WhatsApp explained, “Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”

Such actions were increasingly bringing Facebook into the crosshairs of regulators. In May 2017, European Union antitrust regulators fined the company 110 million euros (about $122 million) for falsely claiming three years earlier that it would be impossible to link the user information between WhatsApp and the Facebook family of apps. The EU concluded that Facebook had “intentionally or negligently” deceived regulators. Facebook insisted its false statements in 2014 were not intentional, but didn’t contest the fine.

By the spring of 2018, the WhatsApp co-founders, now both billionaires, were gone. Acton, in what he later described as an act of “penance” for the “crime” of selling WhatsApp to Facebook, gave $50 million to a foundation backing Signal, a free encrypted messaging app that would emerge as a WhatsApp rival. (Acton’s donor-advised fund has also given money to ProPublica.)

Meanwhile, Facebook was under fire for its security and privacy failures as never before. The pressure culminated in a landmark $5 billion fine by the Federal Trade Commission in July 2019 for violating a previous agreement to protect user privacy. The fine was almost 20 times greater than any previous privacy-related penalty, according to the FTC, and Facebook’s transgressions included “deceiving users about their ability to control the privacy of their personal information.”

The FTC announced that it was ordering Facebook to take steps to protect privacy going forward, including for WhatsApp users: “As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.” Compliance officers would be required to generate a “quarterly privacy review report” and share it with the company and, upon request, the FTC.

Facebook agreed to the FTC’s fine and order. Indeed, the negotiations for that agreement were the backdrop, just four months before that, for Zuckerberg’s announcement of his new commitment to privacy.

By that point, WhatsApp had begun using Accenture and other outside contractors to hire hundreds of content reviewers. But the company was eager not to step on its larger privacy message — or spook its global user base. It said nothing publicly about its hiring of contractors to review content.


IV. “We Kill People Based On Metadata”

Even as Zuckerberg was touting Facebook Inc.’s new commitment to privacy in 2019, he didn’t mention that his company was apparently sharing more of its WhatsApp users’ metadata than ever with the parent company — and with law enforcement.

To the lay ear, the term “metadata” can sound abstract, a word that evokes the intersection of literary criticism and statistics. To use an old, pre-digital analogy, metadata is the equivalent of what’s written on the outside of an envelope — the names and addresses of the sender and recipient and the postmark reflecting where and when it was mailed — while the “content” is what’s written on the letter sealed inside the envelope. So it is with WhatsApp messages: The content is protected, but the envelope reveals a multitude of telling details (as noted: time stamps, phone numbers and much more).
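
A small worked example of how much the "envelope" alone can reveal: given nothing but sender, recipient and timestamp per message (names invented, no content involved), simple counting already exposes who talks to whom most often.

    from collections import Counter

    metadata_log = [
        ("alice", "bob",   "2021-08-01T00:33"),
        ("alice", "bob",   "2021-08-01T00:35"),
        ("alice", "carol", "2021-08-01T09:10"),
    ]

    pair_counts = Counter((sender, recipient) for sender, recipient, _ in metadata_log)
    print(pair_counts.most_common(1))   # [(('alice', 'bob'), 2)]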

Those in the information and intelligence fields understand how crucial this information can be. It was metadata, after all, that the National Security Agency was gathering about millions of Americans not suspected of a crime, prompting a global outcry when it was exposed in 2013 by former NSA contractor Edward Snowden. “Metadata absolutely tells you everything about somebody’s life,” former NSA general counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.” In a symposium at Johns Hopkins University in 2014, Gen. Michael Hayden, former director of both the CIA and NSA, went even further: “We kill people based on metadata.”

U.S. law enforcement has used WhatsApp metadata to help put people in jail. ProPublica found more than a dozen instances in which the Justice Department sought court orders for the platform’s metadata since 2017. These represent a fraction of overall requests, known as pen register orders (a phrase borrowed from the technology used to track numbers dialed by landline telephones), as many more are kept from public view by court order. U.S. government requests for data on outgoing and incoming messages from all Facebook platforms increased by 276% from the first half of 2017 to the second half of 2020, according to Facebook Inc. statistics (which don’t break out the numbers by platform). The company’s rate of handing over at least some data in response to such requests has risen from 84% to 95% during that period.

It’s not clear exactly what government investigators have been able to gather from WhatsApp, as the results of those orders, too, are often kept from public view. Internally, WhatsApp calls such requests for information about users “prospective message pairs,” or PMPs. These provide data on a user’s messaging patterns in response to requests from U.S. law enforcement agencies, as well as those in at least three other countries — the United Kingdom, Brazil and India — according to a person familiar with the matter who shared this information on condition of anonymity. Law enforcement requests from other countries might only receive basic subscriber profile information.

WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp. “On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.

Edwards was sentenced on June 3 to six months in prison after pleading guilty to a conspiracy charge and reported to prison last week. Edwards’ attorney declined to comment, as did representatives from the FBI and the Justice Department.

WhatsApp has for years downplayed how much unencrypted information it shares with law enforcement, largely limiting mentions of the practice to boilerplate language buried deep in its terms of service. It does not routinely keep permanent logs of who users are communicating with and how often, but company officials confirmed they do turn on such tracking at their own discretion — even for internal Facebook leak investigations — or in response to law enforcement requests. The company declined to tell ProPublica how frequently it does so.

The privacy page for WhatsApp assures users that they have total control over their own metadata. It says users can “decide if only contacts, everyone, or nobody can see your profile photo” or when they last opened their status updates or when they last opened the app. Regardless of the settings a user chooses, WhatsApp collects and analyzes all of that data — a fact not mentioned anywhere on the page.


V. “Opening the Aperture to Encompass Business Objectives”

The conflict between privacy and security on encrypted platforms seems to be only intensifying. Law enforcement and child safety advocates have urged Zuckerberg to abandon his plan to encrypt all of Facebook’s messaging platforms. In June 2020, three Republican senators introduced the “Lawful Access to Encrypted Data Act,” which would require tech companies to assist in providing access to even encrypted content in response to law enforcement warrants. For its part, WhatsApp recently sued the Indian government to block its requirement that encrypted apps provide “traceability” — a method to identify the sender of any message deemed relevant to law enforcement. WhatsApp has fought similar demands in other countries.

Other encrypted platforms take a vastly different approach to monitoring their users than WhatsApp. Signal employs no content moderators, collects far less user and group data, allows no cloud backups and generally rejects the notion that it should be policing user activities. It submits no child exploitation reports to NCMEC.

Apple has touted its commitment to privacy as a selling point. Its iMessage system displays a “report” button only to alert the company to suspected spam, and the company has made just a few hundred annual reports to NCMEC, all of them originating from scanning outgoing email, which is unencrypted.

But Apple recently took a new tack, and appeared to stumble along the way. Amid intensifying pressure from Congress, in August the company announced a complex new system for identifying child-exploitative imagery on users’ iCloud backups. Apple insisted the new system poses no threat to private content, but privacy advocates accused the company of creating a backdoor that potentially allows authoritarian governments to demand broader content searches, which could result in the targeting of dissidents, journalists or other critics of the state. On Sept. 3, Apple announced it would delay implementation of the new system.

Still, it’s Facebook that seems to face the most constant skepticism among major tech platforms. It is using encryption to market itself as privacy-friendly, while saying little about the other ways it collects data, according to Lloyd Richardson, the director of IT at the Canadian Centre for Child Protection. “This whole idea that they’re doing it for personal protection of people is completely ludicrous,” Richardson said. “You’re trusting an app owned and written by Facebook to do exactly what they’re saying. Do you trust that entity to do that?” (On Sept. 2, Irish authorities announced that they are fining WhatsApp 225 million euros, about $267 million, for failing to properly disclose how the company shares user information with other Facebook platforms. WhatsApp is contesting the finding.)

Facebook’s emphasis on promoting WhatsApp as a paragon of privacy is evident in the December marketing document obtained by ProPublica. The “Brand Foundations” presentation says it was the product of a 21-member global team across all of Facebook, involving a half-dozen workshops, quantitative research, “stakeholder interviews” and “endless brainstorms.” Its aim: to offer “an emotional articulation” of WhatsApp’s benefits, “an inspirational toolkit that helps us tell our story,” and a “brand purpose to champion the deep human connection that leads to progress.” The marketing deck identifies a feeling of “closeness” as WhatsApp’s “ownable emotional territory,” saying the app delivers “the closest thing to an in-person conversation.”

WhatsApp should portray itself as “courageous,” according to another slide, because it’s “taking a strong, public stance that is not financially motivated on things we care about,” such as defending encryption and fighting misinformation. But the presentation also speaks of the need to “open the aperture of the brand to encompass our future business objectives. While privacy will remain important, we must accommodate for future innovations.”

WhatsApp is now in the midst of a major drive to make money. It has experienced a rocky start, in part because of broad suspicions of how WhatsApp will balance privacy and profits. An announced plan to begin running ads inside the app didn’t help; it was abandoned in late 2019, just days before it was set to launch. Early this January, WhatsApp unveiled a change in its privacy policy — accompanied by a one-month deadline to accept the policy or get cut off from the app. The move sparked a revolt, impelling tens of millions of users to flee to rivals such as Signal and Telegram.

The policy change focused on how messages and data would be handled when users communicate with a business in the ever-expanding array of WhatsApp Business offerings. Companies now could store their chats with users and use information about users for marketing purposes, including targeting them with ads on Facebook or Instagram.

Elon Musk tweeted “Use Signal,” and WhatsApp users rebelled. Facebook delayed for three months the requirement for users to approve the policy update. In the meantime, it struggled to convince users that the change would have no effect on the privacy protections for their personal communications, with a slightly modified version of its usual assurance: “WhatsApp cannot see your personal messages or hear your calls and neither can Facebook.” Just as when the company first bought WhatsApp years before, the message was the same: Trust us.

Source: https://www.propublica.org/article/how-facebook-undermines-privacy-protections-for-its-2-billion-whatsapp-users

Metadata: Where WhatsApp's Real Privacy Problem Lies

A story from the US is currently making the rounds: the investigative outlet ProPublica has devoted a detailed article to data protection at WhatsApp and concludes that parent company Facebook undermines the privacy of its two billion users. As correct as that conclusion is, the framing chosen by the authors, and by the many German media outlets that have picked up the story superficially, is just as problematic.

The main part of the article deals with the fact that Facebook employs an army of content moderators to review reported content from WhatsApp chats. That is not news, but ProPublica is able, for the first time, to report in more detail on how this work is carried out. The authors contrast the fact that potentially any WhatsApp message can be read by the company's moderators with the messenger's privacy promise: "No one outside of this chat, not even WhatsApp, can read or listen to them."

However, and this is where it gets problematic, the authors then adopt a framing that presents this content moderation (which WhatsApp prefers not to call by that name) as a weakening of end-to-end encryption. One ProPublica author even described the moderation as a "backdoor," a term that normally means a deliberately built-in way to circumvent encryption. Various security experts, such as the Electronic Frontier Foundation's director of cybersecurity, Eva Galperin, have therefore criticized the coverage.

The encryption does what it is supposed to

So where is the problem? One thing is clear: Mark Zuckerberg's 2018 assurance that his company cannot read any communication content from WhatsApp chats is misleading. Every message, image and video reported by chat participants ends up at WhatsApp and its service providers for review. According to ProPublica, around 1,000 people in Austin, Dublin and Singapore work around the clock to sift through the reported content. Because the company needs its privacy promise for marketing, WhatsApp hides this information from its users.

It is also clear that, like every form of content moderation, this brings considerable problems with it. After talking to numerous sources, the authors show, for instance, that the moderators have little time for their weighty decisions and have to work with guidelines that are sometimes ambiguous. As with moderation for Facebook and Instagram, they are also supported by an automated system that occasionally makes faulty suggestions. As a result, content that should not actually be blocked, such as harmless photos or satire, keeps getting blocked. WhatsApp has no proper appeals mechanism, and it is to the article's credit that it brings these difficulties to light.

These problems, however, are not caused by any deficiency in the end-to-end encryption of WhatsApp messages. Technically, the encryption continues to work as intended. Messages are initially readable only on the devices of the people taking part in the conversation (provided those devices have not been compromised by criminal or state hackers). Users who report content from chats forward it to WhatsApp themselves. Anyone can do that, and it is not an encryption problem.

The real danger lies elsewhere

The option to report abusive content has existed on WhatsApp for quite some time. The reporting system is meant to help when, for example, content inciting hatred is shared, ex-partners are threatened, or groups call for violence against minorities. It is an intrusion into private communication, but one can argue that, weighed against those dangers, it is justified. Of course, WhatsApp would be obliged to inform its users much better about how the reporting system works and about the fact that their messages can be forwarded to moderators with a few clicks.

The greater danger to privacy on WhatsApp, however, comes from elsewhere: it is the metadata, which reveals almost as much about people as the content of their conversations. This includes the identity of sender and recipient, their phone numbers and linked Facebook accounts, profile photos, status messages and the phone's battery level. It also includes information about communication behavior: who communicates with whom? Who uses the app how often and for how long?

According to studies, extensive psychological profiles can be built from such data. It has even happened that Facebook managers promised their advertising clients that they could find "emotionally vulnerable teenagers" on the platform. "We kill people based on metadata," former NSA chief Michael Hayden revealed about US missile strikes based on metadata.

How WhatsApp sold out a whistleblower

WhatsApp collects this data on a large scale because it can be turned into money. The original ProPublica report does address this aspect, but in many German news items it unfortunately gets lost. In fact, the US outlet even reports on the case of a whistleblower who had to go to prison because WhatsApp passed her metadata on to the FBI. Natalie Edwards worked at the US Treasury Department and passed information about suspicious transactions to BuzzFeed News. She was caught and convicted in part because prosecutors were able to show that she was in frequent WhatsApp contact with the BuzzFeed reporter.

According to the report, WhatsApp regularly hands such metadata over to investigative authorities in the US. The same is likely to be the case in Germany and Europe. On top of that, it is not only state agencies that receive this revealing information, but also Facebook. There it is used to refine users' data profiles and, in large parts of the world, to target advertising more precisely. When the data giant bought the messenger in 2014, it promised the European competition authority that this was not technically possible at all. A brazen lie, for which the company had to pay a fine of more than 100 million euros.

That is why it cannot be said often enough: even if the messenger's end-to-end encryption works, WhatsApp is not a good place for private communication. Journalists who hold confidential conversations with their sources on this messenger are acting irresponsibly. Anyone who really wants to communicate securely and with minimal data should use alternatives such as Threema or Signal, which store hardly any metadata.

 

Clarification from ProPublica, Sept. 8, 2021: A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption.

When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model. Acknowledging that “we don’t currently have a strong reputation for building privacy protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”

Zuckerberg’s vision centered on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations. WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the U.S. Senate in 2018, “We don’t see any of the content in WhatsApp.”

 

WhatsApp emphasizes this point so consistently that a flag with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”

Given those sweeping assurances, you might be surprised to learn that WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through millions of private messages, images and videos. They pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

The workers have access to only a subset of WhatsApp messages — those flagged by users and automatically forwarded to the company as possibly abusive. The review is one element in a broader monitoring operation in which the company also reviews material that is not encrypted, including data about the sender and their account.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala ​​Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.

WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments. “WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways. (Two articles this summer noted the existence of WhatsApp’s moderators but focused on their working conditions and pay rather than their effect on users’ privacy. This article is the first to reveal the details and extent of the company’s ability to scrutinize messages and user data — and to examine what the company does with that information.)

Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.

Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice. Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on its users’ privacy, and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)

WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.

Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse. WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.

The tension between privacy and disseminating information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny.

That conundrum has periodically led to moves that anger users, regulators or both. The goal of monetizing the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told European Union regulators was technologically impossible. The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.

WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app where the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.

The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”


 

I. “Content Moderation Associates”

In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.

The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements. Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined comment, referring all questions about content moderation to WhatsApp.)

When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team. Most of the WhatsApp team scattered to work from home during the pandemic. Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets,” organized by subject into “reactive” and “proactive” queues.

Collectively, the workers scrutinize millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.

The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: Do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators.”)

WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.”

Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on comparison of hip bones and pubic hair to a medical index chart. One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”

In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”

WhatsApp’s review system is hampered by practical obstacles, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the U.S. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on site to assess abuse complaints. That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they’re selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult,” the seller can be labeled a forbidden “sexually oriented business.” And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.

The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”

Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.” Still, each case can reveal up to five unencrypted messages, which are then examined by moderators.

The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”

Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponized by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.” If a user appeals a ban that was prompted by a user report, according to one moderator, the appeal entails having a second moderator examine the user’s content.


 


In public statements and on the company’s websites, Facebook Inc. is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged. They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)

By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria their moderators use to police content, along with articles and videos about “the unrecognized heroes who keep Facebook safe” and announcements on new content-review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in this report.

When dealing with legislators, Facebook Inc. officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation. For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written responses to follow-up questions in January 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement. “Even within an encrypted system,” one response noted, “we will still be able to respond to lawful requests for metadata, including potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”

Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to Cathcart, the head of WhatsApp. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behavior in an end-to-end encrypted service,” he said.

During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption. Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content. “There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”

In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.” The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”


 

III. “Deceiving Users” About Personal Privacy

Since the moment Facebook announced plans to buy WhatsApp in 2014, observers wondered how the service, known for its fervent commitment to privacy, would fare inside a corporation known for the opposite. Zuckerberg had become one of the wealthiest people on the planet by using a “surveillance capitalism” approach: collecting and exploiting reams of user data to sell targeted digital ads. Facebook’s relentless pursuit of growth and profits has generated a series of privacy scandals in which it was accused of deceiving customers and regulators.

By contrast, WhatsApp knew little about its users apart from their phone numbers and shared none of that information with third parties. WhatsApp ran no ads, and its co-founders, Jan Koum and Brian Acton, both former Yahoo engineers, were hostile to them. “At every company that sells ads,” they wrote in 2012, “a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packed and shipped out,” adding: “Remember, when advertising is involved you the user are the product.” At WhatsApp, they noted, “your data isn’t even in the picture. We are simply not interested in any of it.”

Zuckerberg publicly vowed in a 2014 keynote speech that he would keep WhatsApp “exactly the same.” He declared, “We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously.”

In April 2016, WhatsApp completed its long-planned adoption of end-to-end encryption, which helped establish the app as a prized communications platform in 180 countries, including many where text messages and phone calls are cost-prohibitive. International dissidents, whistleblowers and journalists also turned to WhatsApp to escape government eavesdropping.

Four months later, however, WhatsApp disclosed it would begin sharing user data with Facebook — precisely what Zuckerberg had said would not happen — a move that cleared the way for an array of future revenue-generating plans. The new WhatsApp terms of service said the app would share information such as users’ phone numbers, profile photos, status messages and IP addresses for the purposes of ad targeting, fighting spam and abuse and gathering metrics. “By connecting your phone number with Facebook’s systems,” WhatsApp explained, “Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”

Such actions were increasingly bringing Facebook into the crosshairs of regulators. In May 2017, European Union antitrust regulators fined the company 110 million euros (about $122 million) for falsely claiming three years earlier that it would be impossible to link the user information between WhatsApp and the Facebook family of apps. The EU concluded that Facebook had “intentionally or negligently” deceived regulators. Facebook insisted its false statements in 2014 were not intentional, but didn’t contest the fine.

By the spring of 2018, the WhatsApp co-founders, now both billionaires, were gone. Acton, in what he later described as an act of “penance” for the “crime” of selling WhatsApp to Facebook, gave $50 million to a foundation backing Signal, a free encrypted messaging app that would emerge as a WhatsApp rival. (Acton’s donor-advised fund has also given money to ProPublica.)

Meanwhile, Facebook was under fire for its security and privacy failures as never before. The pressure culminated in a landmark $5 billion fine by the Federal Trade Commission in July 2019 for violating a previous agreement to protect user privacy. The fine was almost 20 times greater than any previous privacy-related penalty, according to the FTC, and Facebook’s transgressions included “deceiving users about their ability to control the privacy of their personal information.”

The FTC announced that it was ordering Facebook to take steps to protect privacy going forward, including for WhatsApp users: “As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.” Compliance officers would be required to generate a “quarterly privacy review report” and share it with the company and, upon request, the FTC.

Facebook agreed to the FTC’s fine and order. Indeed, the negotiations over that settlement, still underway four months earlier, were the backdrop for Zuckerberg’s announcement of his new commitment to privacy.

By that point, WhatsApp had begun using Accenture and other outside contractors to hire hundreds of content reviewers. But the company was eager not to step on its larger privacy message — or spook its global user base. It said nothing publicly about its hiring of contractors to review content.


 

IV. “We Kill People Based on Metadata”

Even as Zuckerberg was touting Facebook Inc.’s new commitment to privacy in 2019, he didn’t mention that his company was apparently sharing more of its WhatsApp users’ metadata than ever with the parent company — and with law enforcement.

To the lay ear, the term “metadata” can sound abstract, a word that evokes the intersection of literary criticism and statistics. To use an old, pre-digital analogy, metadata is the equivalent of what’s written on the outside of an envelope — the names and addresses of the sender and recipient and the postmark reflecting where and when it was mailed — while the “content” is what’s written on the letter sealed inside the envelope. So it is with WhatsApp messages: The content is protected, but the envelope reveals a multitude of telling details (as noted: time stamps, phone numbers and much more).
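
To make the envelope analogy concrete, here is an invented, illustrative record showing the split: the message body is ciphertext the service cannot read, while the surrounding metadata fields stay visible. The specific values are placeholders, not real data.

```python
# Illustrative only: the "letter" is encrypted, the "envelope" is not.
message_record = {
    "content": b"\x9f\x1c\x07\x42...",        # end-to-end encrypted payload (unreadable)
    "metadata": {                              # readable without touching the content
        "sender": "+15555550100",
        "recipient": "+15555550199",
        "timestamp": "2021-08-01T00:33:00Z",
        "sender_ip": "203.0.113.7",
        "device_os": "Android 11",
    },
}

# Even with the content sealed, the metadata alone shows who talked to whom,
# when, from where, and on what device.
for field, value in message_record["metadata"].items():
    print(field, value)
```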

Those in the information and intelligence fields understand how crucial this information can be. It was metadata, after all, that the National Security Agency was gathering about millions of Americans not suspected of a crime, prompting a global outcry when it was exposed in 2013 by former NSA contractor Edward Snowden. “Metadata absolutely tells you everything about somebody’s life,” former NSA general counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.” In a symposium at Johns Hopkins University in 2014, Gen. Michael Hayden, former director of both the CIA and NSA, went even further: “We kill people based on metadata.”

U.S. law enforcement has used WhatsApp metadata to help put people in jail. ProPublica found more than a dozen instances in which the Justice Department sought court orders for the platform’s metadata since 2017. These represent a fraction of overall requests, known as pen register orders (a phrase borrowed from the technology used to track numbers dialed by landline telephones), as many more are kept from public view by court order. U.S. government requests for data on outgoing and incoming messages from all Facebook platforms increased by 276% from the first half of 2017 to the second half of 2020, according to Facebook Inc. statistics (which don’t break out the numbers by platform). The company’s rate of handing over at least some data in response to such requests has risen from 84% to 95% during that period.

It’s not clear exactly what government investigators have been able to gather from WhatsApp, as the results of those orders, too, are often kept from public view. Internally, WhatsApp calls such requests for information about users “prospective message pairs,” or PMPs. These provide data on a user’s messaging patterns in response to requests from U.S. law enforcement agencies, as well as those in at least three other countries — the United Kingdom, Brazil and India — according to a person familiar with the matter who shared this information on condition of anonymity. Law enforcement requests from other countries might only receive basic subscriber profile information.

WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp. “On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.

Edwards was sentenced on June 3 to six months in prison after pleading guilty to a conspiracy charge and reported to prison last week. Edwards’ attorney declined to comment, as did representatives from the FBI and the Justice Department.

WhatsApp has for years downplayed how much unencrypted information it shares with law enforcement, largely limiting mentions of the practice to boilerplate language buried deep in its terms of service. It does not routinely keep permanent logs of who users are communicating with and how often, but company officials confirmed they do turn on such tracking at their own discretion — even for internal Facebook leak investigations — or in response to law enforcement requests. The company declined to tell ProPublica how frequently it does so.

The privacy page for WhatsApp assures users that they have total control over their own metadata. It says users can “decide if only contacts, everyone, or nobody can see your profile photo” or when they last opened their status updates or when they last opened the app. Regardless of the settings a user chooses, WhatsApp collects and analyzes all of that data — a fact not mentioned anywhere on the page.


 

V. “Opening the Aperture to Encompass Business Objectives”

The conflict between privacy and security on encrypted platforms seems to be only intensifying. Law enforcement and child safety advocates have urged Zuckerberg to abandon his plan to encrypt all of Facebook’s messaging platforms. In June 2020, three Republican senators introduced the “Lawful Access to Encrypted Data Act,” which would require tech companies to assist in providing access to even encrypted content in response to law enforcement warrants. For its part, WhatsApp recently sued the Indian government to block its requirement that encrypted apps provide “traceability” — a method to identify the sender of any message deemed relevant to law enforcement. WhatsApp has fought similar demands in other countries.

Other encrypted platforms take a vastly different approach to monitoring their users than WhatsApp. Signal employs no content moderators, collects far less user and group data, allows no cloud backups and generally rejects the notion that it should be policing user activities. It submits no child exploitation reports to NCMEC.

Apple has touted its commitment to privacy as a selling point. Its iMessage system displays a “report” button only to alert the company to suspected spam, and the company has made just a few hundred annual reports to NCMEC, all of them originating from scanning outgoing email, which is unencrypted.

But Apple recently took a new tack, and appeared to stumble along the way. Amid intensifying pressure from Congress, in August the company announced a complex new system for identifying child-exploitative imagery on users’ iCloud backups. Apple insisted the new system poses no threat to private content, but privacy advocates accused the company of creating a backdoor that potentially allows authoritarian governments to demand broader content searches, which could result in the targeting of dissidents, journalists or other critics of the state. On Sept. 3, Apple announced it would delay implementation of the new system.

Still, it’s Facebook that seems to face the most constant skepticism among major tech platforms. It is using encryption to market itself as privacy-friendly, while saying little about the other ways it collects data, according to Lloyd Richardson, the director of IT at the Canadian Centre for Child Protection. “This whole idea that they’re doing it for personal protection of people is completely ludicrous,” Richardson said. “You’re trusting an app owned and written by Facebook to do exactly what they’re saying. Do you trust that entity to do that?” (On Sept. 2, Irish authorities announced that they are fining WhatsApp 225 million euros, about $267 million, for failing to properly disclose how the company shares user information with other Facebook platforms. WhatsApp is contesting the finding.)

Facebook’s emphasis on promoting WhatsApp as a paragon of privacy is evident in the December marketing document obtained by ProPublica. The “Brand Foundations” presentation says it was the product of a 21-member global team across all of Facebook, involving a half-dozen workshops, quantitative research, “stakeholder interviews” and “endless brainstorms.” Its aim: to offer “an emotional articulation” of WhatsApp’s benefits, “an inspirational toolkit that helps us tell our story,” and a “brand purpose to champion the deep human connection that leads to progress.” The marketing deck identifies a feeling of “closeness” as WhatsApp’s “ownable emotional territory,” saying the app delivers “the closest thing to an in-person conversation.”

WhatsApp should portray itself as “courageous,” according to another slide, because it’s “taking a strong, public stance that is not financially motivated on things we care about,” such as defending encryption and fighting misinformation. But the presentation also speaks of the need to “open the aperture of the brand to encompass our future business objectives. While privacy will remain important, we must accommodate for future innovations.”

WhatsApp is now in the midst of a major drive to make money. It has experienced a rocky start, in part because of broad suspicions of how WhatsApp will balance privacy and profits. An announced plan to begin running ads inside the app didn’t help; it was abandoned in late 2019, just days before it was set to launch. Early this January, WhatsApp unveiled a change in its privacy policy — accompanied by a one-month deadline to accept the policy or get cut off from the app. The move sparked a revolt, impelling tens of millions of users to flee to rivals such as Signal and Telegram.

The policy change focused on how messages and data would be handled when users communicate with a business in the ever-expanding array of WhatsApp Business offerings. Companies now could store their chats with users and use information about users for marketing purposes, including targeting them with ads on Facebook or Instagram.

Elon Musk tweeted “Use Signal,” and WhatsApp users rebelled. Facebook delayed for three months the requirement for users to approve the policy update. In the meantime, it struggled to convince users that the change would have no effect on the privacy protections for their personal communications, with a slightly modified version of its usual assurance: “WhatsApp cannot see your personal messages or hear your calls and neither can Facebook.” Just as when the company first bought WhatsApp years before, the message was the same: Trust us.

Correction

Sept. 10, 2021: This story originally stated incorrectly that Apple’s iMessage system has no “report” button. The iMessage system does have a report button, but only for suspected spam (not for suspected abusive content).

https://www.propublica.org/article/how-facebook-undermines-privacy-protections-for-its-2-billion-whatsapp-users

DuckDuckGo’s Quest to Prove Online Privacy Is Possible

This year, DuckDuckGo plans to significantly expand its privacy offerings. Illustration: Sam Whitney; Getty Images

I was driving up through Pennsylvania last summer, somewhere along US Route 15 between Harrisburg and Williamsport, when I saw a familiar face: a goofy cartoon duck wearing a green bowtie. It was the logo for DuckDuckGo, the privacy-focused search engine, along with a message: “Tired of Being Tracked Online? We Can Help.”

The sight of a tech company on a billboard in rural Pennsylvania was surprising enough to lodge in my memory. Highways in and out of Silicon Valley may be lined with billboards advertising startups, where they can be easily spied by VCs and other industry influencers, but the post-industrial communities hugging the Susquehanna River will never be confused with Palo Alto. Far more typical are road signs advertising a fireworks store, a sex shop, or Donald Trump. I found it hard to imagine that the other drivers on the road were really the audience for an internet company that occupies a very specific niche.

It turns out DuckDuckGo—itself based in Valley Forge, PA, about 90 miles east of Route 15—knew something I didn’t. According to the company’s market research, just about every demographic wants more data privacy: young, old, male, female, urban, rural. Public polling backs that up, though the results vary based on how the question is asked. One recent survey found that “93 percent of Americans would switch to a company that prioritizes data privacy if given the option.” Another reported that 57 percent of Americans would give up personalization in exchange for privacy. Perhaps most telling are the early returns on Apple’s new App Tracking Transparency system, which prompts iOS users to opt in to being tracked by third-party apps rather than handing over their data by default, as has long been standard. According to some estimates, only a tiny minority of users are choosing to allow tracking.

The problem for a company like DuckDuckGo, then, isn’t making people care about privacy; it’s convincing them that privacy is possible. Many consumers, the company has found, have basically thrown up their hands in resignation, concluding that there’s no way out of the modern surveillance economy. It’s easy to see why. Each new story about data privacy, whether it’s about the pervasiveness of tracking, or a huge data breach, or Facebook or Google’s latest violation of user trust, not only underscores the extent of corporate surveillance but also makes it feel increasingly inescapable.

DuckDuckGo is on a mission to prove that giving up one’s privacy online is not, in fact, inevitable. Over the past several years, it has expanded far beyond its original search engine to provide a suite of free privacy-centric tools, including a popular browser extension, that plug up the various holes through which ad tech companies and data brokers spy on us as we browse the internet and use our phones. This year it will roll out some major new products and features, including a desktop browser and email privacy protection. And it will spend more money than it ever has on advertising to get the word out. The long-term goal is to turn DuckDuckGo into an all-in-one online privacy shield—what Gabriel Weinberg, the company’s founder and CEO, calls “the ‘easy button’ for privacy.”

“People want privacy, but they feel like it’s impossible to get,” Weinberg says. “So our main challenge is to make the idea that you can get simple privacy protection credible.”

Whether that mission succeeds could have consequences far beyond DuckDuckGo’s bottom line. DuckDuckGo is operating to some extent in the shadow of Apple, which has already made privacy a core part of its pitch to customers. But DuckDuckGo’s ambition is to provide a suite of protections that are even more extensive and intuitive than Apple’s. And it is offering them to the millions of people who don’t want or can’t afford to use Apple products: Google’s Android operating system accounts for about 50 percent of the mobile market in the US and more than 70 percent worldwide. Perhaps most important, if DuckDuckGo succeeds at bringing simple privacy to the masses, it will mean that the future of privacy might not depend on the relative benevolence of just two corporate overlords.

Founded in 2008, DuckDuckGo is best known for its search engine. Which means that it has always been defined as a challenger to Google. It has not shied away from the comparison. In 2011, Weinberg, then the company’s sole employee, took out an ad on a billboard in San Francisco that declared, “Google tracks you. We don’t.” That branding—Google, but private—has served the company well in the years since.

“The only way to compete with Google is not to try to compete on search results,” says Brad Burnham, a partner at Union Square Ventures, which gave DuckDuckGo its first and only Series A funding in 2011. When the upstart launched, Google already controlled 90 percent of the market and was spending billions of dollars, and collecting data on billions of users, to make its product even better. DuckDuckGo, however, “offered something that Google couldn’t offer,” Burnham says: “They offered not to track you. And Google’s entire business model is, obviously, built on the ability to do that, so Google couldn’t respond by saying, ‘OK, we won’t track you either.’”

Neither DuckDuckGo nor anyone else came close to stopping Google from dominating search. Today, Google’s market share still hovers around the 90 percent range. But the pie is so enormous—advertisers spent $60 billion on search advertising in the US alone last year, according to eMarketer—that there’s quite a bit of money in even a tiny slice. DuckDuckGo has been profitable since 2014.

Like Google Search, DuckDuckGo makes money by selling ads on top of people’s search results. The difference is that while the ads you see when searching on Google are generally targeted to you in part based on your past searches, plus what Google knows about your behavior more broadly, DuckDuckGo’s are purely “contextual”—that is, they are based only on the search term. That’s because DuckDuckGo doesn’t know anything about you. It doesn’t assign you an identifier or keep track of your search history in order to personalize your results.
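
A toy sketch of that distinction, under the assumption of a keyword-matched ad inventory (the inventory and matching rule below are invented): a contextual ad is picked from the words in the current query alone, with no user identifier or history to fall back on.

```python
from typing import Optional

# Invented inventory: ads keyed purely on search terms, no user profile involved.
AD_INVENTORY = {
    "running shoes": "Ad: lightweight trail runners, 20% off",
    "car insurance": "Ad: compare car insurance quotes",
}

def contextual_ad(query: str) -> Optional[str]:
    """Pick an ad using only the words in this one search."""
    for keyword, ad in AD_INVENTORY.items():
        if keyword in query.lower():
            return ad
    return None  # nothing matches: there is no profile or history to fall back on

print(contextual_ad("best running shoes for winter"))
print(contextual_ad("weather tomorrow"))  # -> None
```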

This non-creepy approach only protects you, however, while you’re on DuckDuckGo. “You’re anonymous on the search engine, but once you click off, now you’re going to other websites where you’re less anonymous,” Weinberg says. “How can we protect you there?”

DuckDuckGo’s first answer to that question rolled out in 2018, with the launch of a desktop browser extension and mobile browser that block third-party trackers by default wherever a user goes on the internet. It was good timing: 2018 was a banner year for raising privacy awareness. Facebook’s Cambridge Analytica scandal broke that spring. The GDPR took effect in Europe, throwing into relief how little the US regulates data collection. That summer, the Associated Press revealed that many Google services were storing your location data even if you explicitly opted out. Data collection and privacy were firmly in the national conversation. Since then, congressional inquiries, antitrust lawsuits, Netflix documentaries, and a growing feud between Apple and Facebook have kept it there.

“One of the funny things about DuckDuckGo is that the single best marketing we’ve ever had has been the gaffes that Google and Facebook have made over the years,” says Burnham. “Cambridge Analytica, for instance, was a huge driver of adoption for DuckDuckGo. There is an increasing awareness of how this business model works and what it means—not just in terms of the loss of privacy and agency over our own data, but also what it means for the vibrance and success of an open marketplace.”


Awareness is one thing, action another. DuckDuckGo was in position to capitalize on the rising tide of scandal because it has a reputation for building products that work. In 2019, for instance, it added a feature to its extension and browser that directs users to encrypted versions of websites whenever possible, preventing would-be hackers or ISPs from, say, looking over your shoulder as you type a password into a web page. While other encryption tools work by manually creating lists of tens of thousands of websites in need of an upgrade, DuckDuckGo crawled the internet to automatically populate a list of more than 12 million sites. The Electronic Frontier Foundation recently announced that it would incorporate DuckDuckGo’s dataset for its own HTTPS Everywhere extension. Similarly, Apple uses DuckDuckGo’s Tracker Radar dataset—a continuously updated, publicly available list of trackers assembled using open-source code—for Safari’s tracking prevention.
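
The upgrade feature amounts to a lookup against that crawled list: if a site is known to support encryption, the request is rewritten to HTTPS before it leaves the browser. A minimal sketch, with placeholder list entries rather than DuckDuckGo's actual dataset:

```python
from urllib.parse import urlsplit, urlunsplit

# Placeholder entries; the real list is crawled and contains millions of sites.
HTTPS_UPGRADE_LIST = {"example.com", "en.wikipedia.org"}

def upgrade_to_https(url: str) -> str:
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_UPGRADE_LIST:
        # Swap the scheme; everything else about the request stays the same.
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url

print(upgrade_to_https("http://example.com/login"))       # -> https://example.com/login
print(upgrade_to_https("http://unknown-site.test/page"))  # unchanged: not on the list
```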

Weinberg is particularly proud of DuckDuckGo’s tracker prevention. Surveillance is so built into the infrastructure of the web that many sites will stop functioning if you block all cookies. Take Google Analytics, which is found on the vast majority of websites. “If you just straight-up block Google Analytics, you’ll break sites,” Weinberg says. As a result, mainstream browsers with tracking prevention, like Safari and Firefox, allow trackers to load, then try to restrict the data they can gather.

“They’re more inclined to err on the side of not breaking websites,” explains Bennett Cyphers, a technologist at the Electronic Frontier Foundation. “They will try and do this middle ground thing where they’ll load resources but restrict what Google can do once it’s in your browser.”

The problem is that even allowing a tracker to load in the first place can allow it to gather highly specific data about the user, including their IP address. So DuckDuckGo, like some other privacy extensions, works differently. It simply prevents the cookie from loading at all. To avoid the broken-site problem, it replaces some trackers with a dummy that essentially tricks the site into thinking the cookie has loaded, a technique called “surrogates” pioneered by the ad blocker uBlock Origin.
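
A simplified sketch of that block-or-surrogate decision (the tracker list, the stub script, and the function names are illustrative; in practice this logic runs inside the browser's request pipeline, not in Python):

```python
TRACKER_DOMAINS = {"google-analytics.com", "doubleclick.net"}

# For trackers that pages expect to be present, serve a tiny do-nothing stub so
# the page's own scripts don't crash when they call into it.
SURROGATES = {
    "google-analytics.com": "window.ga = function () {};",
}

def handle_request(host: str):
    if host not in TRACKER_DOMAINS:
        return ("allow", None)        # ordinary request, let it through
    stub = SURROGATES.get(host)
    if stub is not None:
        return ("surrogate", stub)    # the site thinks the tracker loaded
    return ("block", None)            # drop the request before it goes out

print(handle_request("google-analytics.com"))  # surrogate keeps the site working
print(handle_request("doubleclick.net"))       # blocked outright
print(handle_request("example.com"))           # allowed
```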

Ultimately, DuckDuckGo probably owes its success less to the technical aspects of its tracker prevention, which very few people are in any position to understand, than to the fact that the company does a pretty good job honoring its slogan: “Privacy, simplified.” Its products don’t require a user to toggle any elaborate settings. They simply include encryption, tracker blocking, and private search automatically.

Since their launch, the extension and mobile browser have experienced rapid user growth. According to DuckDuckGo, the extension and browser have together been downloaded more than 100 million times since 2018, and more than half of those downloads took place over the past twelve months. That growth has in turn helped juice the use of the original search engine, which is built into the mobile app. The company estimates that its search user count doubled over the past year to between 70 and 100 million. (It’s an estimate because they don’t track users.) According to StatCounter, DuckDuckGo now has the second highest share of the US mobile search market, edging out Bing and Yahoo. (A distant second, that is: 2 percent to Google’s 94 percent.) DuckDuckGo says its annual revenue is over $100 million.

This year, the company plans to significantly expand its privacy offerings. It is introducing a desktop browser, incorporating the same features as the existing mobile app. Currently, even someone with the DuckDuckGo privacy extension can’t stop Google from gathering some data on them if they’re using Chrome, for example.

DuckDuckGo is also adding two new features to its existing extension and mobile app. The first is email privacy protection. Weinberg says that his company’s researchers found that some 70 percent of emails have some sort of tracker embedded in them. That includes not just corporate promotional emails, but just about any newsletter or fundraising email that’s sent using an automated service. In nearly a third of those cases, Weinberg says, the trackers are sending users’ plaintext email addresses over the internet, potentially exposing them to any number of marketers, data brokers, and shadier actors. The email tool is designed to thwart that by forwarding messages through a DuckDuckGo email address, which will remove the trackers before sending them along to inboxes. It also will allow people to generate random email addresses whenever they have to use email to sign up for something. (Apple recently announced a similar feature for the Mail app on iOS.) In theory, DuckDuckGo could have created its own email client, but Weinberg recognizes that getting users to switch email providers is prohibitively difficult.
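
A rough sketch of the forwarding idea, under stated assumptions: mail arriving at a relay address has image tags pointing at known tracking hosts removed before being passed along, and throwaway aliases can be minted for sign-ups. The tracker hosts, the relay domain, and the regex-based stripping below are all placeholders, not the service's real behavior.

```python
import re
import uuid

# Placeholder tracking hosts; a real list would be far larger and curated.
EMAIL_TRACKER_HOSTS = ("click.example-esp.com", "pixel.example-tracker.net")

TRACKER_IMG = re.compile(
    r'<img[^>]+src="https?://(?:' + "|".join(map(re.escape, EMAIL_TRACKER_HOSTS)) + r')[^"]*"[^>]*>',
    re.IGNORECASE,
)

def strip_trackers(html_body: str) -> str:
    """Remove <img> tags that point at known email-tracking hosts."""
    return TRACKER_IMG.sub("", html_body)

def random_alias(domain: str = "relay.example") -> str:
    """Mint a throwaway address for sign-ups (the domain is a placeholder)."""
    return f"{uuid.uuid4().hex[:12]}@{domain}"

newsletter = ('<p>Hello!</p>'
              '<img src="https://pixel.example-tracker.net/open?id=42" width="1" height="1">')
print(strip_trackers(newsletter))  # the 1x1 tracking pixel is gone
print(random_alias())
```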

“Our goal is simplicity, right?” he says. “We want to make privacy simple and seamless without sacrifice for users.”

The final new tech DuckDuckGo is unveiling this year operates on a similar principle. A new feature within its Android app will operate in the background, even when the app itself is not in use, to block third parties from tracking you through any other app on your phone. It does that by using the phone’s VPN permission to route all traffic through DuckDuckGo, so that, as with the email trackers, it can block requests from anyone on its tracker list before they have an opportunity to gather any user data. (Again, this is somewhat analogous to Apple’s App Tracking Transparency on iOS. It will not stop first-party data collection, meaning the app you’re using can still collect your data. But it won’t be able to pass that data through to other companies, including Facebook, which currently tracks users through a vast number of unrelated apps.)
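
In outline, the filtering step looks something like the sketch below: once traffic is routed through the local "VPN" service, each outbound connection is checked against a tracker list and dropped before any data leaves the device, while the app's own first-party traffic is left alone. Domain names and function names are invented for illustration.

```python
# Invented tracker endpoints; the real list is DuckDuckGo's maintained dataset.
APP_TRACKER_DOMAINS = {"graph.tracker.example", "ads.tracker.example"}

def should_block(destination_host: str) -> bool:
    # Third-party tracker endpoints are dropped; first-party app traffic still
    # flows, so the app keeps working (and can still collect its own data).
    return destination_host in APP_TRACKER_DOMAINS

outbound = ["api.weatherapp.example", "ads.tracker.example", "cdn.weatherapp.example"]
for host in outbound:
    print(host, "BLOCKED" if should_block(host) else "allowed")
```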

Taken together, the new features, which the company says will be available in beta this summer, represent DuckDuckGo’s evolving mission to create what Weinberg calls “the privacy layer of the internet.”

“The ideal case for that from a user perspective is, you download DuckDuckGo and you’re just protected wherever you go online,” he says. “We’re obviously not there yet, but that’s the product vision.”

So, about those billboards.

The company’s reliance on old-school advertising mediums—in addition to billboards, DuckDuckGo is partial to radio ads—is partly of necessity: As a privacy-focused business, it refuses to do any microtargeted online advertising. (Even when it advertises on a social media site like Twitter, Weinberg says it doesn’t set any demographic targeting parameters.) But the strategy also stems from the company’s market research, which has found that precise targeting would be a waste of money anyway.

“People who care about privacy, who act on privacy, who would adopt a DuckDuckGo product—they’re actually not a very niche audience,” says Zac Pappis, head of the company’s user insight team. “People who act and care about privacy don’t fall into a particular age group or demographic or have a particular psychographic background, so that makes them easier to reach.”

To put it in advertising parlance, this means DuckDuckGo spends its marketing budget on brand awareness. Ordinary people around the country don’t need to be convinced to care about privacy, the theory goes—they just need to learn that a solution exists. “Our current top business priority is to be the household name for simple online privacy protection,” Weinberg says. “So when you think about privacy online, we want you to turn to DuckDuckGo.”

To that end, the company is investing in its biggest marketing blitz to date this year, devoting tens of millions to an advertising push—so expect more billboards and more radio ads during those summer road trips. Weinberg believes the time is ripe. He points out that tech giants like Apple, Facebook, and Google have all been raising the salience of privacy through very public battles over their policies and products. Plus, the ongoing antitrust lawsuits against the tech giants will draw more attention to those companies’ business practices, including around user privacy. One of the cases, brought by the Department of Justice, could even give DuckDuckGo a direct boost by preventing Google from being set as the default search engine on phones.

DuckDuckGo has competition. Companies like Ghostery offer tracking protection. Brave has a well-regarded privacy browser. The Netherlands-based Startpage offers search without tracking. But in the US, at least, DuckDuckGo has a strong position in the privacy market. In a sector where users have to trust that your product works the way you say it does, a decade-long track record without any privacy scandals establishes important credibility. “They’re probably the biggest name right now, probably because of the popularity of their search engine,” says Jon Callas, director of technology products at the Electronic Frontier Foundation.

But being the biggest name among people with a special interest in online privacy still amounts to being a big fish in a small pond. Weinberg believes DuckDuckGo can change that. He is convinced that the pond is actually huge. It just doesn’t know it yet.

Source: https://www.wired.com/story/duckduckgo-quest-prove-online-privacy-possible/

It’s time to ditch Chrome

As well as collecting your data, Chrome also gives Google a huge amount of control over how the web works

Despite a poor reputation for privacy, Google’s Chrome browser continues to dominate. The web browser has around 65 per cent market share and two billion people are regularly using it. Its closest competitor, Apple’s Safari, lags far behind with under 20 per cent market share. That’s a lot of power, even before you consider Chrome’s data collection practices. 

Is Google too big and powerful, and do you need to ditch Chrome for good? Privacy experts say yes. Chrome is tightly integrated with Google’s data gathering infrastructure, including services such as Google search and Gmail – and its market dominance gives it the power to help set new standards across the web. Chrome is one of Google’s most powerful data-gathering tools.

Google is currently under fire from privacy campaigners, rival browser makers and regulators for changes in Chrome that will spell the end of third-party cookies, the trackers that follow you as you browse. Although there are no solid plans for Europe yet, Google is planning to replace cookies with its own ‘privacy preserving’ tracking tech called FLoC, which critics say will give the firm even more power at the expense of its competitors due to the sheer scale of Chrome’s user base.

Chrome’s hefty data collection practices are another reason to ditch the browser. According to Apple’s iOS privacy labels, Google’s Chrome app can collect data including your location, search and browsing history, user identifiers and product interaction data for “personalisation” purposes. Google says this gives you the ability to enable features such as the option to save your bookmarks and passwords to your Google Account. But unlike rivals Safari, Microsoft’s Edge and Firefox, Chrome links this data to devices and individuals.

Although Chrome legitimately needs to handle browsing data, it can siphon off a large amount of information about your activities and transmit it to Google, says Rowenna Fielding, founder and director of privacy consultancy Miss IG Geek. “If you’re using Chrome to browse the internet, even in private mode, Google is watching everything you do online, all the time. This allows Google to build up a detailed and sophisticated picture about your personality, interests, vulnerabilities and triggers.”

When you sync your Google accounts to Chrome, the data slurping doesn’t stop there. Information from other Google-owned products including its email service Gmail and Google search can be combined to form a scarily accurate picture. Chrome data can be added to your geolocation history from Google Maps, the metadata from your Gmail usage, your social graph – who you interact with, both on and offline – the apps you use on your Android phone, and the products you buy with Google Pay. “That creates a very clear picture of who you are and how you live your life,” Fielding says.

As well as gathering information about your online and offline purchases, data from Google Pay can be used “in the same way as data from other Google services,” says Fielding. “This is not just what you buy, but also your location, device contacts and information, and the links those details provide so you can be identified and profiled across multiple datasets.”

Google’s power goes even further than its own browser market share. Competitor browsers such as Microsoft’s Edge are based on the same engine, Chromium. “So under the hood they are still a form of Chrome”, says Sean Wright, an independent security researcher.

Google’s massive market share has allowed the internet giant to develop web standards such as AMP in Google mobile search, which publishers must use in order to appear at the top of search results. And more recently, Chrome’s FLoC effectively gives Google control over the ad tracking tech that will replace third-party cookies – although this is being developed in the open and with feedback from other developers.

Google’s power allows it to set the direction of the industry, says Wright. “Some of those changes are good, including the move to make HTTPS encryption a default, but others are more self-serving, such as the FLoC proposal.”

Google says its Ads products do not access synced Chrome browsing history, other than for preventing spam and fraud. The firm outlines that the iOS privacy labels represent the maximum categories of data that can be gathered, and what is actually collected depends on the features you use in the app, and how you configure your settings. It also claims its open-source FLoC API is privacy-focused and will not give Google Ads products special privileges or access.

Google says privacy and security “have always been core benefits of the Chrome browser”. A Google spokesperson highlighted the Safe Browsing features that protect against threats such as phishing and malware, as well as additional controls to help you manage your information in Chrome. In recent years the company has introduced more ways you can control your data. “Chrome offers helpful options to keep your data in sync across devices, and you control what activity gets saved to your Google Account if you choose to sign in,” the spokesperson says.

But that doesn’t change the level of data collection possible, or the fact that Google has so much sway, simply through its market dominance and joined up ad-driven ecosystem. “When you are a company that has the majority share of browsers and internet search, you suddenly have a huge amount of power,” says Matthew Gribben, a former GCHQ cybersecurity consultant. “When every web developer and SEO expert in the world needs to pander to these whims, the focus becomes on making sites work well for Google at the expense of everything else.”

And as long as people use Chrome and other services – many of which are, admittedly, more user friendly than those of rivals – then Google’s power shows no signs of diminishing. Chrome provides Google with “enormous amounts of behavioural and demographic data, control over people’s browsing experience, a platform for shaping the web to Google’s own advantage, and brand ‘capture’”, Fielding says. “When people’s favourite tools, games and sites only work with Chrome, they are reluctant to switch to an alternative.”

In theory, competition and data protection laws should provide the tools to keep Google from getting out of control, says Fielding. But in practice, “that doesn’t seem to be working for various reasons – including disparities of wealth and power between Google and national regulators”. Fielding adds that Google is also useful to many governments and economies and it is tricky to enforce national laws against a global corporation.

There are steps you can take to lock down your account, such as preventing your browsing data being collected by not syncing Chrome, and turning off third-party cookie tracking. But note that the more features you use in Chrome, the more data Google needs to ensure they can function properly. And as Google’s power and dominance continues to surge, the other option is to ditch Chrome altogether.

If you do decide to ditch Chrome, there are plenty of other feature-rich privacy browser options to consider, including Firefox, Brave and DuckDuckGo, which don’t involve giving Google any of your data.

source: https://www.wired.co.uk/article/google-chrome-browser-data

Google increasingly complicates the balance between the privacy its users deserve and the targeted advertising that drives its business.   

Abstract: Android has come a long way in enhancing its security features and building out privacy controls for users, including with its Android 12 innovations. But as Apple continues to crack down on ad tracking with an iOS 14 feature, the bar is higher than ever—and in ways that increasingly complicate Google’s balance between the privacy its users deserve and the targeted advertising that drives its business.

Android 12 Lets You See What Your Apps Are Getting Into

A new privacy dashboard and “app hibernation” are coming to Google’s mobile operating system. Google’s new privacy dashboard breaks down app activity by category—like “Location,” “Camera,” and “Microphone”—and then shows you which apps accessed those mechanisms, and for how long. Photograph: Getty Images