The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.
Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Instagram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times, The Guardian, and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica, and they had some questions.
15 Months of Fresh Hell Inside Facebook
Scandals. Backstabbing. Resignations. Record profits. Time bombs. In early 2018, Mark Zuckerberg set out to fix Facebook. Here’s how that turned out:
Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.
Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.
Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.
Seven hours after this story was published, Facebook told TechCrunch it would shut down the iOS version of its Research app in the wake of our report. But on Wednesday morning, an Apple spokesperson confirmed that Facebook violated its policies, and it had blocked Facebook’s Research app on Tuesday before the social network seemingly pulled it voluntarily (without mentioning it was forced to do so). You can read our full report on the development here.
An Apple spokesperson provided this statement: “We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
Facebook’s Research program will continue to run on Android.
Facebook’s Research app requires users to ‘Trust’ it with extensive access to their data.

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.
The strategy shows how far Facebook is willing to go and how much it’s willing to pay to protect its dominance — even at the risk of breaking the rules of Apple’s iOS platform on which it depends. Apple may have asked Facebook to discontinue distributing its Research app.
A more stringent punishment would be to revoke Facebook’s permission to offer employee-only apps. The situation could further chill relations between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices. Facebook disobeying iOS policies to slurp up more information could become a new talking point.
“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”
Facebook’s surveillance app
Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using. Internal documents acquired by Charlie Warzel and Ryan Mac of BuzzFeed News reveal that Facebook was able to leverage Onavo to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger. Onavo allowed Facebook to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup in 2014. WhatsApp has since tripled its user base, demonstrating the power of Onavo’s foresight.
Over the years since, Onavo clued Facebook in to what apps to copy, features to build and flops to avoid. By 2018, Facebook was promoting the Onavo app in a Protect bookmark of the main Facebook app in hopes of scoring more users to snoop on. Facebook also launched the Onavo Bolt app that let you lock apps behind a passcode or fingerprint while it surveils you, but Facebook shut down the app the day it was discovered following privacy criticism. Onavo’s main app remains available on Google Play and has been installed more than 10 million times.
The backlash heated up after security expert Strafach detailed in March how Onavo Protect was reporting to Facebook when a user’s screen was on or off, and its Wi-Fi and cellular data usage in bytes even when the VPN was turned off. In June, Apple updated its developer policies to ban collecting data about usage of other apps or data that’s not necessary for an app to function. Apple proceeded to inform Facebook in August that Onavo Protect violated those data collection policies and that the social network needed to remove it from the App Store, which it did, Deepa Seetharaman of the WSJ reported.
But that didn’t stop Facebook’s data collection.
TechCrunch recently received a tip that despite Onavo Protect being banished by Apple, Facebook was paying users to sideload a similar VPN app under the Facebook Research moniker from outside of the App Store. We investigated, and learned Facebook was working with three app beta testing services to distribute the Facebook Research app: BetaBound, uTest and Applause. Facebook began distributing the Research VPN app in 2016. It has been referred to as Project Atlas since at least mid-2018, around when backlash to Onavo Protect magnified and Apple instituted its new rules that prohibited Onavo. Previously, a similar program was called Project Kodiak. Facebook didn’t want to stop collecting data on people’s phone usage, so the Research program continued in disregard of Apple’s ban on Onavo Protect.
Ads for the program run by uTest on Instagram and Snapchat sought teens 13-17 years old for a “paid social media research study.” The sign-up page for the Facebook Research program administered by Applause doesn’t mention Facebook, but seeks users “Age: 13-35 (parental consent required for ages 13-17).” If minors try to sign up, they’re asked to get their parents’ permission with a form that reveals Facebook’s involvement and says “There are no known risks associated with the project, however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of apps. You will be compensated by Applause for your child’s participation.” For kids short on cash, the payments could coerce them to sell their privacy to Facebook.
The Applause site explains what data could be collected by the Facebook Research app (emphasis mine):
“By installing the software, you’re giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you’ve installed . . . This means you’re letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions.”
Meanwhile, the BetaBound sign-up page with a URL ending in “Atlas” explains that “For $20 per month (via e-gift cards), you will install an app on your phone and let it run in the background.” It also offers $20 per friend you refer. That site also doesn’t initially mention Facebook, but the instruction manual for installing Facebook Research reveals the company’s involvement.
Facebook seems to have purposefully avoided TestFlight, Apple’s official beta testing system, which requires apps to be reviewed by Apple and is limited to 10,000 participants. Instead, the instruction manual reveals that users download the app from r.facebook-program.com and are told to install an Enterprise Developer Certificate and VPN and “Trust” Facebook with root access to the data their phone transmits. Apple requires that developers agree to only use this certificate system for distributing internal corporate apps to their own employees. Randomly recruiting testers and paying them a monthly fee appears to violate the spirit of that rule.
Once installed, users just had to keep the VPN running and sending data to Facebook to get paid. The Applause-administered program requested that users screenshot their Amazon orders page. This data could potentially help Facebook tie browsing habits and usage of other apps with purchase preferences and behavior. That information could be harnessed to pinpoint ad targeting and understand which types of users buy what.
TechCrunch commissioned Strafach to analyze the Facebook Research app and find out where it was sending data. He confirmed that data is routed to “vpn-sjc1.v.facebook-program.com” that is associated with Onavo’s IP address, and that the facebook-program.com domain is registered to Facebook, according to MarkMonitor. The app can update itself without interacting with the App Store, and is linked to the email address PeopleJourney@fb.com. He also discovered that the Enterprise Certificate first acquired in 2016 indicates Facebook renewed it on June 27th, 2018 — weeks after Apple announced its new rules that prohibited the similar Onavo Protect app.
“It is tricky to know what data Facebook is actually saving (without access to their servers). The only information that is knowable here is what access Facebook is capable of based on the code in the app. And it paints a very worrisome picture,” Strafach explains. “They might respond and claim to only actually retain/save very specific limited data, and that could be true, it really boils down to how much you trust Facebook’s word on it. The most charitable narrative of this situation would be that Facebook did not think too hard about the level of access they were granting to themselves . . . which is a startling level of carelessness in itself if that is the case.”
[Update: TechCrunch also found that Google’s Screenwise Meter surveillance app also breaks the Enterprise Certificate policy, though it does a better job of revealing the company’s involvement and how it works than Facebook does.]
“Flagrant defiance of Apple’s rules”
In response to TechCrunch’s inquiry, a Facebook spokesperson confirmed it’s running the program to learn how people use their phones and other services. The spokesperson told us “Like many companies, we invite people to participate in research that helps us identify things we can be doing better. Since this research is aimed at helping Facebook understand how people use their mobile devices, we’ve provided extensive information about the type of data we collect and how they can participate. We don’t share this information with others and people can stop participating at any time.”
Facebook’s spokesperson claimed that the Facebook Research app was in line with Apple’s Enterprise Certificate program, but didn’t explain how in the face of evidence to the contrary. They said Facebook first launched its Research app program in 2016. They tried to liken the program to a focus group and said Nielsen and comScore run similar programs, yet neither of those asks people to install a VPN or provide root access to the network. The spokesperson confirmed the Facebook Research program does recruit teens but also other age groups from around the world. They claimed that Onavo and Facebook Research are separate programs, but admitted the same team supports both as an explanation for why their code was so similar.
However, Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises. Given Facebook’s customers are using the Enterprise Certificate-powered app without supervision, it appears Facebook is in violation.
Seven hours after this report was first published, Facebook updated its position and told TechCrunch that it would shut down the iOS Research app. Facebook noted that the Research app was started in 2016 and was therefore not a replacement for Onavo Protect. However, they do share similar code and could be seen as twins running in parallel. A Facebook spokesperson also provided this additional statement:
“Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”
Facebook did not publicly promote the Research VPN itself and used intermediaries that often didn’t disclose Facebook’s involvement until users had begun the signup process. While users were given clear instructions and warnings, the program never stresses nor mentions the full extent of the data Facebook can collect through the VPN. A small fraction of the users paid may have been teens, but we stand by the newsworthiness of Facebook’s choice not to exclude minors from this data collection initiative.
Facebook disobeying Apple so directly and then pulling the app could hurt their relationship. “The code in this iOS app strongly indicates that it is simply a poorly re-branded build of the banned Onavo app, now using an Enterprise Certificate owned by Facebook in direct violation of Apple’s rules, allowing Facebook to distribute this app without Apple review to as many users as they want,” Strafach tells us. ONV prefixes and mentions of graph.onavo.com, “onavoApp://” and “onavoProtect://” custom URL schemes litter the app. “This is an egregious violation on many fronts, and I hope that Apple will act expeditiously in revoking the signing certificate to render the app inoperable.”
Facebook is particularly interested in what teens do on their phones as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s acquisition Instagram. Insights into the popularity of Chinese video music app TikTok among teens, and into meme sharing, led Facebook to launch a clone called Lasso and begin developing a meme-browsing feature called LOL, TechCrunch first reported. But Facebook’s desire for data about teens riles critics at a time when the company has been battered in the press. Analysts on tomorrow’s Facebook earnings call should inquire about what other ways the company has to collect competitive intelligence now that it’s ceased to run the Research program on iOS.
Last year when Tim Cook was asked what he’d do in Mark Zuckerberg’s position in the wake of the Cambridge Analytica scandal, he said “I wouldn’t be in this situation . . . The truth is we could make a ton of money if we monetized our customer, if our customer was our product. We’ve elected not to do that.” Zuckerberg told Ezra Klein that he felt Cook’s comment was “extremely glib.”
Now it’s clear that even after Apple’s warnings and the removal of Onavo Protect, Facebook was still aggressively collecting data on its competitors via Apple’s iOS platform. “I have never seen such open and flagrant defiance of Apple’s rules by an App Store developer,” Strafach concluded. Now that Facebook has ceased the program on iOS and its Android future is uncertain, it may either have to invent new ways to surveil our behavior amidst a climate of privacy scrutiny, or be left in the dark.
Additional reporting by Zack Whittaker. Updated with comment from Facebook, and on Wednesday with a statement from Apple.
Surprisingly, a number of students and Generation Y digital natives are turning against the social media giants.
BERKELEY, Calif. — A job at Facebook sounds pretty plum. The interns make around $8,000 a month, and an entry-level software engineer makes about $140,000 a year. The food is free. There’s a walking trail with indigenous plants and a juice bar.
But the tone among highly sought-after computer scientists about the social network is changing. On a recent night at the University of California, Berkeley, as a group of young engineers gathered to show off their tech skills, many said they would avoid taking jobs at the social network.
“I’ve heard a lot of employees who work there don’t even use it,” said Niky Arora, 19, an engineering student, who was recently invited to a Facebook recruiting event at the company’s headquarters in Menlo Park, Calif. “I just don’t believe in the product because like, Facebook, the baseline of everything they do is desire to show people more ads.”
Emily Zhong, 20, a computer science major, piped up. “Surprisingly, a lot of my friends now are like, ‘I don’t really want to work for Facebook,’” she said, citing “privacy stuff, fake news, personal data, all of it.”
“Before it was this glorious, magical thing to work there,” said Jazz Singh, 18, also studying computer science. “Now it’s like, just because it does what you want doesn’t mean it’s doing good.”
As Facebook has been rocked by scandal after scandal, some young engineers are souring on the company. Many are still taking jobs there, but those who do are doing it a little more quietly, telling their friends that they will work to change it from within or that they have carved out more ethical work at a company whose reputation has turned toxic.
Facebook, which employs more than 30,000 full-time workers around the world, said, “In 2018, we’ve hired more engineers than ever before.” The company added, “We continue to see strong engagement and excitement within the engineering community at the prospect of joining our company.”
The changing attitudes are happening beyond Facebook. Across Silicon Valley, tech recruiters said job applicants in general were asking more hard questions during interviews, wanting to know specifically what they would be asked to do at the company. Career coaches said they had tech employees reaching out to get tips on handling moral quandaries. The questions include “How do I avoid a project I disagree with?” and “How do I remind my bosses of the company mission statement?”
“Employees are wising up to the fact that you can have a mission statement on your website, but when you’re looking at how the company creates new products or makes decisions, the correlation between the two is not so tightly aligned,” said David Chie, the head of Palo Alto Staffing, a tech job placement service in Silicon Valley. “Everyone’s having this conversation.”
When engineers apply for jobs, they are also doing it differently.
“They do a lot more due diligence,” said Heather Johnston, Bay Area district president for the tech job staffing agency Robert Half. “Before, candidates were like: ‘Oh, I don’t want to do team interviews. I want a one-and-done.’” Now, she added, job candidates “want to meet the team.”
“They’re not just going to blindly take a company because of the name anymore,” she said.
Yet while many of the big tech companies have been hit by a change in public perception, Facebook seems uniquely tarred among young workers.
“I’ve had a couple of clients recently say they’re not as enthusiastic about Facebook because they’re frustrated with what they see happening politically or socially,” said Paul Freiberger, president of Shimmering Careers, a career counseling group based in San Mateo, Calif. “It’s privacy and political news, and concern that it’s going to be hard to correct these things from inside.”
Chad Herst, a leadership and career coach based in San Francisco since 2008, said that now, for the first time, he had clients who wanted to avoid working for big social media companies like Facebook or Twitter.
“They’re concerned about where democracy is going, that social media polarizes us, and they don’t want to be building it,” Mr. Herst said. “People really have been thinking about the mission of the company and what the companies are trying to achieve a little more.”
He said one client, a midlevel executive at Facebook, wanted advice on how to shift her group’s work to encourage users to connect offline as well. But she found resistance internally to her efforts.
“She was trying to figure out: ‘How do I politic this? How do I language this?’” Mr. Herst said. “And I was telling her to bring up some of Mark Zuckerberg’s past statements about connecting people.”
On the recent evening at the University of California, Berkeley, around 2,200 engineering students from around the country gathered for Cal Hacks 5.0 — a competition to build the best apps. The event spanned a weekend, so teenage competitors dragged pillows around with them. The hosts handed out 2,000 burritos as students registered.
It was also a hiring event. Recruiters from Facebook and Alphabet set up booths (free sunglasses from Facebook; $200 in credit to the Google Cloud platform from Alphabet).
In the auditorium, the head of Y Combinator, a start-up incubator and investment firm, gave opening remarks, recommending that young people avoid jobs in big tech.
“You get to program your life on a totally different scale,” said Michael Seibel, who leads Y Combinator. “The worst thing that can happen to you is you get a job at Google.” He called those jobs “$100,000-a-year welfare” — meaning, he said, that workers can get tethered to the paycheck and avoid taking risks.
The event then segued to a word from the sponsor, Microsoft. Justin Garrett, a Microsoft recruiter who on his LinkedIn profile calls himself a senior technical evangelist, stepped onstage, laughing a little.
“So, Michael’s a tough guy to follow, especially when you work for one of those big companies,” Mr. Garrett said. “He called it welfare. I like to call it tremendous opportunity.”
Then students flooded into the stadium, which was filled with long tables of computers where they would stay and compete. In the middle of the scrum, three friends joked around. Caleb Thomas, 21, was gently made fun of because he had accepted an internship at Facebook.
“Come on, guys,” Mr. Thomas said.
“These are the realities of how the business works,” said Samuel Resendez, 20, a computer science student at the University of Southern California.
It turned out Mr. Resendez had interned at Facebook in the summer. Olivia Brown, 20, head of Stanford’s Computer Science and Social Good club and an iOS intern at Mozilla, called him out on it. “But you still worked at Facebook, too,” she said.
“Well, at least I signed before Cambridge Analytica,” Mr. Resendez said, a little bashful about the data privacy and election manipulation scandal that rocked the company this year. “Ninety-five percent of what Facebook is doing is delivering memes.”
Ms. Brown said a lot of students criticize Facebook and talk about how they would not work there, but ultimately join. “Everyone cares about ethics in tech before they get a contract,” she said.
Ms. Brown said she thought that could change soon, though, as the social stigma of working for Facebook began outweighing the financial benefits.
“Defense companies have had this reputation for a long time,” she said. “Social networks are just getting that.”
- Back in April, the WhatsApp cofounder Jan Koum announced plans to leave Facebook.
- But he’s still showing up to the office once a month so he can continue to collect $450 million in Facebook stock he’s contractually due from when Facebook bought his company.
- It’s a high-dollar example of “rest and vest,” in which big tech companies pay senior employees who don’t do much work.
- Koum has already sold over $7 billion in Facebook stock.
The WhatsApp cofounder Jan Koum said in April that he planned to leave Facebook, which bought his company for $19 billion in 2014. He’s already sold $7.1 billion worth of Facebook shares.
But he’s still showing up to the office, The Wall Street Journal reports, to collect one last payday: $450 million in stock.
Koum is resting and vesting, in Silicon Valley lingo, a state that often refers to wealthy entrepreneurs and engineers with one foot out the door at big tech companies who are allowed to continue to be officially employed until they’re able to collect stock and options in quarterly or annual increments.
Usually, stock awards after a merger are distributed on a four-year vesting schedule — if you last all four years, you get your entire stock grant. Koum’s last vesting date is November. He showed up at Facebook’s offices in mid-July, fulfilling a requirement of his employment contract, according to The Wall Street Journal.
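That vesting math can be sketched in a few lines. The grant size below comes from the article (over 20 million restricted shares); the equal quarterly tranches are purely an assumption for illustration, not a description of Koum’s actual schedule:

```python
# Illustrative sketch of a four-year, quarterly vesting schedule.
# TOTAL_SHARES is roughly the grant size the article reports;
# equal quarterly tranches are an assumption, not Koum's actual terms.
TOTAL_SHARES = 20_000_000
TOTAL_QUARTERS = 16  # four years of quarterly vesting

def vested_shares(quarters_elapsed: int) -> int:
    """Shares vested after a given number of completed quarters."""
    per_quarter = TOTAL_SHARES // TOTAL_QUARTERS
    return per_quarter * min(max(quarters_elapsed, 0), TOTAL_QUARTERS)

# After 14 of 16 quarters most of the grant has vested, but the
# final tranches still require remaining officially employed.
print(vested_shares(14))  # 17500000
print(vested_shares(16))  # 20000000
```

Under this toy schedule, skipping the last two quarters would leave 2.5 million shares on the table — which is why showing up once a month can be worth hundreds of millions of dollars.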
“Resting and vesting” is an open secret in Silicon Valley, Business Insider has reported. At some companies, the employees are called “coasters.” The HBO show “Silicon Valley” even spoofed it in an episode in which engineers hang out on a roof and don’t do any work.
“I’ve actually had a number of people, including today at Google X … send me pictures of themselves on a roof, kicking back doing nothing, with the hashtag ‘unassigned’ or ‘rest and vest.’ It’s something that really happens, and apparently, somewhat often,” Josh Brener, the actor who plays the lucky character who got to rest and vest in HBO’s “Silicon Valley,” told Business Insider last year.
From Business Insider’s report on the phenomenon:
“Facebook, for instance, has a fairly hush bonus program called ‘discretionary equity,’ a former Facebook engineer who received it said.

“DE is when the company hands an engineer a massive, extra chunk of restricted stock units, worth tens to hundreds of thousands of dollars. It’s a thank-you for a job well done. It also helps keep the person from jumping ship because DE vests over time. These are bonus grants that are signed by top executives, sometimes even CEO Mark Zuckerberg.”
Koum’s payday isn’t related to discretionary equity; it’s instead a result of the over 20 million restricted shares of Facebook he received when he sold WhatsApp. He has one more vesting day in August and one in November, according to filings with the Securities and Exchange Commission.
Koum reportedly decided to leave Facebook in the middle of a spat over how to integrate advertising into WhatsApp. A WhatsApp representative declined to comment, but The Journal reports that Koum is still employed at the social-networking giant.
When Koum left, he wrote that he was taking time off to collect “rare air-cooled Porsches” and play ultimate Frisbee.
How many Porsches can one buy with $450 million?
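A back-of-the-envelope answer, assuming an average price of $250,000 per rare air-cooled 911 — a made-up figure for illustration; real collectible prices vary enormously:

```python
# Rough arithmetic only; the average price per car is an assumption.
payout = 450_000_000          # Koum's final stock payday, per the article
avg_porsche_price = 250_000   # hypothetical average for a rare air-cooled 911
print(payout // avg_porsche_price)  # 1800
```

Even at collector prices, that is a garage of roughly 1,800 cars.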
1. Companies Worldwide Strive for GDPR Compliance
Companies around the world are scrambling to get their businesses and practices into compliance with the GDPR – a significant task for many of them. While technically, the deadline to get everything in order passed on May 25, for many companies the process will continue well into June and possibly beyond. Some companies are even shutting down in Europe for good, or for as long as it takes them to get in compliance.
Even with the deadline behind us, the GDPR continues to be a top story for the tech world and may remain so for some time to come.
2. Amazon Provides Facial Recognition Tech to Law Enforcement
Amazon can’t seem to go a whole month without showing up in a tech news roundup. This month it’s for a controversial story: selling use of Rekognition, their facial recognition software, to law enforcement agencies on the cheap.
Civil rights groups have called for the company to stop allowing law enforcement access to the tech out of concerns that increased government surveillance can pose a threat to vulnerable communities in the country. In spite of the public criticism, Amazon hasn’t backed off on providing the tech to authorities, at least as of this time.
3. Apple Looks Into Self-Driving Employee Shuttles
Of the many problems facing our world, the frustrating work commute is one that many of the brightest minds in tech deal with just like the rest of us. Which makes it a problem the biggest tech companies have a strong incentive to try to solve.
Apple is one of many companies that’s invested in developing self-driving cars as a possible solution, but while that goal is still (probably) years away, they’ve narrowed their focus to teaming up with VW to create self-driving shuttles just for their employees. Even that project is moving slower than the company had hoped, but they’re aiming to have some shuttles ready by the end of the year.
4. Court Weighs in on President’s Tendency to Block Critics on Twitter
Three years ago no one would have imagined that Twitter would be a president’s go-to source for making announcements, but today it’s used to that effect more frequently than official press conferences or briefings.
In a court battle that may sound surreal to many of us, a judge just found that the president can no longer legally block other users on Twitter. The court asserted that blocking users on a public forum like Twitter amounts to a violation of their First Amendment rights. The judgment does still allow for the president and other public officials to mute users they don’t agree with, though.
5. YouTube Launches Music Streaming Service
YouTube joined the ranks of Spotify, Pandora, and Amazon this past month with its own streaming music service. Consumers can use a free version of the service that includes ads, or pay $9.99 a month for the ad-free version.
With so many similar services already on the market, people weren’t exactly clamoring for another music streaming option. But since YouTube is likely to remain the reigning source for videos, it doesn’t necessarily need to unseat Spotify to still be okay. And with access to Google’s extensive user data, it may be able to provide more useful recommendations than its main competitors in the space, which is one way the service could differentiate itself.
6. Facebook Institutes Political Ad Rules
Facebook hasn’t yet left behind the controversies of the last election. The company is still working to proactively respond to criticism of its role in the spread of political propaganda many believe influenced election results. One of the solutions they’re trying is a new set of rules for any political ads run on the platform.
Any campaign that intends to run Facebook ads is now required to verify its identity by entering a verification code from a card Facebook mails to its address. While Facebook has been promoting these new rules for a few weeks to politicians active on the platform, some felt blindsided when they realized, right before their primaries no less, that they could no longer place ads without waiting 12 to 15 days for a verification code to come in the mail. Politicians in this position blame the company for making a change that could affect their chances in the upcoming election.
Even in its efforts to avoid swaying elections, Facebook has found itself criticized for doing just that. The company is probably feeling at this point like it just can't win.
7. Another Big Month for Tech IPOs
This year has seen one tech IPO after another and this month is no different. Chinese smartphone company Xiaomi has a particularly large IPO in the works. The company seeks to join the Hong Kong stock exchange on June 7 with an initial public offering that experts anticipate could reach $10 billion.
The online lending platform Greensky started trading on the New York Stock Exchange on May 23 and sold 38 million shares in its first day, 4 million more than expected. This month continues 2018’s trend of tech companies going public, largely to great success.
8. StumbleUpon Shuts Down
In the internet’s ongoing evolution, there will always be tech companies that win and those that fall by the wayside. StumbleUpon, a content discovery platform that had its heyday in the early aughts, is officially shutting down on June 30.
Since its 2002 launch, the service has helped over 40 million users “stumble upon” 60 billion new websites and pieces of content. The company behind StumbleUpon plans to create a new platform, called Mix, that serves a similar purpose and may prove more useful to former StumbleUpon users.
9. Uber and Lyft Invest in Driver Benefits
In spite of their ongoing success, the popular ridesharing platforms Uber and Lyft have faced their share of criticism since they came onto the scene. One of the common complaints critics have made is that the companies don’t provide proper benefits to their drivers. And in fact, the companies have fought to keep drivers classified legally as contractors so they’re off the hook for covering the cost of employee taxes and benefits.
Recently both companies have taken steps to make driving for them a little more attractive. Uber has begun offering Partner Protection to its drivers in Europe, which includes health insurance, sick pay, and parental leave – so far nothing similar in the U.S. though. For its part, Lyft is investing $100 million in building driver support centers where their drivers can stop to get discounted car maintenance, tax help, and customer support help in person from Lyft staff. It’s not the same as getting full employee benefits (in the U.S. at least), but it’s something.
With Zuckerberg testifying to the US Congress over Facebook’s data privacy practices and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public consciousness. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.
Suffice it to say that neither of the above compares to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.
In 2013, Edward Snowden blew the whistle on the systematic US spy programs he had helped to build. Perhaps the largest revelation to come out of the trove of documents he released was PRISM, an NSA program that collects internet communications data from US technology companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, in instances where corporate cooperation wasn’t achievable, the NSA enticed third-party countries to clandestinely tap communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of the web activity of anyone using the internet.
But this is just in the US, right? Surely policies like this wouldn’t be implemented in Europe.
GCHQ, the UK’s intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at the Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. Many of these techniques were made possible by the 2016 Investigatory Powers Act, which Snowden describes as the most “extreme surveillance in the history of western democracy”.
But despite all this, the age-old refrain “if you’ve got nothing to hide, you’ve got nothing to fear” often rings out in debates over privacy.
Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance, drawing on the selfsame rhetoric used by Joseph Goebbels, propaganda minister for the Nazi regime.
In drafting legislation for the Investigatory Powers Act, Theresa May said that such extremes were necessary to ensure “no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar”.
When levelled against the fear of terrorism and death, it’s easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted; privacy should and always will be a social norm, despite what Mark Zuckerberg said in 2010. Although I’m sure he might have a different answer now.
So you just read emails and look at cat memes online, why would you care about privacy?
In the same way we’re able to close our living room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It’s a well-rehearsed idea that nowadays we’re more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we’re being watched, we alter our behaviour in line with what’s expected.
As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Your thinking aligns with the status quo and we lose the boundless ability of the internet to search and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.
This draws obvious comparisons with Bentham’s Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: there is a central guard tower surrounded by cells, and in the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell’s 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.
In reality, there’s actually a lot more at stake here.
In the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.
Kafka’s The Trial, in which a bureaucracy uses citizens’ information to make decisions about them but denies them the ability to participate in how that information is used, therefore seems a more apt comparison. The issue here is that corporations and, even more so, states have been allowed to comb our data and make decisions that affect us without our consent.
Maybe, as a member of a western democracy, you don’t think this matters. But what if you’re a member of a minority group in an oppressive regime? What if you’re arrested because a computer algorithm can’t separate humour from intent to harm?
On the other hand, maybe you trust the intentions of your government, but how much faith do you have in its ability to keep your data private? The recent hack of the SEC shows that even government systems aren’t safe from attackers. When a business database is breached, perhaps your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you’ve lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control the physical clouds, he who controls the modern cloud will rule the world.
Perhaps you think that even this doesn’t matter; if it allows the government to protect us from those who intend to cause harm, then it’s worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.
Intelligence is the strategic collection of pertinent facts; bulk data collection cannot, therefore, be intelligent. As Bill Binney puts it, “bulk data kills people”, because technicians are so overwhelmed that they can’t isolate what’s useful. Data collection as it stands can only focus on retribution rather than reduction.
Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The nothing-to-hide, nothing-to-fear mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to hold in that relationship.
To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.
Let’s put aside Instagram, WhatsApp and other Facebook products for a minute. Facebook has built the world’s biggest social network. But that’s not what it sells. You’ve probably heard the saying: “if a product is free, it means that you are the product.”
And it’s particularly true in this case, because Facebook is the world’s second biggest advertising company, behind Google. During the last quarter of 2017, Facebook reported $12.97 billion in revenue, including $12.78 billion from ads.
That’s 98.5 percent of Facebook’s revenue coming from ads.
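That share follows directly from the two reported figures; a quick sanity check in Python:

```python
# Q4 2017 figures cited above: $12.97B total revenue, $12.78B from ads.
total_revenue = 12.97  # billions of dollars
ad_revenue = 12.78     # billions of dollars

# Ad revenue as a percentage of total revenue.
ad_share = ad_revenue / total_revenue * 100
print(f"{ad_share:.1f}%")  # prints "98.5%"
```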
Ads aren’t necessarily a bad thing. But Facebook has reached ad saturation in the newsfeed. So the company has two options — creating new products and ad formats, or optimizing those sponsored posts.
This isn’t a zero-sum game — Facebook has been doing both at the same time. That’s why you’re seeing more ads on Instagram and Messenger. And that’s also why ads on Facebook seem more relevant than ever.
If Facebook can show you relevant ads and you end up clicking more often on those ads, then advertisers will pay Facebook more money.
So Facebook has been collecting as much personal data about you as possible — it’s all about showing you the best ad. The company knows your interests, what you buy, where you go and who you’re sleeping with.
You can’t hide from Facebook
Facebook’s terms and conditions are a giant lie. They are purposely misleading, too long and too broad. So you can’t just read the company’s terms of service and understand what it knows about you.
That’s why some people have been downloading their Facebook data. You can do it too, it’s quite easy. Just head over to your Facebook settings and click the tiny link that says “Download a copy of your Facebook data.”
In that archive file, you’ll find your photos, your posts, your events, etc. But if you keep digging, you’ll also find your private messages on Messenger (by default, nothing is encrypted).
And if you keep digging a bit more, chances are you’ll also find your entire address book and even metadata about your SMS messages and phone calls.
All of this is by design, and you agreed to it. Facebook has unified terms of service and shares user data across all its apps and services (except WhatsApp data in Europe, for now). So if you follow a clothing brand on Instagram, you could see an ad from that brand on Facebook.com.
Messaging apps are privacy traps
But Facebook has also been using this trick quite a lot with Messenger. You might not remember it, but the onboarding experience in Messenger is really aggressive.
On iOS, the app shows you a fake permission popup to access your address book, with “OK” and “Learn More” buttons. The company uses a fake popup because iOS only lets an app trigger the real permission dialog once.
There’s a blinking arrow below the OK button.
If you click on “Learn More”, you get a giant blue button that says “Turn On”. Everything about this screen is misleading and Messenger tries to manipulate your emotions.
“Messenger only works when you have people to talk to,” it says. Nobody wants to be lonely, that’s why Facebook implies that turning on this option will give you friends.
Even worse, it says “if you skip this step, you’ll need to add each contact one-by-one to message them.” This is simply a lie as you can automatically talk to your Facebook friends using Messenger without adding them one-by-one.
If you tap on “Not Now”, Messenger will show you a fake notification every now and then to push you to enable contact syncing. If you tap on yes and disable it later, Facebook still keeps all your contacts on its servers.
On Android, you can let Messenger manage your SMS messages. Of course, you guessed it, Facebook uploads all your metadata. Facebook knows who you’re texting, when, how often.
Even if you disable it later, Facebook will keep this data for later reference.
But my favorite thing is probably peer-to-peer payments. In some countries, you can pay back your friends using Messenger. It’s free! You just have to add your card to the app.
It turns out that Facebook also buys data about your offline purchases. The next time you pay for a burrito with your credit card, Facebook will learn about this transaction and match this credit card number with the one you added in Messenger.
In other words, Messenger is a great Trojan horse designed to learn everything about you.
And the next time an app asks you to share your address book, there’s a 99-percent chance that this app is going to mine your address book to get new users, spam your friends, improve ad targeting and sell email addresses to marketing companies.
I could say the same thing about all the other permission popups on your phone. Be careful when you install an app from the Play Store or open an app for the first time on iOS. It’s tempting to enable something when a feature doesn’t work without it, but that’s how Facebook ends up knowing everything about you.
GDPR to the rescue
There’s one last hope, and that hope is GDPR. I encourage you to read Natasha Lomas’ excellent explanation of GDPR on TechCrunch to understand what the European regulation is all about.
Many of the misleading practices currently happening at Facebook will have to change. Companies won’t be able to force people to opt in the way Messenger does. Data collection will have to be minimized to essential features. And Facebook will have to explain to its users why it needs all this data.
If Facebook doesn’t comply, the company will have to pay up to 4 percent of its global annual turnover. But that doesn’t stop you from actively reclaiming your online privacy right now.
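For scale, GDPR Article 83(5) actually sets the maximum fine at the greater of €20 million or 4 percent of total worldwide annual turnover. A minimal sketch of that ceiling (the turnover figures below are illustrative, not Facebook’s reported numbers):

```python
# GDPR Article 83(5): the maximum fine is the greater of EUR 20 million
# or 4% of total worldwide annual turnover.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Return the fine ceiling in euros for a given annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company turning over EUR 40 billion faces a ceiling of EUR 1.6 billion;
# a smaller firm at EUR 100 million is still exposed to the EUR 20M floor.
print(max_gdpr_fine(40e9))   # prints 1600000000.0
print(max_gdpr_fine(100e6))  # prints 20000000.0
```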
You can’t be invisible on the internet, but you have to be conscious about what’s happening behind your back. Every time a company asks you to tap OK, think about what’s behind this popup. You can’t say that nobody told you.