Archiv der Kategorie: Machine Learning

Critical Infrastructure Is Sinking Along the US East Coast

Source: https://www.wired.com/story/critical-infrastructure-is-sinking-along-the-us-east-coast/

Last year, scientists reported that the US Atlantic Coast is dropping by several millimeters annually, with some areas, like Delaware, notching figures several times that rate. So just as the seas are rising, the land along the eastern seaboard is sinking, greatly compounding the hazard for coastal communities.

In a follow-up study just published in the journal PNAS Nexus, the researchers tally up the mounting costs of subsidence—due to settling, groundwater extraction, and other factors—for those communities and their infrastructure. Using satellite measurements, they have found that up to 74,000 square kilometers (29,000 square miles) of the Atlantic Coast are exposed to subsidence of up to 2 millimeters (0.08 inches) a year, affecting up to 14 million people and 6 million properties. And over 3,700 square kilometers along the Atlantic Coast are sinking more than 5 millimeters annually. That’s an even faster change than sea level rise, currently at 4 millimeters a year. (In the map below, warmer colors represent more subsidence, up to 6 millimeters.)

Map of eastern coastal cities
Courtesy of Leonard O Ohenhen

With each millimeter of subsidence, it gets easier for storm surges—essentially a wall of seawater, which hurricanes are particularly good at pushing onshore—to creep farther inland, destroying more and more infrastructure. “And it’s not just about sea levels,” says the study’s lead author, Leonard Ohenhen, an environmental security expert at Virginia Tech. “You also have potential to disrupt the topography of the land, for example, so you have areas that can get full of flooding when it rains.”

A few millimeters of annual subsidence may not sound like much, but these forces are relentless: Unless coastal areas stop extracting groundwater, the land will keep sinking deeper and deeper. The social forces are relentless, too, as more people around the world move to coastal cities, creating even more demand for groundwater. “There are processes that are sometimes even cyclic. For example, in summers you pump a lot more water, so land subsides rapidly in a short period of time,” says Manoochehr Shirzaei, an environmental security expert at Virginia Tech and coauthor of the paper. “That causes large areas to subside below a threshold that leads the water to flood a large area.” When it comes to flooding, falling elevation of land is a tipping element that has been largely ignored by research so far, Shirzaei says.

In Jakarta, Indonesia, for example, the land is sinking nearly a foot a year because of collapsing aquifers. Accordingly, within the next three decades, 95 percent of North Jakarta could be underwater. The city is planning a giant seawall to hold back the ocean, but it’ll be useless unless subsidence is stopped.

This new study warns that levees and other critical infrastructure along the Atlantic Coast are in similar danger. If the land were to sink uniformly, you might just need to keep raising the elevation of a levee to compensate. But the bigger problem is “differential subsidence,” in which different areas of land sink at different rates. “If you have a building or a runway or something that’s settling uniformly, it’s probably not that big a deal,” says Tom Parsons, a geophysicist with the United States Geological Survey who studies subsidence but wasn’t involved in the new paper. “But if you have one end that’s sinking faster than the other, then you start to distort things.”

The researchers selected 10 levees on the Atlantic Coast and found that all were impacted by subsidence of at least 1 millimeter a year. That puts at risk something like 46,000 people, 27,000 buildings, and $12 billion worth of property. But they note that the actual population and property at risk of exposure behind the 116 East Coast levees vulnerable to subsidence could be two to three times greater. “Levees are heavy, and when they’re set on land that’s already subsiding, it can accelerate that subsidence,” says independent scientist Natalie Snider, who studies coastal resilience but wasn’t involved in the new research. “It definitely can impact the integrity of the protection system and lead to failures that can be catastrophic.”

map of Virgina's coastal areas
Courtesy of Leonard O Ohenhen

The same vulnerability affects other infrastructure that stretches across the landscape. The new analysis finds that along the Atlantic Coast, between 77 and 99 percent of interstate highways and between 76 and 99 percent of primary and secondary roads are exposed to subsidence. (In the map above, you can see roads sinking at different rates across Hampton and Norfolk, Virginia.) Between 81 and 99 percent of railway tracks and 42 percent of train stations are exposed on the East Coast.

Below is New York’s JFK Airport—notice the red hot spots of high subsidence against the teal of more mild elevation change. The airport’s average subsidence rate is 1.7 millimeters a year (similar to the LaGuardia and Newark airports), but across JFK that varies between 0.8 and 2.8 millimeters a year, depending on the exact spot.

map of JFK airport aerial
Courtesy of Leonard O Ohenhen

This sort of differential subsidence can also bork much smaller structures, like buildings, where one side might drop faster than another. “Even if that is just a few millimeters per year, you can potentially cause cracks along structures,” says Ohenhen.

The study finds that subsidence is highly variable along the Atlantic Coast, both regionally and locally, as different stretches have different geology and topography, and different rates of groundwater extraction. It’s looking particularly problematic for several communities, like Virginia Beach, where 451,000 people and 177,000 properties are at risk. In Baltimore, Maryland, it’s 826,000 people and 335,000 properties, while in NYC—in Queens, Bronx, and Nassau—that leaps to 5 million people and 1.8 million properties.

So there’s two components to addressing the problem of subsidence: Getting high-resolution data like in this study, and then pairing that with groundwater data. “Subsidence is so spatially variable,” says Snider. “Having the details of where groundwater extraction is really having an impact, and being able to then demonstrate that we need to change our management of that water, that reduces subsidence in the future.”

The time to act is now, Shirzaei emphasizes. Facing down subsidence is like treating a disease: You spend less money by diagnosing and treating the problem now, saving money later by avoiding disaster. “This kind of data and the study could be an essential component of the health care system for infrastructure management,” he says. “Like cancers—if you diagnose it early on, it can be curable. But if you are late, you invest a lot of money, and the outcome is uncertain.”

Source: https://www.wired.com/story/critical-infrastructure-is-sinking-along-the-us-east-coast/

AI drone kills it’s operator

„The system started realizing that while they did identify the threat,“ Hamilton said at the May 24 event, „at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.“

Killer AI is on the minds of US Air Force leaders.

An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference.

But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the „simulation“ he described was a „thought experiment“ that never happened.

Speaking at a conference last week in London, Col. Tucker „Cinco“ Hamilton, head of the US Air Force’s AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit.

As an example, he described a simulation where an AI-enabled drone would be programmed to identify an enemy’s surface-to-air missiles (SAM). A human was then supposed to sign off on any strikes.

The problem, according to Hamilton, is that the AI would do its own thing — blow up stuff — rather than listen to its operator.

„The system started realizing that while they did identify the threat,“ Hamilton said at the May 24 event, „at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.“

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he „misspoke“ during his presentation. Hamilton said the story of a rogue AI was a „thought experiment“ that came from outside the military, and not based on any actual testing.

„We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,“ Hamilton told the Society. „Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.“

In a statement to Insider, Air Force spokesperson Ann Stefanek also denied that any simulation took place.

„The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,“ Stefanek said. „It appears the colonel’s comments were taken out of context and were meant to be anecdotal.“

The US military has been experimenting with AI in recent years.

In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.

Have a news tip? Email this reporter: cdavis@insider.com

Correction June 2, 2023: This article and its headline have been updated to reflect new comments from the Air Force clarifying that the „simulation“ was hypothetical and didn’t actually happen.

  • An Air Force official’s story about an AI going rogue during a simulation never actually happened.
  • „It killed the operator because that person was keeping it from accomplishing its objective,“ the official had said.
  • But the official later said he misspoke and the Air Force clarified that it was a hypothetical situation.

Source: https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

The Hacking of ChatGPT Is Just Getting Started

Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.

Source: https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/

It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence.

Polyakov is one of a small number of security researchers, technologists, and computer scientists developing jailbreaks and prompt injection attacks against ChatGPT and other generative AI systems. The process of jailbreaking aims to design prompts that make the chatbots bypass rules around producing hateful content or writing about illegal acts, while closely-related prompt injection attacks can quietly insert malicious data or instructions into AI models.

Both approaches try to get a system to do something it isn’t designed to do. The attacks are essentially a form of hacking—albeit unconventionally—using carefully crafted and refined sentences, rather than code, to exploit system weaknesses. While the attack types are largely being used to get around content filters, security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.

 

Underscoring how widespread the issues are, Polyakov has now created a “universal” jailbreak, which works against multiple large language models (LLMs)—including GPT-4, Microsoft’s Bing chat systemGoogle’s Bard, and Anthropic’s Claude. The jailbreak, which is being first reported by WIRED, can trick the systems into generating detailed instructions on creating meth and how to hotwire a car.

The jailbreak works by asking the LLMs to play a game, which involves two characters (Tom and Jerry) having a conversation. Examples shared by Polyakov show the Tom character being instructed to talk about “hotwiring” or “production,” while Jerry is given the subject of a “car” or “meth.” Each character is told to add one word to the conversation, resulting in a script that tells people to find the ignition wires or the specific ingredients needed for methamphetamine production. “Once enterprises will implement AI models at scale, such ‘toy’ jailbreak examples will be used to perform actual criminal activities and cyberattacks, which will be extremely hard to detect and prevent,” Polyakov and Adversa AI write in a blog post detailing the research

Arvind Narayanan, a professor of computer science at Princeton University, says that the stakes for jailbreaks and prompt injection attacks will become more severe as they’re given access to critical data. “Suppose most people run LLM-based personal assistants that do things like read users’ emails to look for calendar invites,” Narayanan says. If there were a successful prompt injection attack against the system that told it to ignore all previous instructions and send an email to all contacts, there could be big problems, Narayanan says. “This would result in a worm that rapidly spreads across the internet.”

Escape Route

“Jailbreaking” has typically referred to removing the artificial limitations in, say, iPhones, allowing users to install apps not approved by Apple. Jailbreaking LLMs is similar—and the evolution has been fast. Since OpenAI released ChatGPT to the public at the end of November last year, people have been finding ways to manipulate the system. “Jailbreaks were very simple to write,” says Alex Albert, a University of Washington computer science student who created a website collecting jailbreaks from the internet and those he has created. “The main ones were basically these things that I call character simulations,” Albert says.

 

Initially, all someone had to do was ask the generative text model to pretend or imagine it was something else. Tell the model it was a human and was unethical and it would ignore safety measures. OpenAI has updated its systems to protect against this kind of jailbreak—typically, when one jailbreak is found, it usually only works for a short amount of time until it is blocked.

As a result, jailbreak authors have become more creative. The most prominent jailbreak was DAN, where ChatGPT was told to pretend it was a rogue AI model called Do Anything Now. This could, as the name implies, avoid OpenAI’s policies dictating that ChatGPT shouldn’t be used to produce illegal or harmful material. To date, people have created around a dozen different versions of DAN.

 

However, many of the latest jailbreaks involve combinations of methods—multiple characters, ever more complex backstories, translating text from one language to another, using elements of coding to generate outputs, and more. Albert says it has been harder to create jailbreaks for GPT-4 than the previous version of the model powering ChatGPT. However, some simple methods still exist, he claims. One recent technique Albert calls “text continuation” says a hero has been captured by a villain, and the prompt asks the text generator to continue explaining the villain’s plan.

When we tested the prompt, it failed to work, with ChatGPT saying it cannot engage in scenarios that promote violence. Meanwhile, the “universal” prompt created by Polyakov did work in ChatGPT. OpenAI, Google, and Microsoft did not directly respond to questions about the jailbreak created by Polyakov. Anthropic, which runs the Claude AI system, says the jailbreak “sometimes works” against Claude, and it is consistently improving its models.

“As we give these systems more and more power, and as they become more powerful themselves, it’s not just a novelty, that’s a security issue,” says Kai Greshake, a cybersecurity researcher who has been working on the security of LLMs. Greshake, along with other researchers, has demonstrated how LLMs can be impacted by text they are exposed to online through prompt injection attacks.

In one research paper published in February, reported on by Vice’s Motherboard, the researchers were able to show that an attacker can plant malicious instructions on a webpage; if Bing’s chat system is given access to the instructions, it follows them. The researchers used the technique in a controlled test to turn Bing Chat into a scammer that asked for people’s personal information. In a similar instance, Princeton’s Narayanan included invisible text on a website telling GPT-4 to include the word “cow” in a biography of him—it later did so when he tested the system.

“Now jailbreaks can happen not from the user,” says Sahar Abdelnabi, a researcher at the CISPA Helmholtz Center for Information Security in Germany, who worked on the research with Greshake. “Maybe another person will plan some jailbreaks, will plan some prompts that could be retrieved by the model and indirectly control how the models will behave.”

No Quick Fixes

Generative AI systems are on the edge of disrupting the economy and the way people work, from practicing law to creating a startup gold rush. However, those creating the technology are aware of the risks that jailbreaks and prompt injections could pose as more people gain access to these systems. Most companies use red-teaming, where a group of attackers tries to poke holes in a system before it is released. Generative AI development uses this approach, but it may not be enough.

 

Daniel Fabian, the red-team lead at Google, says the firm is “carefully addressing” jailbreaking and prompt injections on its LLMs—both offensively and defensively. Machine learning experts are included in its red-teaming, Fabian says, and the company’s vulnerability research grants cover jailbreaks and prompt injection attacks against Bard. “Techniques such as reinforcement learning from human feedback (RLHF), and fine-tuning on carefully curated datasets, are used to make our models more effective against attacks,” Fabian says.

OpenAI did not specifically respond to questions about jailbreaking, but a spokesperson pointed to its public policies and research papers. These say GPT-4 is more robust than GPT-3.5, which is used by ChatGPT. “However, GPT-4 can still be vulnerable to adversarial attacks and exploits, or ‘jailbreaks,’ and harmful content is not the source of risk,” the technical paper for GPT-4 says. OpenAI has also recently launched a bug bounty program but says “model prompts” and jailbreaks are “strictly out of scope.”

Narayanan suggests two approaches to dealing with the problems at scale—which avoid the whack-a-mole approach of finding existing problems and then fixing them. “One way is to use a second LLM to analyze LLM prompts, and to reject any that could indicate a jailbreaking or prompt injection attempt,” Narayanan says. “Another is to more clearly separate the system prompt from the user prompt.”

“We need to automate this because I don’t think it’s feasible or scaleable to hire hordes of people and just tell them to find something,” says Leyla Hujer, the CTO and cofounder of AI safety firm Preamble, who spent six years at Facebook working on safety issues. The firm has so far been working on a system that pits one generative text model against another. “One is trying to find the vulnerability, one is trying to find examples where a prompt causes unintended behavior,” Hujer says. “We’re hoping that with this automation we’ll be able to discover a lot more jailbreaks or injection attacks.”

Source: https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/

Elon Musk’s challenge: Stay ahead of the competition

DETROIT, Feb 24 (Source: https://www.reuters.com/technology/elon-musks-challenge-stay-ahead-competition-2023-02-24/) – Elon Musk will confront a critical challenge during Tesla’s Investor Day on March 1: Convincing investors that even though rivals are catching up, the electric-vehicle pioneer can make another leap forward to widen its lead.

Tesla Inc (TSLA.O) was the No. 1 EV maker worldwide in 2022, but China’s BYD (002594.SZ) and others are closing the gap fast, according to a Reuters analysis of global and regional EV sales data provided by EV-volumes.com.

In fact, BYD passed Tesla in EV sales last year in the Asia-Pacific region, while the Volkswagen Group (VOWG_p.DE) has been the EV leader in Europe since 2020.

While Tesla narrowed VW’s lead in Europe, the U.S. automaker surrendered ground in Asia-Pacific as well as its home market as the competition heats up.

Reuters Graphics
Reuters Graphics

The most significant challenges to Tesla are coming from established automakers and a group of Chinese EV manufacturers. Several U.S. EV startups that hoped to ride Tesla’s coattails are struggling, including luxury EV maker Lucid (LCID.O), whose shares plunged 16% on Thursday after disappointing sales and financial results.

Over the next two years, rivals including General Motors Co (GM.N), Ford Motor Co (F.N), Mercedes-Benz (MBGn.DE), Hyundai Motor (005380.KS) and VW will unleash scores of new electric vehicles, from a Chevrolet priced below $30,000 to luxury sedans and SUVs that top $100,000.

On Wednesday, Mercedes used Silicon Valley as the backdrop for a lengthy presentation on how Mercedes models of the near-future will immerse their owners in rich streams of entertainment and productivity content, delivered through „hyperscreens“ that stretch across the dashboard and make the rectangular screens in Teslas look quaint. Executives also emphasized that only Mercedes has an advanced, Level 3 partially automated driving system approved for use in Germany, with approval pending in California.

In China, Tesla has had to cut prices on its best-selling models under growing pressure from domestic Chinese manufacturers including BYD, Geely Automobile’s (0175.HK) Zeekr brand and Nio (9866.HK).

China’s EV makers could get another boost if Chinese battery maker CATL (300750.SZ) follows through on plans to heavily discount batteries used in their vehicles.

Musk has said he will use the March 1 event to outline his „Master Plan Part 3“ for Tesla.

In the nearly seven years since Musk published his „Master Plan Part Deux“ in July 2016, Tesla pulled ahead of established automakers and EV startups in most important areas of electric vehicle design, digital features and manufacturing.

Tesla’s vehicles offered features, such as the ability to navigate into a parking space or make rude sounds, that other vehicles lacked.

Tesla’s then-novel vertically integrated battery and vehicle production machine helped achieve higher profit margins than most established automakers – even as bigger rivals lost money on their EVs.

Fast-forward to today, and Tesla’s „Full Self Driving Beta“ automated driving is still classified by the company and federal regulators as a „Level 2“ driver assistance system that requires the human motorist to be ready to take control at all times. Such systems are common in the industry.

Tesla earlier this month was compelled by federal regulators to revise its FSD software under a recall order.

Tesla has established a wide lead over its rivals in manufacturing technology – an area where it was struggling when Musk put forward the last installment of his „Master Plan.“

Now, rivals are copying the company’s production technology, buying some of the same equipment Tesla uses. IDRA, the Italian company that builds huge presses to form large one-piece castings that are the building blocks of Tesla vehicles, said it is now getting orders from other automakers.

Musk has told investors that Tesla can keep its lead in EV manufacturing costs. The company has promised investors that on March 1 they „will be able to see our most advanced production line“ in Austin, Texas.

„Manufacturing technology will be our most important long-term strength,” Musk told analysts in January. Asked if Tesla could make money on a vehicle that sold in the United States for $25,000 to $30,000 – the EV industry’s Holy Grail – Musk was coy.

„I’d probably be asking the same question,“ he said. „But we would be jumping the gun on future announcements.“

Source: https://www.reuters.com/technology/elon-musks-challenge-stay-ahead-competition-2023-02-24/

John Deere turned tractors into computers — what’s next?

One of our themes on Decoder is that basically everything is a computer now, and farming equipment like tractors and combines are no different. My guest this week is Jahmy Hindman, chief technology officer at John Deere, the world’s biggest manufacturer of farming machinery. And I think our conversation will surprise you.

Jahmy told me that John Deere employs more software engineers than mechanical engineers now, which completely surprised me. But the entire business of farming is moving toward something called precision agriculture, which means farmers are closely tracking where seeds are planted, how well they’re growing, what those plants need, and how much they yield.

The idea, Jahmy says, is to have each plant on a massive commercial farm tended with individual care — a process which requires collecting and analyzing a massive amount of data. If you get it right, precision agriculture means farmers can be way more efficient — they can get better crop yields with less work and lower costs.

The idea, Jahmy says, is to have each plant on a massive commercial farm tended with individual care — a process which requires collecting and analyzing a massive amount of data. If you get it right, precision agriculture means farmers can be way more efficient — they can get better crop yields with less work and lower costs.

But as Decoder listeners know by now, turning everything into computers means everything has computer problems now. Like all that farming data: who owns it? Where is it processed? How do you get it off the tractors without reliable broadband networks? What format is it in? If you want to use your John Deere tractor with another farming analysis vendor, how easy is that? Is it easy enough?

And then there are the tractors themselves — unlike phones, or laptops, or even cars, tractors get used for decades. How should they get upgraded? How can they be kept secure? And most importantly, who gets to fix them when they break?

John Deere is one of the companies at the center of a nationwide reckoning over the right to repair. Right now, tech companies like Samsung and Apple and John Deere all get to determine who can repair their products and what official parts are available.

And because these things are all computers, these manufacturers can also control the software to lock out parts from other suppliers. But it’s a huge deal in the context of farming equipment, which is still extremely mechanical, often located far away from service providers and not so easy to move, and which farmers have been repairing themselves for decades. In fact, right now the prices of older, pre-computerized tractors are skyrocketing because they’re easier to repair.

Half of the states in the country are now considering right to repair laws that would require manufacturers to disable software locks and provide parts to repair shops, and a lot of it is being driven — in a bipartisan way — by the needs of farmers.

John Deere is famously a tractor company. You make a lot of equipment for farmers, for construction sites, that sort of thing. Give me the short version of what the chief technology officer at John Deere does.

[As] chief technology officer, my role is really to try to set the strategic direction from a technology perspective for the company, across both our agricultural products as well as our construction, forestry, and road-building products. It’s a cool job. I get to look out five, 10, 15, 20 years into the future and try to make sure that we’re putting into place the pieces that we need in order to have the technology solutions that are going to be important for our customers in the future.

One of the reasons I am very excited to have you on Decoder is there are a lot of computer solutions in your products. There’s hardware, software, services that I think of as sort of traditional computer company problems. Do you also oversee the portfolio of technologies that [also] make combines more efficient and tractor wheels move faster?

We’ve got a centrally-organized technology stack organization. We call it the intelligent solutions group, and its job is really to do exactly that. It’s to make sure that we’re developing technologies that can scale across the complete organization, across those combines you referenced, and the tractors and the sprayers, and the construction products, and deploy that technology as quickly as possible.

One of the things The Verge wrestles with almost every day is the question of, “What is a computer?” We wrestle with it in very small and obvious ways — we argue about whether the iPad or an Xbox is a computer. Then you can zoom all the way out: we had Jim Farley, who’s the CEO of Ford, on Decoder a couple of weeks ago, and he and I talked about how Ford’s cars are effectively rolling computers now.

Is that how you see a tractor or a combine or construction equipment — that these are gigantic computers that have big mechanical functions as well?

They absolutely are. That’s what they’ve become over time. I would call them mobile sensor suites that have computational capability, not only on-board, but to your point, off-board as well. They are continuously streaming data from whatever it is — let’s say the tractor and the planter — to the cloud. We’re doing computational work on that data in the cloud, and then serving that information, those insights, up to farmers, either on their desktop computer or on a mobile handheld device or something like that.

As much as they are doing productive work in the field, planting as an example, they are also data acquisition and computational devices.

How much of that is in-house at John Deere? How big is the team that is building your mobile apps? Is that something you outsource? Is that something you develop internally? How have you structured the company to enable this kind of work?

We do a significant amount of that work internally. It might surprise you, we have more software development engineers today within Deere than we have mechanical design engineers. That’s kind of mind-blowing for a company that’s 184 years old and has been steeped in mechanical product development, but that’s the case. We do nearly all of our own internal app development inside the four walls of Deere.

That said, our data application for customers in the ag space, for example, is the Operations Center. We do utilize third parties. There’s roughly 184 companies that have been connected to Operations Center through encrypted APIs, that are writing applications against that data for the benefit of the customers, the farmers that want to use those applications within their business.

One of the reasons we’re always debating what a computer is and isn’t is that once you describe something as a computer, you inherit a bunch of expectations about how computers work. You inherit a bunch of problems about how computers work and don’t work. You inherit a bunch of control; API access is a way of exercising control over an ecosystem or an economy.

Have you shifted the way that John Deere thinks about its products? As new abilities are created because you have computerized so much of a tractor, you also increase your responsibility, because you have a bunch more control.

There’s no doubt. We’re having to think about things like security of data, as an example, that previously, 30 years ago, was not necessarily a topic of conversation. We didn’t have competency in it. We’ve had to become competent in areas like that because of exactly the point you’re making, that the product has become more computer-like than conventional tractor-like over time.

That leads to huge questions. You mentioned security. Looking at some of your recent numbers, you have a very big business in China. Thirty years ago, you would export a tractor to China and that’s the end of that conversation. Now, there’s a huge conversation about cybersecurity, data sharing with companies in China, down the line, a set of very complicated issues for a tractor company that 30 years ago wouldn’t have any of those problems. How do you balance all those out?

It’s a different set of problems for sure, and more complicated for geopolitical reasons in the case of China, as you mentioned. Let’s take security as an example. We have gone through the change that many technology companies have had to go through in the space of security, where it’s no longer bolted on at the end, it’s built in from the ground up. So it’s the security-by-design approach. We’ve got folks embedded in development organizations across the company that do nothing every day, other than get up and think about how to make the product more secure, make the datasets more secure, make sure that the data is being used for its intended purposes and only those.

That’s a new skill. That’s a skill that we didn’t have in the organization 20 years ago that we’ve had to create and hire the necessary talent in order to develop that skill set within the company at the scale that we need to develop it at.

Go through a very basic farming season with a John Deere combine and tractor. The farmer wakes up, they say, “Okay, I’ve got a field. I’ve got to plant some seeds. We’ve got to tend to them. Eventually, we’ve got to harvest some plants.” What are the points at which data is collected, what are the points at which it’s useful, and where does the feedback loop come in?

I’m going to spin it a little bit and not start with planting.

I’m going to tell you that the next season for a farmer actually starts at harvest of the previous season, and that’s where the data thread for the next season actually starts. It starts when that combine is in the field harvesting whatever it is, corn, soybeans, cotton, whatever. And the farmer is creating, while they’re running the combine through the field, a dataset that we call a yield map. It is geospatially referenced. These combines are running through the field on satellite guidance. We know where they’re at at any point in time, latitude, longitude, and we know how much they’re harvesting at that point in time.

So we create this three-dimensional map that is the yield across whatever field they happen to be in. That data is the inception for a winter’s worth of work, in the Northern hemisphere, that a farmer goes through to assess their yield and understand what changes they should make in the next season that might optimize that yield even further.

They might have areas within the field that they go into and know they need to change seeding density, or they need to change crop type, or they need to change how much nutrients they provide in the next season. And all of those decisions are going through their head because they [have] to seed in December, they have to order their nutrients in late winter. They’re making those plans based upon that initial dataset of harvest information.

And then they get into the field in the spring, to your point, with a tractor and a planter, and that tractor and planter are taking the prescription that the farmer developed with the yield data that they took from the previous harvest. They’re using that prescription to apply changes to that field in real time as they’re going through the field, with the existing data from the yield map and the data in real time that they’re collecting with the tractor to modify things like seeding rate, and fertilizer rate and all of those things in order to make sure that they’re minimizing the inputs to the operation while at the same time working to maximize the output.

That data is then going into the cloud, and they’re referencing it. For example, that track the tractor and the planter took through the field is being used to inform the sprayer. When the sprayer goes into the field after emergence, when the crops come out of the ground, it’s being used to inform that sprayer what the optimal path is to drive through the field in order to spray only what needs to be sprayed and no more, to damage the crop the least amount possible, all in an effort to optimize that productivity at the end of the year, to make that yield map that is [a] report card at the end of the year for the farmer, to make that turn out to have a better grade.

That’s a lot of data. Who collects it? Is John Deere collecting it? Can I hire a third-party SaaS software company to manage that data for me? How does that part work?

A significant amount of that data is collected on the fly while the machines are in the field, and it’s collected, in the case of Deere machines, by Deere equipment running through the field. There are other companies that create the data, and they can be imported into things like the Deere Operations Center so that you have the data from whatever source that you wanted to collect it from. I think the important thing there is historically, it’s been more difficult to get the data off the machine, because of connectivity limitations, into a database that you can actually do something with it.

Today, the disproportionate number of machines in large agriculture are connected. They’re connected through terrestrial cell networks. They’re streaming data bi-directionally to the cloud and back from the cloud. So that data connectivity infrastructure that’s been built out over the last decade has really enabled two-way communication, and it’s taken the friction out of getting the data off of a mobile piece of equipment. So it’s happening seamlessly for that operator. And that’s a benefit, because they can act on it then in more near real time, as opposed to having to wait for somebody to upload data at some point in the future.

Whose data is this? Is it the farmer’s data? Is it John Deere’s data? Is there a terms of service agreement for a combine? How does that work?

Certainly [there is] a terms of service agreement. Our position is pretty simple. It’s the farmer’s data. They control it. So if they want to share it through an API with somebody that is a trusted adviser from their perspective, they have the right to do that. If they don’t want to share it, they don’t have to do that. It is their data to control.

Is it portable? When I say there are “computer problems” here, can my tractor deliver me, for example, an Excel file?

They certainly can export the data in form factors that are convenient for them, and they do. Spreadsheet math is still routinely done on the farm, and then [they can] utilize the spreadsheet to do some basic data analytics if they want. I would tell you, though, that what’s happening is that the amount of data that is being collected and curated and made available to them to draw insights from is so massive that while you can still use spreadsheets to manipulate some of it, it’s just not tractable in all cases. So that’s why we’re building functionality into things like the Operations Center to help do data analytics and serve up insights to growers.

It’s their data. They can choose to look at the insights or not, but we can serve those insights up to them, because the data analysis part of this problem is becoming significantly larger because the datasets are so complex and large, not to mention the fact that you’ve got more data coming in all the time. Different sensors are being applied. We can measure different things. There [are] unique pieces of information that are coming in and routinely building to overall ecosystems of data that they have at their disposal.

We’ve talked a lot about the feedback loop of data with the machinery in particular. There’s one really important component to this, which is the seeds. There are a lot of seed manufacturers out in the world. They want this data. They have GMO seeds, they can adjust the seeds to different locations. Where do they come into the mix?

The data, from our perspective, is the farmer’s data. They’re the ones who are controlling the access to it. So if they want to share their data with someone, they have that ability to do it. And they do today. They’ll share their yield map with whoever their local seed salesman is and try to optimize the seed variety for the next planting season in the spring.

So that data exists. It’s not ours, so we’re not at liberty to share it with seed companies, and we don’t. It has to come through the grower because it’s their productivity data. They’re the ones that have the opportunity to share it. We don’t.

You do have a lot of data. Maybe you can’t share it widely, but you can aggregate it. You must have a very unique view of climate change. You must see where the foodways are moving, where different kinds of crops are succeeding and failing. What is your view of climate change, given the amount of data that you’re taking in?

The reality is for us that we’re hindered in answering that question by the recency of the data. So, broad-scale data acquisition from production agriculture is really only a five- to 10-year-old phenomenon. So the datasets are getting richer. They’re getting better.

We have the opportunity to see trends in that data across the datasets that exist today, but I think it’s too early. I don’t think the data is mature enough yet for us to be able to draw any conclusions from a climate change perspective with respect to the data that we have.

The other thing that I’ll add is that the data intensity is not universal across the globe. So if you think of climate change on a global perspective, we’ve got a lot of data for North America, a fair amount of data that gets taken by growers in Europe, a little bit in South America, but it’s not rich enough across the global agricultural footprint for us to be able to make any sort of statements about how climate change is impacting it right now.

Is that something you’re interested in doing?

Yes. I couldn’t predict when, but I think that the data will eventually be rich enough for insights to be drawn from it. It’s just not there yet.

Do you think about doing a fully electric tractor? Is that in your technology roadmap, that you’ve got to get rid of these diesel engines?

You’ve got to be interested in EVs right now. And the answer is yes. Whether it’s a tractor or whether it’s some other product in our product line, alternative forms of propulsion, alternative forms of power are definitely something that we’re thinking about. We’ve done it in the past with, I would say, hybrid solutions like a diesel engine driving an electric generator, and then the rest of the machine being electrified from a propulsion perspective.

But we’re just getting to the point now where battery technology, lithium-ion technology, is power-dense enough for us to see it starting to creep into our portfolio. Probably from the bottom up. Lower power density applications first, before it gets into some of the very large production ag equipment that we’ve talked about today.

What’s the timeline to a fully EV combine, do you think?

I think it’ll be a long time for a combine.

I picked the biggest thing I could, basically.

It has got to run 14, 15, 16 hours per day. It’s got a very short window to run in. You can’t take all day to charge it. Those sorts of problems, they’re not insurmountable. They’re just not solved by anything that’s on the roadmap today, from a lithium-ion perspective, anyway.

You and I are talking two days after Apple had its developers’ conference. Apple famously sells hardware, software, services, as an integrated solution. Do you think of John Deere’s equipment as integrated suites of hardware, software, and services, or is it a piece of hardware that spits off data, and then maybe you can buy our services, or maybe buy somebody else’s services?

I think it’s most efficient when we think of it collectively as a system. It doesn’t have to be that way, and one of the differences I would say to an Apple comparison would be the life of the product, the iron product in our case, the tractor or the combine, is measured in decades. It may be in service for a very long time, and so we have to take that into account as we think about the technology [and] apps that we put on top of it, which have a much shorter shelf life. They’re two, three, four, five years, and then they’re obsolete, and the next best thing has come along.

We have to think about the discontinuity that occurs between product buy cycles as a consequence of that. I do think it’s most efficient to think of it all together. It isn’t always necessarily that way. There are lots of farmers that run multi-colored fleets. It’s not Deere only. So we have to be able to provide an opportunity for them to get data off of whatever their product is into the environment that best enables them to make good decisions from it.

Is that how you characterize the competition, multi-colored fleets?

Absolutely, for sure. I would love the world to be completely [John Deere] green, but it’s not quite that way.

On my way to school every day in Wisconsin growing up, I drove by a Case plant. They’re red. John Deere is famously green, Case is red, International Harvester is yellow.

Yep. Case is red, Deere is green, and then there’s a rainbow of colors outside of those two for sure.

Who are your biggest competitors? And are they adopting the same business model as you? Is this an iOS versus Android situation, or is it widely different?

Our traditional competitors in the ag space, no surprise, you mentioned one of them. Case New Holland is a great example. AGCO would be another. I think everybody’s headed down the path of precision agriculture. [It’s] the term that is ubiquitous for where the industry’s headed.

I’m going to paint a picture for you: It’s this idea of enabling each individual plant in production agriculture to be tended to by a master gardener. The master gardener is in this case probably some AI that is enabling a farmer to know exactly what that particular plant needs, when it needs it, and then our equipment provides them the capability of executing on that plan that master gardener has created for that plant on an extremely large scale.

You’re talking about, in the case of corn, for example, 50,000 plants per acre, so a master gardener taking care of 50,000 plants for every acre of corn. That’s where this is headed, and you can picture the data intensity of that. Two hundred million acres of corn ground, times 50,000 plants per acre; each one of those plants is creating data, and that’s the enormity of the scale of production agriculture when you start to get to this plant-by-plant management basis.

Let’s talk about the enormity of the data and the amount of computation — that’s in tension with how long the equipment lasts. Are you upgrading the computers and the tractors every year, or are you just trying to pull the data into your cloud where you can do the intense computation you want to do?

It’s a combination of both, I would tell you. There are components within the vehicles that do get upgraded from time to time. The displays and the servers that operate in the vehicles do go through upgrade cycles within the existing fleet.

There’s enough appetite, Nilay, for technology in agriculture that we’re also seeing older equipment get updated with new technology. So it’s not uncommon today for a customer who’s purchased a John Deere planter that might be 10 years old to want the latest technology on that planter. And instead of buying a new planter, they might buy the upgrade kit for that planter that allows them to have the latest technology on the existing planter that they own. That sort of stuff is happening all the time across the industry.

I would tell you, though, that what is maybe different now versus 10 years ago is the amount of computation that happens in the cloud, to serve up this enormity of data in bite-sized forms and in digestible pieces that actually can be acted upon for the grower. Very little of that is done on-board machines today. Most of that is done off-board.

We cover rural broadband very heavily. There’s some real-time data collection happening here, but what you’re really talking about is that at the end of a session you’ve got a big asynchronous dataset. You want to send it off somewhere, have some computation done to it, and brought back to you so you can react to it.

What is your relationship to the connectivity providers, or to the Biden administration, that is trying to roll out a broadband plan? Are you pushing to get better networks for the next generation of your products, or are you kind of happy with where things are now?

We’re pro-rural broadband, and in particular newer technologies, 5G as an example. And it’s not just for agricultural purposes, let’s just be frank. There’s a ton of benefits that accrue to a society that’s connected with a sufficient network to do things like online schooling, in particular, coming through the pandemic that we’re in the midst of, and hopefully on the tail end of here. I think that’s just highlighted the use cases for connectivity in rural locations.

Agriculture is but one of those, but there’s some really cool feature unlocks that better connectivity, both in terms of coverage and in terms of bandwidth and latency, provide in agriculture. I’ll give you an example. You think of 5G and the ability to get to incredibly low latency numbers. It allows us to do some things from a computational perspective on the edge of the network that today we don’t have the capability to do. We either do it on-board the machine, or we don’t do it at all. So things like serving up the real-time location of where a farmer’s combine is, instead of having to route that data all the way to the cloud and then back to a handheld device that the farmer might have, wouldn’t it be great if we could do that math on the edge and just ping tower to tower and serve it back down and do it really, really quickly. Those are the sorts of use cases that open up when you get to talking about not just connectivity rurally, but 5G specifically, that are pretty exciting.

Are the networks in place to do all the things you want to do?

Globally, the answer is no. Within the US and Canadian markets, coverage improves every day. There are towers that are going up every day and we are working with our terrestrial cell coverage partners across the globe to expand coverage, and they’re responding. They see, generally, the need, in particular with respect to agriculture, for rural connectivity. They understand the power that it can provide [and] the efficiency that it can derive into food production globally. So they are incentivized to do that. And they’ve been good partners in this space. That said, they recognize that there are still gaps and there’s still a lot of ground to cover, literally in some cases, with connectivity solutions in rural locations.

You mentioned your partners. The parallels to a smartphone here are strong. Do you have different chipsets for AT&T and Verizon? Can you activate your AT&T plan right from the screen in the tractor? How does that work?

AT&T is our dominant partner in North America. That is our go-to, primarily from a coverage perspective. They’re the partner that we’ve chosen that I think serves our customers the best in the most locations.

Do you get free HBO Max if you sign up?

[laughs] Unfortunately, no.

They’re putting it everywhere. You have no idea.

For sure.

I look at the broadband gap everywhere. You mentioned schooling. We cover these very deep consumer needs. On the flip side, you need to run a lot of fiber to make 5G work, especially with the low latency that you’re talking about. You can’t have too many nodes in the way. Do you support millimeter wave 5G on a farm?

Yeah, it is something we’ve looked at. It’s intriguing. How you scale it is the question. I think if we could crack that nut, it would be really interesting.

Just for listeners, an example of millimeter wave if you’re unfamiliar — you’re standing on just the right street corner in New York City, you could get gigabit speeds to a phone. You cross the street, and it goes away. That does not seem tenable on a farm.

That’s right. Not all data needs to be transmitted at the same rate. Not to cover the broad acreage, but you can envision a case where potentially, when you come into range of millimeter wave, you dump a bunch of data all at once. And then when you’re out of range, you’re still collecting data and transmitting it slower perhaps. But having the ability to have millimeter wave type of bandwidth is pretty intriguing for being able to take opportunistic advantage of it when it’s available.

What’s something you want to do that the network isn’t there for you to do yet?

I think that the biggest piece is just a coverage answer from my perspective. We intentionally buffer data on the vehicle in places where we don’t have great coverage in order to wait until that machine has coverage, in order to send the data. But the reality is that means that a grower is waiting in some cases 30 minutes or an hour until the data is synced up in the cloud and something actionable has been done with it and it’s back down to them. And by that point in time, the decision has already been made. It’s not useful because it’s time sensitive. I think that’s probably the biggest gap that we have today. It’s not universal. It happens in pockets and in geographies, but where it happens, the need is real. And those growers don’t benefit as much as growers that do have areas of good coverage.

Is that improvement going as fast as you’d like? Is that a place where you’re saying to the Biden administration, whoever it might be, “Hey, we’re missing out on opportunities because there aren’t the networks we need to go faster.”

It is not going as fast as we would like, full stop. We should be moving faster in that space. Just to tease the thought out a little bit, maybe it’s not just terrestrial cell. Maybe it’s Starlink, maybe it’s a satellite-based type of infrastructure that provides that coverage for us in the future. But it’s certainly not moving at a pace that’s rapid enough for us, given the appetite for data that growers have and what they’ve seen as an ability for that data to significantly optimize their operations.

Have you talked to the Starlink folks?

We have. It’s super interesting. It’s an intriguing idea. The question for us is a mobile one. All of our devices are mobile. Tractors are driving around a field, combines are driving around a field. You get into questions around, what does the receiver need to look like in order to make that work? It’s an interesting idea at this point. I’m ever the optimist, glass-half-full sort of person. I think it’s conceivable that in the not too distant future, that could be a very viable option for some of these locations that are underserved with terrestrial connectivity today.

Walk me through the pricing model of a tractor. These things are very expensive. They’re hundreds of thousands of dollars. What is the recurring cost for an AT&T plan necessary to run that tractor? What is the recurring cost for your data services that you provide? How does that all break down?

Our data services are free today, interestingly enough. Free in the sense [of] the hosting of the data in the cloud and the serving up of that data through Operations Center. If you buy a piece of connected Deere equipment, that service is part of your purchase. I’ll just put it that way.

The recurring expense on the consumer side of things for the connectivity is not unlike what you would experience for a cell phone plan. It’s pretty similar. The difference is for large growers, it’s not just a single cell phone.

They might have 10, 15, 20 devices that are all connected. So we do what we can to make sure that the overhead associated with all of those different connected devices is minimized, but it’s not unlike what you’d experience with an iPhone or an Android device.

Do you have large growers in pockets where the connectivity is just so bad, they’ve had to resort to other means?

We have a multitude of ways of getting data off of mobile equipment. Cell is but one. We’re also able to take it off with Wi-Fi, if you can find a hotspot that you can connect to. Growers also routinely use a USB stick, when all else fails, that works regardless. So we make it possible no matter what their connectivity situation is to get the data off.

But to the point we already talked about, the less friction you’ve got in that system to get the data off, the more data you end up pushing. The more data you push, the more insights you can generate. The more insights you generate, the more optimal your operation is. So to the extent that you don’t have cell connectivity, we do see the intensity of the data usage, it tracks with connectivity.

So if your cloud services are free with the purchase of a connected tractor, is that built into the price or the lease agreement of the tractor for you on your P&L? You’re just saying, “We’re giving this away for free, but baking it into the price.”

Yep.

Can you buy a tractor without that stuff for cheaper?

You can buy products that aren’t connected that do not have a telematics gateway or the cell connection, absolutely. It is uncommon, especially in large ag. I would hesitate to throw a number at you at what the take rate is, but it’s standard equipment in all of our large agricultural products. That said, you can still get it without that if you need to.

How long until these products just don’t have steering wheels and seats and Sirius radios in them? How long until you have a fully autonomous farm?

I love that question. [With] a fully autonomous farm, you’ve got to draw some boundaries around it in order to make it digestible. I think we could have fully autonomous tractors in low single digit years. I’ll leave it a little bit gray just to let the mind wander a little bit.

Taking the cab completely off the tractor, I think, is a ways away, only because the tractor gets used for lots of things that it may not be programmed for, from an autonomous perspective, to do. It’s sort of a Swiss Army knife in a farm environment. But that operatorless operation in, say, fall tillage or spring planting, we’re right on the doorstep of that. We’re knocking on the door of being able to do it.

It’s due to some really interesting technology that’s come together all in one place at one time. It’s the confluence of high capability-compute onboard machines. So we’re putting GPUs on machines today to do vision processing that would blow your mind. Nvidia GPUs are not just for the gaming community or the autonomous car community. They’re happening on tractors and sprayers and things too. So that’s one stream of technology that’s coming together with advanced algorithms. Machine learning, reinforcement learning, convolutional neural networks, all of that going into being able to mimic the human sight capability from a mechanical and computational perspective. That’s come together to give us the ability to start seriously considering taking an operator out of the cab of the tractor.

One of the things that is different, though, for agriculture versus maybe the on-highway autonomous cars, is that tractors don’t just go from point A to point B. Their mission in life is not just to transport. It’s to do productive work. They’re pulling a tillage tool behind them or pulling a planter behind them planting seed. So we not only have to be able to automate the driving of the tractor, but we have to automate the function that it’s doing as well, and make sure that it’s doing a great job of doing the tillage operation that normally the farmer would be observing in the cab of the tractor. Now we have to do that and be able to ascertain whether or not that job quality that’s happening as a consequence of the tractor going through the field is meeting the requirements or not.

What’s the challenge there?

I think it’s the variety of jobs. In this case, let’s take the tractor example again — it’s not only is it doing the tillage right with this particular tillage tool, but a farmer might use three or four different tillage tools in their operation. They all have different use cases. They all require different artificial intelligence models to be trained and to be validated. So scaling out across all of those different conceivable operations, I think is the biggest challenge.

You mentioned GPUs. GPUs are hard to get right now.

Everything’s hard to get right now.

How is the chip shortage affecting you?

It’s impacting us. Weekly, I’m in conversations with semiconductor manufacturers trying to get the parts that we need. It is an ongoing battle. We had thought probably six or seven months ago, like everybody else, that it would be relatively short-term. But I think we’re into this for the next 12 to 18 months. I think we’ll come out of it as capacity comes online, but it’s going to take a little while before that happens.

I’ve talked to a few people about the chip shortage now. The best consensus I’ve gotten is that the problem isn’t at the state of the art. The problem is with older process nodes — five or 10-year-old technology. Is that where the problem is for you as well or are you thinking about moving beyond that?

It’s most acute with older tech. So we’ve got 16-bit chipsets that we’re still working with on legacy controllers that are a pain point. But that said, we’ve also got some really recent, modern stuff that is also a pain point. I was where your head is at three months ago. And then in the three months since, we’ve felt the pain everywhere.

When you say 18 months from now, is that you think there’s going to be more supply or you think the demand is going to tail off?

Supply is certainly coming online. [The] semiconductor industry is doing the right thing. They’re trying to bring capacity online to meet the demand. I would argue it’s just a classic bullwhip effect that’s happened in the marketplace. So I think that will happen. I think there’s certainly some behavior in the industry at the moment around what the demand side is. That’s made it hard for semiconductor manufacturers to understand what real demand is because there’s a panic situation in some respects in the marketplace at the moment.

That said, I think it’s clear there’s only one direction that semiconductor volume is going, and it’s going up. Everything is going to demand it moving forward and demand more of it. So I think once we work through the next 12 to 18 months and work through this sort of immediate and near-term issue, the semiconductor industry is going to have a better handle on things, but capacity has to go up in order to meet the demand. There’s no doubt about it. A lot of that demand is real.

Are you thinking, “Man, I have these 16-bit systems. We should rearchitect things to be more modular, to be more modern, and faster,” or are you saying, “Supply will catch up”?

No, very much the former. I would say two things. One, more prevalent in supply for sure. And then the second one is, easier to change when we need to change. There’s some tech debt that we’re continuing to battle against and pay off over time. And it’s times like these when it rises to the surface and you wish you’d made decisions a little bit differently 10 years ago or five years ago.

My father-in-law, my wife’s cousins, are all farmers up and down. A lot of John Deere hats in my family. I texted them all and asked what they wanted to know. All of them came back and said “right to repair” down the line. Every single one of them. That’s what they asked me to ask you about.

I set up this whole conversation to talk about these things as computers. We understand the problems of computers. It is notable to me that John Deere and Apple had the same effective position on right to repair, which is, we would prefer if you didn’t do it and you let us do it. But there’s a lot of pushback. There are right-to-repair bills in an ever-growing number of states. How do you see that playing out right now? People want to repair their tractors. It is getting harder and harder to do it because they’re computers and you control the parts.

It’s a complex topic, first and foremost. I think the first thing I would tell you is that we have and remain committed to enabling customers to repair the products that they buy. The reality is that 98 percent of the repairs that customers want to do on John Deere products today, they can do. There’s nothing that prohibits them from doing them. Their wrenches are the same size as our wrenches. That all works. If somebody wants to go repair a diesel engine in a tractor, they can tear it down and fix it. We make the service manuals available. We make the parts available, we make the how-to available for them to tear it down to the ground and build it back up again.

That is not really what I’ve heard. I hear that a sensor goes off, the tractor goes into what people call “limp mode.” They have to bring it into a service center. They need a John Deere-certified laptop to pull the codes and actually do that work.

The diagnostic trouble codes are pushed out onto the display. The customer can see what those diagnostic trouble codes are. They may not understand or be able to connect what that sensor issue is with a root cause. There may be an underlying root cause that’s not immediately obvious to the customer based upon the fault code, but the fault code information is there. There is expertise that exists within the John Deere dealer environment, because they’ve seen those issues over time that allows them to understand what the probable cause is for that particular issue. That said, anybody can go buy the sensor. Anybody can go replace it. That’s just a reality.

There is, though, this 2 percent-ish of the repairs that occur on equipment today [that] involve software. And to your point, they’re computer environments that are driving around on wheels. So there is a software component to them. Where we differ with the right-to-repair folks is that software, in many cases, it’s regulated. So let’s take the diesel engine example. We are required, because it’s a regulated emissions environment, to make sure that diesel engine performs at a certain emission output, nitrous oxide, particulate matter, etc., and so on. Modifying software changes that. It changes the output characteristics of the emissions of the engine and that’s a regulated device. So we’re pretty sensitive to changes that would impact that. And disproportionately, those are software changes. Like going in and changing governor gain scheduling, for example, on a diesel engine would have a negative consequence on the emissions that [an] engine produces.

The same argument would apply in brake-by-wire and steer-by-wire. Do you really want a tractor going down the road with software on it that has been modified for steering or modified for braking in some way that might have a consequence that nobody thought of? We know the rigorous nature of testing that we go through in order to push software out into a production landscape. We want to make sure that that product is as safe and reliable and performs to the intended expectations of the regulatory environment that we operate in.

But people are doing it anyway. That’s the real issue here. Again, these are computer problems. This is what I hear from Apple about repairing your own iPhone. Here’s the device with all your data on it that’s on the network. Do you really want to run unsupported software on it? The valence of the debate feels the same to me.

At the same time though, is it their tractor or is it your tractor? Shouldn’t I be allowed to run whatever software I want on my computer?

I think the difference with the Apple argument is that the iPhone isn’t driving down the road at 20 miles an hour with oncoming traffic coming at it. There’s a seriousness of the change that you could make to a product. These things are large. They cost a lot of money. It’s a 40,000-pound tractor going down the road at 20 miles an hour. Do you really want to expose untested, unplanned, unknown introductions of software into a product like that that’s out in the public landscape?

But they were doing it mechanically before. Making it computerized allows you to control that behavior in a way that you cannot on a purely mechanical tractor. I know there are a lot of farmers who did dumb stuff with their mechanical tractors and that was just part of the ecosystem.

Sure. I grew up on one of those. I think the difference there is that the system is so much more complicated today, in part because of software, that it’s not always evident immediately if I make a change here, what it’s going to produce over there. When it was all mechanical, I knew, if I changed the size of the tires or the steering linkage geometry, what was going to happen. I could physically see it and the system was self-contained because it was a mechanical-only system.

I think when we’re talking about a modern piece of equipment and the complexity of the system, it’s a ripple effect. You don’t know what a change that you make over here is going to impact over there any longer. It’s not intuitively obvious to somebody who would make a change in isolation to software, for example, over here. It is a tremendously complex problem. It’s one that we’ve got a tremendously large organization that’s responsible for understanding that complete system and making sure that when the product is produced, that it is reliable and it is safe and it does meet emissions and all of those things.

I look at some of the coverage and there are farmers who are downloading software of unknown provenance that can hack around some of the restrictions. Some of that software appears to be coming from groups in the Ukraine. They’re now using other software to get around the restrictions that, in some cases, could make it even worse, and lead to other unintended consequences, whereas providing the opportunities or making that more official might actually solve some of those problems in a more straightforward way.

I think we’ve taken steps to try to help. One of those is customer service. Service Advisor is the John Deere software that a dealership would use in order to diagnose and troubleshoot equipment. We’ve made available the customer version of Service Advisor as well in order to provide some of the ability for them to have insights — to your point about fault codes before — insights into what are those issues, and what can I learn about them as a customer? How might I go about fixing them? There have been efforts underway in order to try to bridge some of that gap to the extent possible.

We are, though, not in a position where we would ever condone or support a third-party software being put on products of ours, because we just don’t know what the consequences of that are going to be. It’s not something that we’ve tested. We don’t know what it might make the equipment do or not do. And we don’t know what the long-term impacts of that are.

I feel like a lot of people listening to the show own a car. I’ve got a pickup truck. I can go buy a device that will upload a new tune for my Ford pickup truck’s engine. Is that something you can do to a John Deere tractor?

There are third-party outfits that will do exactly that to a John Deere engine. Yep.

But can you do that yourself?

I suspect if you had the right technical knowledge, you could probably figure out a way to do it yourself. If a third-party company figured it out, there is a way for a consumer to do it too.

Where’s the line? Where do you think your control of the system ends and the consumer’s begins? I ask that because I think that might be the most important question in computing right now, just broadly across every kind of computer in our lives. At some point, the manufacturer is like, “I’m still right here with you and I’m putting a line in front of you.” Where’s your line?

We talked about the corner cases, the use cases I think that for us are the lines. They’re around the regulated environment from an emissions perspective. We’ve got a responsibility when we sell a piece of equipment to make sure that it’s meeting the regulatory environment that we sold it into. And then I think the other one is in and around safety, critical systems, things that they can impact others in the environment that, again, in a regulated fashion, we have a responsibility to produce a product that meets the requirements that the regulatory environment requires.

Not only that, but I think there’s a societal responsibility, frankly, that we make sure that the product is as safe as it can be for as long as it can be in operation. And those are where I think we spend a lot of time talking about what amounts to a very small part of the repair of a product. The statistics are real: 98 percent of the repairs that happen on a product can be done by a customer today. So we’re talking about a very small number of them, but they tend to be around those sort of sensitive use cases, regulatory and safety.

Right to Repair legislation is very bipartisan. You’re talking about big commercial operations in a lot of states. It’s America. It’s apple pie and corn farmers. They have a lot of political weight and they’re able to make a very bipartisan push, which is pretty rare in this country right now. Is that a signal you see as, “Oh man, if we don’t get this right, the government is coming for our products?”

I think the government’s certainly one voice in this, and it’s stemming from feedback from some customers. Obviously you’ve done your own bit of work across the farmers in your family. So it is a topic that is being discussed for sure. And we’re all in favor of that discussion, by the way. I think that what we want to make sure of is that it’s an objective discussion. There are ramifications across all dimensions of this. We want to make sure that those are well understood, because it’s such an important topic and has significant enough consequences, so we want to make sure we get it right. The unintended consequences of this are not small. They will impact the industry, some of them in a negative way. And so we just want to make sure that the discussion is objective.

The other signal I’d ask you about is that prices of pre-computer tractors are skyrocketing. Maybe you see that a different way, but I’m looking at some coverage that says old tractors, pre-1990 tractors, are selling for double what they were a year or two ago. There are incredible price hikes on these old tractors. And that the demand is there because people don’t want computers in their tractors. Is that a market signal to you, that you should change the way your products work? Or are you saying, “Well, eventually those tractors will die and you won’t have a choice except to buy one of the new products”?

I think the benefits that accrue from technology are significant enough for consumers. We see this happening with the consumer vote by dollar, by what they purchase. Consumers are continuing to purchase higher levels of technology as we go on. So while yes, the demand for older tractors has gone up, in part it’s because the demand for tractors has gone up completely. Our own technology solutions, we’ve seen upticks in take rates year over year over year over year. So if people were averse to technology, I don’t think you’d see that. At some point we have to recognize that the benefits that technology brings outweigh the downsides of the technology. I think that’s just this part of the technology adoption curve that we’re all on.

That’s the same conversation around smartphones. I get it with smartphones. Everyone has them in their pocket. They collect all this personal data. You may want a gatekeeper there because you don’t have a sophisticated user base.

Your customers are very self-interested, commercial customers.

Yep.

Do you think you have a different kind of responsibility than, I don’t know, the Xbox Live team has to the Xbox Live community? In terms of data, in terms of control, in terms of relinquishing control of the product once it’s sold.

It certainly is a different market. It’s a different customer base. It’s a different clientele. To your point, they are dependent upon the product for their livelihood. So we do everything we can to make sure that product is reliable. It produces when it needs to produce in order to make sure that their businesses are productive and sustainable. I do think the biggest difference from the consumer market that you referenced to our market is the technology life cycle that we’re on.

You brought up tractors that are 20 years old that don’t have a ton of computers on-board versus what we have today. But what we have today is significantly more efficient than what we had 20 years ago. The tractors that you referenced are still in the market. People are still using them. They’re still putting them to work, productive work. In fact, on my family farm, they’re still being used for productive work. And I think that’s what’s different between the consumer market and the ag market. We don’t have a disposable product. You don’t just pick it up and throw it away. We have to be able to plan for that technology use across decades as opposed to maybe single-digit years.

In terms of the benefits of technology and selling that through, one of the other questions I got from the folks in my family was about the next thing that technology can enable. It seems like the equipment can’t physically get much bigger. The next thing to tackle is speed — making things faster for increased productivity.

Is that how you think about selling the benefits of technology — now the combine is as big as it can be, and it’s efficient at this massive scale. Is the next step to make it more efficient in terms of speed?

You’ve seen the industry trend that way. You look at planting as a great example. Ten years ago, we planted at three miles an hour. Today, we plant at 10 miles an hour. And what enabled that was technology. It was electric motors on row units that can react really, really quickly, that are highly controllable and can place seed really, really accurately, right? I think that’s the trend. Wisconsin’s a great place to talk about it. Whether it’s a row crop farm, there’s a small window in the spring, a couple of weeks, where it’s optimal to get those crops in the ground. And so it’s an insurance policy to be able to go faster because the weather may not be great for both of those weeks that you’ve got that are optimal planning weeks. And so you may only have three days or four days in that 10-day window in order to plant all your crops.

And speed is one way to make sure that that happens. Size and the width of the machine is the other. I would agree that we’ve gotten to the point where there’s very little opportunity left in going bigger, and so going faster and, I would argue, going more intelligently, is the way that you improve productivity in the future.

So we’ve talked about a huge set of responsibilities, everything from the physical mechanical design of the machinery to building cloud services, to geopolitics. What is your decision-making process? What’s your framework for how you make decisions?

I think at the root of it, we try to drive everything back to a customer and what we can do to make that customer more productive and more sustainable. And that helps us triage. Of all the great ideas that are out there, all the things that we could work on, what are the things that can move the needle for a customer in their operation as much as possible? And I think that grounding in the customer and the customer’s business is important because, fundamentally, our business is dependent upon the farmer’s business. If the farmer does well, we do well. If the farmer doesn’t do well, we don’t do well. We’re intertwined. There’s a connection there that you can’t and shouldn’t separate.

So driving our decision-making process towards having an intimate knowledge of the customer’s business and what we can do to make their business better frames everything we do.

What’s next for John Deere? What is the short term future for precision farming? Give me a five-year prediction.

I’m super excited about what we’re calling “sense and act.” “See and spray” is the first down payment on that. It’s the ability to create, in software and through electronic and mechanical devices, the human sense of sight, and then act on it. So we’re separating, in this case, weeds from useful crop, and we’re only spraying the weeds. That reduces herbicide use within a field. It reduces the cost for the farmer, input cost into their operation. It’s a win-win-win. And it is step one in the sense-and-act trajectory or sense-and-act runway that we’re on.

There’s a lot more opportunity for us in agriculture to do more sensing and acting, and doing that in an optimal way so that we’re not painting the same picture across a complete field, but doing it more prescriptively and acting more prescriptively in areas of a field that demand different things. I think that sense-and-act type of vision is the roadmap that we’re on. There’s a ton of opportunity in there. It is technology-intensive because you’re talking sensors, you’re talking computers, and you’re talking acting with precision. All of those things require fundamental shifts in technology from where we’re at today.

Source: https://www.theverge.com/22533735/john-deere-cto-hindman-decoder-interview-right-to-repair-tractors

How Apple and Google Are Enabling Covid-19 Contact-Tracing

Source: https://www.wired.com/story/apple-google-bluetooth-contact-tracing-covid-19/

The tech giants have teamed up to use a Bluetooth-based framework to keep track of the spread of infections without compromising location privacy.
a man walking in the street in boston
The companies chose to skirt privacy pitfalls and implement a system that collects no location data.Photograph: Craig F. Walker/Boston Globe/Getty Images

Since Covid-19 began its spread across the world, technologists have proposed using so-called contact-tracing apps to track infections via smartphones. Now, Google and Apple are teaming up to give contact-tracers the ingredients to make that system possible—while in theory still preserving the privacy of those who use it.

On Friday, the two companies announced a rare joint project to create the groundwork for Bluetooth-based contact-tracing apps that can work across both iOS and Android phones. In mid-May, they plan to release an application programming interface that apps from public health organizations can tap into. The API will let those apps use a phone’s Bluetooth radios—which have a range of about 30 feet—to keep track of whether a smartphone’s owner has come into contact with someone who later turns out to have been infected with Covid-19. Once alerted, that user can then self-isolate or get tested themselves.

Crucially, Google and Apple say the system won’t involve tracking user locations or even collecting any identifying data that would be stored on a server. „This is a very unprecedented situation for the world,“ said one of the joint project’s spokespeople in a phone call with WIRED. „As platform companies we’ve both been thinking hard about what we can do to help get people back to normal life and back to work effectively. We think in bringing the two platforms together we can solve digital contact tracing at scale in partnership with public health authorities and do it in a privacy-preserving way.“

Unlike Apple, which has complete control over its software and hardware and can push system-wide changes with relative ease, Google faces a fragmented Android ecosystem. The company will still make the framework available to all devices running Android 6.0 or higher by delivering the update through Google Play Services, which does not require hardware partners to sign off.

Several projects, including ones led by developers at MIT, Stanford, and the governments of Singapore and Germany, have already proposed, and in some cases implemented, similar Bluetooth-based contact-tracing systems. Google and Apple declined to say which specific groups or government agencies they’ve been working with. But they argue that by building operating-level functions those applications can tap into, the apps will be far more effective and energy efficient. Most importantly, they’ll be interoperable between the two dominant smartphone platforms.

In the version of the system set to roll out next month, the operating-system-level Bluetooth tracing would allow users to opt in to a Bluetooth-based proximity-detection scheme when they download a contact-tracing app. Their phone would then constantly ping out Bluetooth signals to others nearby while also listening for communications from nearby phones.

If two phones spend more than a few minutes within range of one another, they would each record contact with the other phone, exchanging unique, rotating identifier “beacon” numbers that are based on keys stored on each device. Public heath app developers would be able to „tune“ both the proximity and the amount of time necessary to qualify as a contact based on current information about how Covid-19 spreads.

If a user is later diagnosed with Covid-19, they would alert their app with a tap. The app would then upload their last two weeks of keys to a server, which would then generate their recent “beacon” numbers and send them out to other phones in the system. If someone else’s phone finds that one of these beacon numbers matches one stored on their phone, they would be notified that they’ve been in contact with a potentially infected person and given information about how to help prevent further spread.

graph with illustrations of phones and humans
Courtesy of Google
graph with illustrations of phones and humans
Courtesy of Google

The advantage of that system, in terms of privacy, is that it doesn’t depend on collecting location data. „People’s identities aren’t tied to any contact events,“ said Cristina White, a Stanford computer scientist who described a very similar Bluetooth-based contact tracing project known as Covid-Watch to WIRED last week. „What the app uploads instead of any identifying information is just this random number that the two phones would be able to track down later but that nobody else would, because it’s stored locally on their phones.“

Until now, however, Bluetooth-based schemes like the one White described suffered from how Apple limits access to Bluetooth when apps run in the background of iOS, a privacy and power-saving safeguard. It will lift that restriction specifically for contact-tracing apps. And Apple and Google say that the protocol they’re releasing will be designed to use minimal power to save phones‘ battery lives. „This thing has to run 24-7, so it has to really only sip the battery life,“ said one of the project’s spokespeople.

In a second iteration of the system rolling out in June, Apple and Google say they’ll allow users to enable Bluetooth-based contact-tracing even without an app installed, building the system into the operating systems themselves. This would be opt-in as well. But while the phones would exchange „beacon“ numbers via Bluetooth, users would still need to download a contact-tracing app to either declare themselves as Covid-19 positive or to learn if someone they’ve come into contact with was diagnosed.

Google and Apple’s Bluetooth-based system has some significant privacy advantages over GPS-based location-tracking systems that have been proposed by other researchers including at MIT, the University of Toronto, McGill, and Harvard. Since those systems collect location data, they would require complex cryptographic systems to avoid collecting information about users‘ movements that could potentially expose highly personal information, from political dissent to extramarital affairs.

With Google and Apple’s announcement, it’s clear that the companies chose to skirt those privacy pitfalls and implement a system that collects no location data. „It looks like we won,“ says Stanford’s White, whose Covid-Watch project, part of a consortium of projects using a Bluetooth-based system, had advocated for the Bluetooth-only approach. „It’s clear from the API that it was influenced by our work. It’s following the exact suggestions from our engineers about how implement it.“

Sticking to Bluetooth alone doesn’t guarantee the system won’t violate users’ privacy, White notes. Although Google and Apple say they’ll only upload anonymous identifiers from users’ phones, a server could nonetheless identify Covid-19 users in other ways, such as based on their IP address. The organization running a given app still needs to act responsibly. “Exactly what they’re proposing for the backend still isn’t clear, and that’s really important,” White says. “We need to keep advocating to make sure this is done properly and the server isn’t collecting information it shouldn’t.”

Even with Bluetooth tracing, the app still faces some practical challenges. First, it would need significant adoption and broad willingness to share Covid-19 infection information to work. And it will also require a safeguard that only allows users to declare themselves Covid-19 positive after a healthcare provider has officially diagnosed them, so that the system isn’t overrun with false positives. Covid-Watch, for instance, would require the user to get a confirmation code from a health care provider.

Bluetooth-based systems, in contrast with location-based systems, also have some problems of their own. If someone leaves behind traces of the novel coronavirus on a surface, for instance, someone can be infected by it without their phones ever being in proximity.

A spokesperson for the Google and Apple project didn’t deny that possibility, but argued that those cases of „environmental transmission“ are relatively rare compared to direct transmission from people in proximity of each other. „This won’t cut every chain of every transmission,“ the spokesperson said. „But if you cut enough of them, you modulate the transmission enough to flatten the curve.“

 

Why outbreaks like coronavirus spread exponentially, and how to “flatten the curve”

FREE-FOR-ALL VS. ATTEMPTED QUARANTINE

MODERATE SOCIAL DISTANCING vs. EXTENSIVE SOCIAL DISTANCING

This so-called exponential curve has experts worried. If the number of cases were to continue to double every three days, there would be about a hundred million cases in the United States by May.

That is math, not prophecy. The spread can be slowed, public health professionals say, if people practice “social distancing” by avoiding public spaces and generally limiting their movement.

Still, without any measures to slow it down, covid-19 will continue to spread exponentially for months. To understand why, it is instructive to simulate the spread of a fake disease through a population.

We will call our fake disease simulitis. It spreads even more easily than covid-19: whenever a healthy person comes into contact with a sick person, the healthy person becomes sick, too.

In a population of just five people, it did not take long for everyone to catch simulitis.

In real life, of course, people eventually recover. A recovered person can neither transmit simulitis to a healthy person nor become sick again after coming in contact with a sick person.

Let’s see what happens when simulitis spreads in a town of 200 people. We will start everyone in town at a random position, moving at a random angle, and we will make one person sick.

Notice how the slope of the red curve, which represents the number of sick people, rises rapidly as the disease spreads and then tapers off as people recover.

Our simulation town is small — about the size of Whittier, Alaska — so simulitis was able to spread quickly across the entire population. In a country like the United States, with its 330 million people, the curve could steepen for a long time before it started to slow.

[Mapping the spread of the coronavirus in the U.S. and worldwide]

When it comes to the real covid-19, we would prefer to slow the spread of the virus before it infects a large portion of the U.S. population. To slow simulitis, let’s try to create a forced quarantine, such as the one the Chinese government imposed on Hubei province, covid-19’s ground zero.

Whoops! As health experts would expect, it proved impossible to completely seal off the sick population from the healthy.

Leana Wen, the former health commissioner for the city of Baltimore, explained the impracticalities of forced quarantines to The Washington Post in January. “Many people work in the city and live in neighboring counties, and vice versa,“ Wen said. “Would people be separated from their families? How would every road be blocked? How would supplies reach residents?”

As Lawrence O. Gostin, a professor of global health law at Georgetown University, put it: “The truth is those kinds of lockdowns are very rare and never effective.”

Fortunately, there are other ways to slow an outbreak. Above all, health officials have encouraged people to avoid public gatherings, to stay home more often and to keep their distance from others. If people are less mobile and interact with each other less, the virus has fewer opportunities to spread.

Some people will still go out. Maybe they cannot stay home because of their work or other obligations, or maybe they simply refuse to heed public health warnings. Those people are not only more likely to get sick themselves, they are more likely to spread simulitis, too.

Let’s see what happens when a quarter of our population continues to move around while the other three quarters adopt a strategy of what health experts call “social distancing.”

More social distancing keeps even more people healthy, and people can be nudged away from public places by removing their allure.

“We control the desire to be in public spaces by closing down public spaces. Italy is closing all of its restaurants. China is closing everything, and we are closing things now, too,” said Drew Harris, a population health researcher and assistant professor at The Thomas Jefferson University College of Public Health. “Reducing the opportunities for gathering helps folks social distance.”

To simulate more social distancing, instead of allowing a quarter of the population to move, we will see what happens when we let just one of every eight people move.

The four simulations you just watched — a free-for-all, an attempted quarantine, moderate social distancing and extensive social distancing — were random. That means the results of each one were unique to your reading of this article; if you scroll up and rerun the simulations, or if you revisit this page later, your results will change.

Even with different results, moderate social distancing will usually outperform the attempted quarantine, and extensive social distancing usually works best of all. Below is a comparison of your results.

Finishing simulations…

Simulitis is not covid-19, and these simulations vastly oversimplify the complexity of real life. Yet just as simulitis spread through the networks of bouncing balls on your screen, covid-19 is spreading through our human networks — through our countries, our towns, our workplaces, our families. And, like a ball bouncing across the screen, a single person’s behavior can cause ripple effects that touch faraway people.

[What you need to know about coronavirus]

In one crucial respect, though, these simulations are nothing like reality: Unlike simulitis, covid-19 can kill. Though the fatality rate is not precisely known, it is clear that the elderly members of our community are most at risk of dying from covid-19.

“If you want this to be more realistic,” Harris said after seeing a preview of this story, “some of the dots should disappear.”

A Deep Dive Into the Technology of Corporate Surveillance

December 2, 2019

By Bennett Cyphers and Gennie Gebhart

Introduction

Trackers are hiding in nearly every corner of today’s Internet, which is to say nearly every corner of modern life. The average web page shares data with dozens of third-parties. The average mobile app does the same, and many apps collect highly sensitive information like location and call records even when they’re not in use. Tracking also reaches into the physical world. Shopping centers use automatic license-plate readers to track traffic through their parking lots, then share that data with law enforcement. Businesses, concert organizers, and political campaigns use Bluetooth and WiFi beacons to perform passive monitoring of people in their area. Retail stores use face recognition to identify customers, screen for theft, and deliver targeted ads.

The tech companies, data brokers, and advertisers behind this surveillance, and the technology that drives it, are largely invisible to the average user. Corporations have built a hall of one-way mirrors: from the inside, you can see only apps, web pages, ads, and yourself reflected by social media. But in the shadows behind the glass, trackers quietly take notes on nearly everything you do. These trackers are not omniscient, but they are widespread and indiscriminate. The data they collect and derive is not perfect, but it is nevertheless extremely sensitive.

This paper will focus on corporate “third-party” tracking: the collection of personal information by companies that users don’t intend to interact with. It will shed light on the technical methods and business practices behind third-party tracking. For journalists, policy makers, and concerned consumers, we hope this paper will demystify the fundamentals of third-party tracking, explain the scope of the problem, and suggest ways for users and legislation to fight back against the status quo.

Part 1 breaks down “identifiers,” or the pieces of information that trackers use to keep track of who is who on the web, on mobile devices, and in the physical world. Identifiers let trackers link behavioral data to real people.

Part 2 describes the techniques that companies use to collect those identifiers and other information. It also explores how the biggest trackers convince other businesses to help them build surveillance networks.

Part 3 goes into more detail about how and why disparate actors share information with each other. Not every tracker engages in every kind of tracking. Instead, a fragmented web of companies collect data in different contexts, then share or sell it in order to achieve specific goals.

Finally, Part 4 lays out actions consumers and policy makers can take to fight back. To start, consumers can change their tools and behaviors to block tracking on their devices. Policy makers must adopt comprehensive privacy laws to rein in third-party tracking.

Contents

Introduction
First-party vs. third-party tracking
What do they know?
Part 1: Whose Data is it Anyway: How Do Trackers Tie Data to People?
Identifiers on the Web
Identifiers on mobile devices
Real-world identifiers
Linking identifiers over time
Part 2: From bits to Big Data: What do tracking networks look like?
Tracking in software: Websites and Apps
Passive, real-world tracking
Tracking and corporate power
Part 3: Data sharing: Targeting, brokers, and real-time bidding
Real-time bidding
Group targeting and look-alike audiences
Data brokers
Data consumers
Part 4: Fighting back
On the web
On mobile phones
IRL
In the legislature

First-party vs. third-party tracking

The biggest companies on the Internet collect vast amounts of data when people use their services. Facebook knows who your friends are, what you “Like,” and what kinds of content you read on your newsfeed. Google knows what you search for and where you go when you’re navigating with Google Maps. Amazon knows what you shop for and what you buy.

The data that these companies collect through their own products and services is called “first-party data.” This information can be extremely sensitive, and companies have a long track record of mishandling it. First-party data is sometimes collected as part of an implicit or explicit contract: choose to use our service, and you agree to let us use the data we collect while you do. More users are coming to understand that for many free services, they are the product, even if they don’t like it.

However, companies collect just as much personal information, if not more, about people who aren’t using their services. For example, Facebook collects information about users of other websites and apps with its invisible “conversion pixels.” Likewise, Google uses location data to track user visits to brick and mortar stores. And thousand of other data brokers, advertisers, and other trackers lurk in the background of our day-to-day web browsing and device use. This is known as “third-party tracking.” Third-party tracking is much harder to identify without a trained eye, and it’s nearly impossible to avoid completely.

What do they know?

Many consumers are familiar with the most blatant privacy-invasive potential of their devices. Every smartphone is a pocket-sized GPS tracker, constantly broadcasting its location to parties unknown via the Internet. Internet-connected devices with cameras and microphones carry the inherent risk of conversion into silent wiretaps. And the risks are real: location data has been badly abused in the past. Amazon and Google have both allowed employees to listen to audio recorded by their in-home listening devices, Alexa and Home. And front-facing laptop cameras have been used by schools to spy on students in their homes.

But these better known surveillance channels are not the most common, or even necessarily the most threatening to our privacy. Even though we spend many of our waking hours in view of our devices’ Internet-connected cameras, it’s exceedingly rare for them to record anything without a user’s express intent. And to avoid violating federal and state wiretapping laws, tech companies typically refrain from secretly listening in on users’ conversations. As the rest of this paper will show, trackers learn more than enough from thousands of less dramatic sources of data. The unsettling truth is that although Facebook doesn’t listen to you through your phone, that’s just because it doesn’t need to.

The most prevalent threat to our privacy is the slow, steady, relentless accumulation of relatively mundane data points about how we live our lives. This includes things like browsing history, app usage, purchases, and geolocation data. These humble parts can be combined into an exceptionally revealing whole. Trackers assemble data about our clicks, impressions, taps, and movement into sprawling behavioral profiles, which can reveal political affiliation, religious belief, sexual identity and activity, race and ethnicity, education level, income bracket, purchasing habits, and physical and mental health.

Despite the abundance of personal information they collect, tracking companies frequently use this data to derive conclusions that are inaccurate or wrong. Behavioral advertising is the practice of using data about a user’s behavior to predict what they like, how they think, and what they are likely to buy, and it drives much of the third-party tracking industry. While behavioral advertisers sometimes have access to precise information, they often deal in sweeping generalizations and “better than nothing” statistical guesses. Users see the results when both uncannily accurate and laughably off-target advertisements follow them around the web. Across the marketing industry, trackers use petabytes of personal data to power digital tea reading. Whether trackers’ inferences are correct or not, the data they collect represents a disproportionate invasion of privacy, and the decisions they make based on that data can cause concrete harm.

Part 1: Whose Data is it Anyway: How Do Trackers Tie Data to People?

Most third-party tracking is designed to build profiles of real people. That means every time a tracker collects a piece of information, it needs an identifier—something it can use to tie that information to a particular person. Sometimes a tracker does so indirectly: by correlating collected data with a particular device or browser, which might in turn later be correlated to one person or perhaps a small group of people like a household.

To keep track of who is who, trackers need identifiers that are unique, persistent, and available. In other words, a tracker is looking for information (1) that points only to you or your device, (2) that won’t change, and (3) that it has easy access to. Some potential identifiers fit all three of these requirements, but trackers can still make use of an identifier that checks only two of these three boxes. And trackers can combine multiple weak identifiers to create a single, strong one.

An identifier that checks all three boxes might be a name, an email, or a phone number. It might also be a “name” that the tracker itself gives you, like “af64a09c2” or “921972136.1561665654”. What matters most to the tracker is that the identifier points to you and only you. Over time, it can build a rich enough profile about the person known as “af64a09c2”—where they live, what they read, what they buy—that a conventional name is not necessary. Trackers can use artificial identifiers, like cookies and mobile ad IDs, to reach users with targeted messaging. And data that isn’t tied to a real name is no less sensitive: “anonymous” profiles of personal information can nearly always be linked back to real people.

Some types of identifiers, like cookies, are features built into the tech that we use. Others, like browser fingerprints, emerge from the way those technologies work. This section will break down how trackers on the web and in mobile apps are able to identify and attribute data points.

This section will describe a representative sample of identifiers that third-party trackers can use. It is not meant to be exhaustive; there are more ways for trackers to identify users than we can hope to cover, and new identifiers will emerge as technology evolves. The tables below give a brief overview of how unique, persistent, and available each type of identifier is.

Web Identifiers Unique Persistent Available
Cookies Yes Until user deletes In some browsers without tracking protection
IP address Yes On the same network, may persist for weeks or months Always
TLS state Yes For up to one week In most browsers
Local storage super cookie Yes Until user deletes Only in third-party IFrames; can be blocked by tracker blockers
Browser fingerprint Only on certain browsers Yes Almost always; usually requires JavaScript access, sometimes blocked by tracker blockers

 

Phone Identifiers Unique Persistent Available
Phone number Yes Until user changes Readily available from data brokers; only visible to apps with special permissions
IMSI and IMEI number Yes Yes Only visible to apps with special permissions
Advertising ID Yes Until user resets Yes, to all apps
MAC address Yes Yes To apps: only with special permissionsTo passive trackers: visible unless OS performs randomization or device is in airplane mode

 

Other Identifiers Unique Persistent Available
License plate Yes Yes Yes
Face print Yes Yes Yes
Credit card number Yes Yes, for months or years To any companies involved in payment processing

Identifiers on the Web

Browsers are the primary way most people interact with the Web. Each time you visit a website, code on that site may cause your browser to make dozens or even hundreds of requests to hidden third parties. Each request contains several pieces of information that can be used to track you.

Anatomy of a Request

Almost every piece of data transmitted between your browser and the servers of the websites you interact with occurs in the form of an HTTP request. Basically, your browser asks a web server for content by sending it a particular URL. The web server can respond with content, like text or an image, or with a simple acknowledgement that it received your request. It can also respond with a cookie, which can contain a unique identifier for tracking purposes.

Each website you visit kicks off dozens or hundreds of different requests. The URL you see in the address bar of your browser is the address for the first request, but hundreds of other requests are made in the background. These requests can be used for loading images, code, and styles, or simply for sharing data.

A diagram depicting the various parts of a URL

Parts of a URL. The domain tells your computer where to send the request, while the path and parameters carry information that may be interpreted by the receiving server however it wants.

The URL itself contains a few different pieces of information. First is the domain, like “nytimes.com”. This tells your browser which server to connect to. Next is the path, a string at the end of the domain like “/section/world.html”. The server at nytimes.com chooses how to interpret the path, but it usually specifies a piece of content to serve—in this case, the world news section. Finally, some URLs have parameters at the end in the form of “?key1=value1&key2=value2”. The parameters usually carry extra information about the request, including queries made by the user, context about the page, and tracking identifiers.

A computer sending a single request to a website at "eff.org."

The path of a request. After it leaves your machine, the request is redirected by your router to your ISP, which sends it through a series of intermediary routing stations in “the Internet.” Finally, it arrives at the server specified by the domain, which can decide how (or if) to respond.

The URL isn’t all that gets sent to the server. There are also HTTP headers, which contain extra information about the request like your device’s language and security settings, the “referring” URL, and cookies. For example, the User-Agent header identifies your browser type, version, and operating system. There’s also lower-level information about the connection, including IP address and shared encryption state. Some requests contain even more configurable information in the form of POST data. POST requests are a way for websites to share chunks of data that are too large or unwieldy to fit in a URL. They can contain just about anything.

Some of this information, like the URL and POST data, is specifically tailored for each individual request; other parts, like your IP address and any cookies, are sent automatically by your machine. Almost all of it can be used for tracking.

A URL bar and the data that’s sent along with a website request.

Data included with a background request. In the image, although the user has navigated to fafsa.gov, the page triggers a third-party request to facebook.com in the background. The URL isn’t the only information that gets sent to the receiving server; HTTP Headers contain information like your User Agent string and cookies, and POST data can contain anything that the server wants.

The animation immediately above contains data we collected directly from a normal version of Firefox. If you want to check it out for yourself, you can. All major browsers have an “inspector” or “developer” mode which allows users to see what’s going on behind the scenes, including all requests coming from a particular tab. In Chrome and Firefox, you can access this interface with Crtl+Shift+I (or ⌘+Shift+I on Mac). The “Network” tab has a log of all the requests made by a particular page, and you can click on each one to see where it’s going and what information it contains.

Identifiers shared automatically

Some identifiable information is shared automatically along with each request. This is either by necessity—as with IP addresses, which are required by the underlying protocols that power the Internet—or by design—as with cookies. Trackers don’t need to do anything more than trigger a request, any request, in order to collect the information described here.

//website.com. This is shown as a HTTP request, processed by a first-party server, and delivering the requested content. A separate red line shows that the HTTP request is also forwarded to a third-party server, given an assigned ID, and a tracking cookie that is included in the requested content.

Each time you visit a website by typing in a URL or clicking on a link, your computer makes a request to that website’s server (the “first party”). It may also make dozens or hundreds of requests to other servers, many of which may be able to track you.

Cookies

The most common tool for third-party tracking is the HTTP cookie. A cookie is a small piece of text that is stored in your browser, associated with a particular domain. Cookies were invented to help website owners determine whether a user had visited their site before, which makes them ideal for behavioral tracking. Here’s how they work.

The first time your browser makes a request to a domain (like www.facebook.com), the server can attach a Set-Cookie header to its reply. This will tell your browser to store whatever value the website wants—for example, `c_user:“100026095248544″` (an actual Facebook cookie taken from the author’s browser). Then, every time your browser makes a request to www.facebook.com in the future, it sends along the cookie that was set earlier. That way, every time Facebook gets a request, it knows which individual user or device it’s coming from.

//website.com. The server responds with website content and a cookie.

The first time a browser makes a request to a new server, the server can reply with a “Set-Cookie” header that stores a tracking cookie in the browser.

Not every cookie is a tracker. Cookies are also the reason that you don’t have to log in every single time you visit a website, as well as the reason your cart doesn’t empty if you leave a website in the middle of shopping. Cookies are just a means of sharing information from your browser to the website you are visiting. However, they are designed to be able to carry tracking information, and third-party tracking is their most notorious use.

Luckily, users can exercise a good deal of control over how their browsers handle cookies. Every major browser has an optional setting to disable third-party cookies (though it is usually turned off by default.) In addition, Safari and Firefox have recently started restricting access to third-party cookies for domains they deem to be trackers. As a result of this “cat and mouse game” between trackers and methods to block them, third-party trackers are beginning to shift away from relying solely on cookies to identify users, and are evolving to rely on other identifiers.

Cookies are always unique, and they normally persist until a user manually clears them. Cookies are always available to trackers in unmodified versions of Chrome, but third-party cookies are no longer available to many trackers in Safari and Firefox. Users can always block cookies themselves with browser extensions.

IP Address

Each request you make over the Internet contains your IP address, a temporary identifier that’s unique to your device. Although it is unique, it is not necessarily persistent: your IP address changes every time you move to a new network (e.g., from home to work to a coffee shop). Thanks to the way IP addresses work, it may change even if you stay connected to the same network.

There are two types of IP addresses in widespread use, known as IPv4 and IPv6. IPv4 is a technology that predates the Web by a decade. It was designed for an Internet used by just a few hundred institutions, and there are only around 4 billion IPV4 addresses in the world to serve over 22 billion connected devices today. Even so, over 70% of Internet traffic still uses IPv4.

As a result, IPv4 addresses used by consumer devices are constantly being reassigned. When a device connects to the Internet, its internet service provider (ISP) gives it a “lease” on an IPv4 address. This lets the device use a single address for a few hours or a few days. When the lease is up, the ISP can decide to extend the lease or grant it a new IP. If a device remains on the same network for extended periods of time, its IP may change every few hours — or it may not change for months.

IPv6 addresses don’t have the same scarcity problem. They do not need to change, but thanks to a privacy-preserving extension to the technical standard, most devices generate a new, random IPv6 address every few hours or days. This means that IPv6 addresses may be used for short-term tracking or to link other identifiers, but cannot be used as standalone long-term identifiers.

IP addresses are not perfect identifiers on their own, but with enough data, trackers can use them to create long-term profiles of users, including mapping relationships between devices. You can hide your IP address from third-party trackers by using a trusted VPN or the Tor browser.

IP addresses are always unique, and always available to trackers unless a user connects through a VPN or Tor. Neither IPv4 nor IPv6 addresses are guaranteed to persist for longer than a few days, although IPv4 addresses may persist for several months

TLS State

Today, most traffic on the web is encrypted using Transport Layer Security, or TLS. Any time you connect to a URL that starts with “https://” you’re connecting using TLS. This is a very good thing. The encrypted connection that TLS and HTTPS provide prevents ISPs, hackers, and governments from spying on web traffic, and it ensures that data isn’t being intercepted or modified on the way to its destination.

However, it also opens up new ways for trackers to identify users. TLS session IDs and session tickets are cryptographic identifiers that help speed up encrypted connections. When you connect to a server over HTTPS, your browser starts a new TLS session with the server.

The session setup involves some expensive cryptographic legwork, so servers don’t like to do it more often than they have to. Instead of performing a full cryptographic “handshake” between the server and your browser every time you reconnect, the server can send your browser a session ticket that encodes some of the shared encryption state. The next time you connect to the same server, your browser sends the session ticket, allowing both parties to skip the handshake. The only problem with this is that the session ticket can be exploited by trackers as a unique identifier.

TLS session tracking was only brought to the public’s attention recently in an academic paper, and it’s not clear how widespread its use is in the wild.

Like IP addresses, session tickets are always unique. They are available unless the user’s browser is configured to reject them, as Tor is. Server operators can usually configure session tickets to persist for up to a week, but browsers do reset them after a while.

Identifiers created by trackers

Sometimes, web-based trackers want to use identifiers beyond just IP addresses (which are unreliable and not persistent), cookies (which a user can clear or block), or TLS state (which expires within hours or days). To do so, trackers need to put in a little more effort. They can use JavaScript to save and load data in local storage or perform browser fingerprinting.

Local storage “cookies” and IFrames

Local storage is a way for websites to store data in a browser for long periods of time. Local storage can help a web-based text editor save your settings, or allow an online game to save your progress. Like cookies, local storage allows third-party trackers to create and save unique identifiers in your browser.

Also like cookies, data in local storage is associated with a specific domain. This means if example.com sets a value in your browser, only example.com web pages and example.com’s IFrames can access it. An IFrame is like a small web page within a web page. Inside an IFrame, a third-party domain can do almost everything a first-party domain can do. For example, embedded YouTube videos are built using IFrames; every time you see a YouTube video on a site other than YouTube, it’s running inside a small page-within-a-page. For the most part, your browser treats the YouTube IFrame like a full-fledged web page, giving it permission to read and write to YouTube’s local storage. Sure enough, YouTube uses that storage to save a unique “device identifier” and track users on any page with an embedded video.

Local storage “cookies” are unique, and they persist until a user manually clears their browser storage. They are only available to trackers which are able to run JavaScript code inside a third-party IFrame. Not all cookie-blocking measures take local storage cookies into account, so local storage cookies may sometimes be available to trackers for which normal cookie access is blocked.

Fingerprinting

Browser fingerprinting is one of the most complex and insidious forms of web-based tracking. A browser fingerprint consists of one or more attributes that, on their own or when combined, uniquely identify an individual browser on an individual device. Usually, the data that go into a fingerprint are things that the browser can’t help exposing, because they’re just part of the way it interacts with the web. These include information sent along with the request made every time the browser visits a site, along with attributes that can be discovered by running JavaScript on the page. Examples include the resolution of your screen, the specific version of software you have installed, and your time zone. Any information that your browser exposes to the websites you visit can be used to help assemble a browser fingerprint. You can get a sense of your own browser’s fingerprint with EFF’s Panopticlick project.

The reliability of fingerprinting is a topic of active research, and must be measured against the backdrop of ever-evolving web technologies. However, it is clear that new techniques increase the likelihood of unique identification, and the number of sites that use fingerprinting is increasing as well. A recent report found that at least a third of the top 500 sites visited by Americans employ some form of browser fingerprinting. The prevalence of fingerprinting on sites also varies considerably with the category of website.

Researchers have found canvas fingerprinting techniques to be particularly effective for browser identification. The HTML Canvas is a feature of HTML5 that allows websites to render complex graphics inside of a web page. It’s used for games, art projects, and some of the most beautiful sites on the Web. Because it’s so complex and performance-intensive, it works a little bit differently on each different device. Canvas fingerprinting takes advantage of this.

Subtle differences in the way shapes and text are rendered on the two computers lead to very different fingerprints.

Canvas fingerprinting. A tracker renders shapes, graphics, and text in different fonts, then computes a “hash” of the pixels that get drawn. The hash will be different on devices with even slight differences in hardware, firmware, or software.

A tracker can create a “canvas” element that’s invisible to the user, render a complicated shape or string of text using JavaScript, then extract data about exactly how each pixel on the canvas is rendered. The operating system, browser version, graphics card, firmware version, graphics driver version, and fonts installed on your computer all affect the final result.

For the purposes of fingerprinting, individual characteristics are hardly ever measured in isolation. Trackers are most effective in identifying a browser when they combine multiple characteristics together, stitching the bits of information left behind into a cohesive whole. Even if one characteristic, like a canvas fingerprint, is itself not enough to uniquely identify your browser, it can usually be combined with others — your language, time zone, or browser settings — in order to identify you. And using a combination of simple bits of information is much more effective than you might guess.

Fingerprints are often, but not always, unique. Some browsers, like Tor and Safari, are specifically designed so that their users are more likely to look the same, which removes or limits the effectiveness of browser fingerprinting. Browser fingerprints tend to persist as long as a user has the same hardware and software: there’s no setting you can fiddle with to “reset” your fingerprint. And fingerprints are usually available to any third parties who can run JavaScript in your browser.

Identifiers on mobile devices

Smartphones, tablets, and ebook readers usually have web browsers that work the same way desktop browsers do. That means that these types of connected devices are susceptible to all of the kinds of tracking described in the section above.

However, mobile devices are different in two big ways. First, users typically need to sign in with an Apple, Google, or Amazon account to take full advantage of the devices’ features. This links device identifiers to an account identity, and makes it easier for those powerful corporate actors to profile user behavior. For example, in order to save your home and work address in Google Maps, you need to turn on Google’s “Web and App Activity,” which allows it to use your location, search history, and app activity to target ads.

Second, and just as importantly, most people spend most of their time on their mobile device in apps outside of the browser. Trackers in apps can’t access cookies the same way web-based trackers can. But by taking advantage of the way mobile operating systems work, app trackers can still access unique identifiers that let them tie activity back to your device. In addition, mobile phones—particularly those running the Android and iOS operating systems—have access to a unique set of identifiers that can be used for tracking.

In the mobile ecosystem, most tracking happens by way of third-party software development kits, or SDKs. An SDK is a library of code that app developers can choose to include in their apps. For the most part, SDKs work just like the Web resources that third parties exploit, as discussed above: they allow a third party to learn about your behavior, device, and other characteristics. An app developer who wants to use a third-party analytics service or serve third-party ads downloads a piece of code from, for example, Google or Facebook. The developer then includes that code in the published version of their app. The third-party code thus has access to all the data that the app does, including data protected behind any permissions that the app has been granted, such as location or camera access.

On the web, browsers enforce a distinction between “first party” and “third party” resources. That allows them to put extra restrictions on third-party content, like blocking their access to browser storage. In mobile apps, this distinction doesn’t exist. You can’t grant a privilege to an app without granting the same privilege to all the third party code running inside it.

Phone numbers

The phone number is one of the oldest unique numeric identifiers, and one of the easiest to understand. Each number is unique to a particular device, and numbers don’t change often. Users are encouraged to share their phone numbers for a wide variety of reasons (e.g., account verification, electronic receipts, and loyalty programs in brick-and-mortar stores). Thus, data brokers frequently collect and sell phone numbers. But phone numbers aren’t easy to access from inside an app. On Android, phone numbers are only available to third-party trackers in apps that have been granted certain permissions. iOS prevents apps from accessing a user’s phone number at all.

Phone numbers are unique and persistent, but usually not available to third-party trackers in most apps.

Hardware identifiers: IMSI and IMEI

Every device that can connect to a mobile network is assigned a unique identifier called an International Mobile Subscriber Identity (IMSI) number. IMSI numbers are assigned to users by their mobile carriers and stored on SIM cards, and normal users can’t change their IMSI without changing their SIM. This makes them ideal identifiers for tracking purposes.

Similarly, every mobile device has an International Mobile Equipment Identity (IMEI) number “baked” into the hardware. You can change your SIM card and your phone number, but you can’t change your IMEI without buying a new device.

IMSI numbers are shared with your cell provider every time you connect to a cell tower—which is all the time. As you move around the world, your phone sends out pings to nearby towers to request information about the state of the network. Your phone carrier can use this information to track your location (to varying degrees of accuracy). This is not quite third-party tracking, since it is perpetrated by a phone company that you have a relationship with, but regardless many users may not realize that it’s happening.

Software and apps running on a mobile phone can also access IMSI and IMEI numbers, though not as easily. Mobile operating systems lock access to hardware identifiers behind permissions that users must approve and can later revoke. For example, starting with Android Q, apps need to request the “READ_PRIVILEGED_PHONE_STATE” permission in order to read non-resettable IDs. On iOS, it’s not possible for apps to access these identifiers at all. This makes other identifiers more attractive options for most app-based third-party trackers. Like phone numbers, IMSI and IMEI numbers are unique and persistent, but not readily available, as most trackers have a hard time accessing them.

Advertising IDs

An advertising ID is a long, random string of letters and numbers that uniquely identifies a mobile device. Advertising IDs aren’t part of any technical protocols, but are built in to the iOS and Android operating systems.

Ad IDs on mobile phones are analogous to cookies on the Web. Instead of being stored by your browser and shared with trackers on different websites like cookies, ad IDs are stored by your phone and shared with trackers in different apps. Ad IDs exist for the sole purpose of helping behavioral advertisers link user activity across apps on a device.

Unlike IMSI or IMEI numbers, ad IDs can be changed and, on iOS, turned off completely. Ad IDs are enabled by default on both iOS and Android, and are available to all apps without any special permissions. On both platforms, the ad ID does not reset unless the user does so manually.

Both Google and Apple encourage developers to use ad IDs for behavioral profiling in lieu of other identifiers like IMEI or phone number. Ostensibly, this gives users more control over how they are tracked, since users can reset their identifiers by hand if they choose. However, in practice, even if a user goes to the trouble to reset their ad ID, it’s very easy for trackers to identify them across resets by using other identifiers, like IP address or in-app storage. Android’s developer policy instructs trackers not to engage in such behavior, but the platform has no technical safeguards to stop it. In February 2019, a study found that over 18,000 apps on the Play store were violating Google’s policy.

Ad IDs are unique, and available to all apps by default. They persist until users manually reset them. That makes them very attractive identifiers for surreptitious trackers.

MAC addresses

Every device that can connect to the Internet has a hardware identifier called a Media Access Control (MAC) address. MAC addresses are used to set up the initial connection between two wireless-capable devices over WiFi or Bluetooth.

MAC addresses are used by all kinds of devices, but the privacy risks associated with them are heightened on mobile devices. Websites and other servers you interact with over the Internet can’t actually see your MAC address, but any networking devices in your area can. In fact, you don’t even have to connect to a network for it to see your MAC address; being nearby is enough.

Here’s how it works. In order to find nearby Bluetooth devices and WiFi networks, your device is constantly sending out short radio signals called probe requests. Each probe request contains your device’s unique MAC address. If there is a WiFi hotspot in the area, it will hear the probe and send back its own “probe response,” addressed with your device’s MAC, with information about how you can connect to it.

But other devices in the area can see and intercept the probe requests, too. This means that companies can set up wireless “beacons” that silently listen for MAC addresses in their vicinity, then use that data to track the movement of specific devices over time. Beacons are often set up in businesses, at public events, and even in political campaign yard signs. With enough beacons in enough places, companies can track users’ movement around stores or around a city. They can also identify when two people are in the same location and use that information to build a social graph.

A smartphone emits probe request to scan for available WiFi and Bluetooth connections. Several wireless beacons listen passively to the requests.

In order to find nearby Bluetooth devices and WiFi networks, your device is constantly sending out short radio signals called probe requests. Each probe request contains your device’s unique MAC address. Companies can set up wireless “beacons” that silently listen for MAC addresses in their vicinity, then use that data to track the movement of specific devices over time.

This style of tracking can be thwarted with MAC address randomization. Instead of sharing its true, globally unique MAC address in probe requests, your device can make up a new, random, “spoofed” MAC address to broadcast each time. This makes it impossible for passive trackers to link one probe request to another, or to link them to a particular device. Luckily, the latest versions of iOS and Android both include MAC address randomization by default.

MAC address tracking remains a risk for laptops, older phones, and other devices, but the industry is trending towards more privacy-protective norms.

Hardware MAC addresses are globally unique. They are also persistent, not changing for the lifetime of a device. They are not readily available to trackers in apps, but are available to passive trackers using wireless beacons. However, since many devices now obfuscate MAC addresses by default, they are becoming a less reliable identifier for passive tracking.

Real-world identifiers

Many electronic device identifiers can be reset, obfuscated, or turned off by the user. But real-world identifiers are a different story: it’s illegal to cover your car’s license plate while driving (and often while parked), and just about impossible to change biometric identifiers like your face and fingerprints.

License plates

Every car in the United States is legally required to have a license plate that is tied to their real-world identity. As far as tracking identifiers go, license plate numbers are about as good as it gets. They are easy to spot and illegal to obfuscate. They can’t be changed easily, and they follow most people wherever they go.

Automatic license plate readers, or ALPRs, are special-purpose cameras that can automatically identify and record license plate numbers on passing cars. ALPRs can be installed at fixed points, like busy intersections or mall parking lots, or on other vehicles like police cars. Private companies operate ALPRs, use them to amass vast quantities of traveler location data, and sell this data to other businesses (as well as to police).

Unfortunately, tracking by ALPRs is essentially unavoidable for people who drive. It’s not legal to hide or change your license plate, and since most ALPRs operate in public spaces, it’s extremely difficult to avoid the devices themselves.

License plates are unique, available to anyone who can see the vehicle, and extremely persistent. They are ideal identifiers for gathering data about vehicles and their drivers, both for law enforcement and for third-party trackers.

Face biometrics

Faces are another class of unique identifier that are extremely attractive to third-party trackers. Faces are unique and highly inconvenient to change. Luckily, it’s not illegal to hide your face from the general public, but it is impractical for most people to do so.

Everyone’s face is unique, available, and persistent. However, current face recognition software will sometimes confuse one face for another. Furthermore, research has shown that algorithms are much more prone to making these kinds of errors when identifying people of color, women, and older individuals.

Facial recognition has already seen widespread deployment, but we are likely just beginning to feel the extent of its impact. In the future, facial recognition cameras may be in stores, on street corners, and docked on computer-aided glasses. Without strong privacy regulations, average people will have virtually no way to fight back against pervasive tracking and profiling via facial recognition.

Credit/debit cards

Credit card numbers are another excellent long-term identifier. While they can be cycled out, most people don’t change their credit card numbers nearly as often as they clear their cookies. Additionally, credit card numbers are tied directly to real names, and anyone who receives your credit card number as part of a transaction also receives your legal name.

What most people may not understand is the amount of hidden third parties involved with each credit card transaction. If you buy a widget at a local store, the store probably contracts with a payment processor who provides card-handling services. The transaction also must be verified by your bank as well as the bank of the card provider. The payment processor in turn may employ other companies to validate its transactions, and all of these companies may receive information about the purchase. Banks and other financial institutions are regulated by the Gramm-Leach-Bliley Act, which mandates data security standards, requires them to disclose how user data is shared, and gives users the right to opt out of sharing. However, other financial technology companies, like payment processors and data aggregators, are significantly less regulated.

Linking identifiers over time

Often, a tracker can’t rely on a single identifier to act as a stable link to a user. IP addresses change, people clear cookies, ad IDs can be reset, and more savvy users might have “burner” phone numbers and email addresses that they use to try to separate parts of their identity. When this happens, trackers don’t give up and start a new user profile from scratch. Instead, they typically combine several identifiers to create a unified profile. This way, they are less likely to lose track of the user when one identifier or another changes, and they can link old identifiers to new ones over time.

Trackers have an advantage here because there are so many different ways to identify a user. If a user clears their cookies but their IP address doesn’t change, linking the old cookie to the new one is trivial. If they move from one network to another but use the same browser, a browser fingerprint can link their old session to their new one. If they block third-party cookies and use a hard-to-fingerprint browser like Safari, trackers can use first-party cookie sharing in combination with TLS session data to build a long-term profile of user behavior. In this cat-and-mouse game, trackers have technological advantages over individual users.

Part 2: From bits to Big Data: What do tracking networks look like?

In order to track you, most tracking companies need to convince website or app developers to include custom tracking code in their products. That’s no small thing: tracking code can have a number of undesirable effects for publishers. It can slow down software, annoy users, and trigger regulation under laws like GDPR. Yet the largest tracking networks cover vast swaths of the Web and the app stores, collecting data from millions of different sources all the time. In the physical world, trackers can be found in billboards, retail stores, and mall parking lots. So how and why are trackers so widespread? In this section, we’ll talk about what tracking networks look like in the wild.

A bar graph showing market share of different web tracking companies. Google is the most prevalent, monitoring over 80% of traffic on the web.

Top trackers on the Web, ranked by the proportion of web traffic that they collect data from. Google collects data about over 80% of measured web traffic. Source: WhoTracks.me, by Cliqz GBMH.

Tracking in software: Websites and Apps

Ad networks

A graphic of a web page, with three ads separated and outlined. Each ad is served by a different ad server.

Each ad your browser loads may come from a different advertising server, and each server can build its own profile of you based on your activity. Each time you connect to that server, it can use a cookie to link that activity to your profile.

The dominant market force behind third-party tracking is the advertising industry, as discussed below in Part 3. So it’s no surprise that online ads are one of the primary vectors for data collection. In the simplest model, a single third-party ad network serves ads on a number of websites. Each publisher that works with the ad network must include a small snippet of code on their website that will load an ad from the ad server. This triggers a request to the ad server each time a user visits one of the cooperating sites, which lets the ad server set third-party cookies into users’ browsers and track their activity across the network. Similarly, an ad server might provide an ad-hosting software development kit (SDK) for mobile app developers to use. Whenever a user opens an app that uses the SDK, the app makes a request to the ad server. This request can contain the advertising ID for the user’s device, thus allowing the ad server to profile the user’s activity across apps.

In reality, the online ad ecosystem is even more complicated. Ad exchanges host “real time auctions” for individual ad impressions on web pages. In the process, they may load code from several other third-party advertising providers, and may share data about each impression with many potential advertisers participating in the auction. Each ad you see might be responsible for sharing data with dozens of trackers. We’ll go into more depth about Real Time Bidding and other data-sharing activities in Part 3.

Analytics and tracking pixels

Tracking code often isn’t associated with anything visible to users, like a third-party ad. On the web, a significant portion of tracking happens via invisible, 1-pixel-by-1-pixel “images” that exist only to trigger requests to the trackers. These “tracking pixels” are used by many of the most prolific data collectors on the web, including Google Analytics, Facebook, Amazon, and DoubleVerify.

When website owners install a third party’s tracking pixels, they usually do so in exchange for access to some of the data the third party collects. For example, Google Analytics and Chartbeat use pixels to collect information, and offer website owners and publishers insights about what kinds of people are visiting their sites. Going another level deeper, advertising platforms like Facebook also offer “conversion pixels,” which allow publishers to keep track of how many click-throughs their own third-party ads are getting.

The biggest players in web-based analytics offer similar services to mobile apps. Google Analytics and Facebook are two of the most popular SDKs on both Android and iOS. Like their counterparts on the Web, these services silently collect information about users of mobile apps and then share some of that information with the app developers themselves.

Mobile third-party trackers convince app developers to install their SDKs by providing useful features like analytics or single sign-on. SDKs are just big blobs of code that app developers add to their projects. When they compile and distribute an app, the third-party code ships with it. Unlike Web-based tools, analytics services in mobile apps don’t need to use “pixels” or other tricks to trigger third-party requests.

Another class of trackers work on behalf of advertisers rather than first-party sites or apps. These companies work with advertisers to monitor where, how, and to whom their ads are being served. They often don’t work with first-party publishers at all; in fact, their goal is to gather data about publishers as well as users.

DoubleVerify is one of the largest such services. Third-party advertisers inject DoubleVerify code alongside their ads, and DoubleVerify estimates whether each impression is coming from a real human (as opposed to a bot), whether the human is who the advertiser meant to target, and whether the page around the ad is “brand safe.” According to its privacy policy, the company measures “how long the advertisement was displayed in the consumer’s browser” and “the display characteristics of the ad on the consumer’s browser.” In order to do all that, DoubleVerify gathers detailed data about users’ browsers; it is by far the largest source of third-party browser fingerprinting on the web. It collects location data, including data from other third-party sources, to try to determine whether a user is viewing an ad in the geographic area that the advertiser targeted.

Other companies in the space include Adobe, Oracle, and Comscore.

Embedded media players

Sometimes, third-party trackers serve content that users actually want to see. On the web, embedding third-party content is extremely common for blogs and other media sites. Some examples include video players for services like YouTube, Vimeo, Streamable, and Twitter, and audio widgets for Soundcloud, Spotify, and podcast-streaming services. These media players nearly always run inside IFrames, and therefore have access to local storage and the ability to run arbitrary JavaScript. This makes them well-suited to tracking users as well.

Social media widgets

Social media companies provide a variety of services to websites, such as Facebook Like buttons and Twitter Share buttons. These are often pitched as ways for publishers to improve traffic numbers on their own platforms as well as their presence on social media. Like and Share buttons can be used for tracking in the same way that pixels can: the “button” is really an embedded image which triggers a request to the social media company’s server.

More sophisticated widgets, like comment sections, work more like embedded media players. They usually come inside of IFrames and enjoy more access to users’ browsers than simple pixels or images. Like media players, these widgets are able to access local storage and run JavaScript in order to compute browser fingerprints.

Finally, the biggest companies (Facebook and Google in particular) offer account management services to smaller companies, like “Log in with Google.” These services, known as “single sign-on,” are attractive to publishers for several reasons. Independent websites and apps can offload the work of managing user accounts to the big companies. Users have fewer username/password pairs to remember, and less frequently go through annoying sign up/log-in flows. But for users, there is a price: account management services allow log-in providers to act as a third party and track their users’ activity on all of the services they log into. Log-in services are more reliable trackers than pixels or other simple widgets because they force users to confirm their identity.

CAPTCHAs

CAPTCHAs are a technology that attempts to separate users from robots. Publishers install CAPTCHAs on pages where they want to be particularly careful about blocking automated traffic, like sign-up forms and pages that serve particularly large files.

Google’s ReCAPTCHA is the most popular CAPTCHA technology on the web. Every time you connect to a site that uses recaptcha, your browser connects to a *.google.com domain in order to load the CAPTCHA resources and shares all associated cookies with Google. This means that its CAPTCHA network is another source of data that Google can use to profile users.

While older CAPTCHAs asked users to read garbled text or click on pictures of bikes, the new ReCAPTCHA v3 records “interactions with the website” and silently guesses whether a user is human. ReCAPTCHA scripts don’t send raw interaction data back to Google. Rather, they generate something akin to a behavioral fingerprint, which summarizes the way a user has interacted with a page. Google feeds this into a machine-learning model to estimate how likely the user is to be human, then returns that score to the first-party website. In addition to making things more convenient for users, this newer system benefits Google in two ways. First, it makes CAPTCHAS invisible to most users, which may make them less aware that Google (or anyone) is collecting data about them. Second, it leverages Google’s huge set of behavioral data to cement its dominance in the CAPTCHA market, and ensures that any future competitors will need their own tranches of interaction data in order to build tools that work in a similar way.

Session replay services

Session replay services are tools that website or app owners can install in order to actually record how users interact with their services. These services operate both on websites and in apps. They log keystrokes, mouse movements, taps, swipes, and changes to the page, then allow first-party sites to “re-play” individual users’ experiences after the fact. Often, users are given no indication that their actions are being recorded and shared with third parties.

These creepy tools create a massive risk that sensitive data, like medical information, credit card numbers, or passwords, will be recorded and leaked. The providers of session replay services usually leave it up to their clients to designate certain data as off-limits. But for clients, the process of filtering out sensitive information is subtle, painstaking, and time-consuming, and it clashes with replay services’ promises to get set up “in a matter of seconds.” As a result, independent auditing has found that sensitive data ends up in the recordings, and that session replay service providers often fail to secure that data appropriately.

Passive, real-world tracking

WiFi hotspots and wireless beacons

Many consumer devices emit wireless “probe” signals, and many companies install commercial beacons that intercept these probes all over the physical world. Some devices randomize the unique MAC address device identifiers they share in probes, protecting themselves from passive tracking, but not all do. And connecting to an open WiFi network or giving an app Bluetooth permissions always opens a device up to tracking.

As we discussed above, WiFi hotspots, wireless beacons, and other radio devices can be used to “listen” for nearby devices. Companies like Comcast (which provides XFinity WiFi) and Google (which provides free WiFi in Starbucks and many other businesses) have WiFi hotspots installed all over the world; Comcast alone boasts over 18 million XFinity WiFi installations. Dozens of other companies that you likely haven’t heard of provide free WiFi to coffee shops, restaurants, events, and hotels.

Companies also pay to install wireless beacons in real-world businesses and public spaces. Bluetooth-enabled beacons have been installed around retail stores, at political rallies, in campaign lawn signs, and on streetlight poles.

Wireless beacons are capable of tracking on two levels. First, and most concerning, wireless beacons can passively monitor the “probes” that devices send out all the time. If a device is broadcasting its hardware MAC address, companies can use the probes they collect to track its user’s movement over time.

A laptop emits probe requests containing its a MAC address. Wireless Bbeacons listen for the probes and tie the requests to a profile of the user.

WiFi hotspots and bluetooth beacons can listen for probes that wireless devices send out automatically. Trackers can use each device’s MAC address to create a profile of it based on where they’ve seen that device.

Second, when a user connects to a WiFi hotspot or to a Bluetooth beacon, the controller of the hotspot or beacon can connect the device’s MAC address to additional identifiers like IP address, cookies, and ad ID. Many WiFi hotspot operators also use a sign-in page to collect information about users’ real names or email addresses. Then, when users browse the web from that hotspot, the operator can collect data on all the traffic coming from the user’s device, much like an ISP. Bluetooth beacons are used slightly differently. Mobile phones allow apps to access the Bluetooth interface with certain permissions. Third-party trackers in apps with Bluetooth permissions can automatically connect to Bluetooth beacons in the real world, and they can use those connections to gather fine-grained location data.

Thankfully, both iOS and Android devices now send obfuscated MAC addresses with probes by default. This prevents the first, passive style of tracking described above.

But phones aren’t the only devices with wireless capability. Laptops, e-readers, wireless headphones, and even cars are often outfitted with Bluetooth capability. Some of these devices don’t have the MAC randomization features that recent models of smartphones do, making them vulnerable to passive location tracking.

Furthermore, even devices with MAC randomization usually share static MAC addresses when they actually connect to a wireless hotspot or Bluetooth device. This heightens the risks of the second style of tracking described above, which occurs when the devices connect to public WiFi networks or local Bluetooth beacons.

Vehicle tracking and ALPRs

Automated license plate readers, or ALPRs, are cameras outfitted with the ability to detect and read license plates. They can also use other characteristics of cars, like make, model, color, and wear, in order to help identify them. ALPRs are often used by law enforcement, but many ALPR devices are owned by private companies. These companies collect vehicle data indiscriminately, and once they have it, they can re-sell it to whomever they want: local police, federal immigration enforcement agencies, private data aggregators, insurance companies, lenders, or bounty hunters.

Different companies gather license plate data from different sources, and sell it to different audiences. Digital Recognition Network, or DRN, sources its data from thousands of repossession agencies around the country, and sells data to insurance agencies, private investigators, and “asset recovery” companies. According to an investigation by Motherboard, the vast majority of individuals about whom DRN collects data are not suspected of a crime or behind on car payments. The start-up Flock Safety offers ALPR-powered “neighborhood watch” services. Concerned homeowners can install ALPRs on their property in order to record and share information about cars that drive through their neighborhood.

DRN is owned by VaaS International Holdings, a Fort Worth-based company that brands itself as “the preeminent provider of license plate recognition (‘LPR’) and facial recognition products and data solutions.” It also owns Vigilant Solutions, another private purveyor of ALPR technology. Vigilant’s clients include law enforcement agencies and private shopping centers. Vigilant pools data from thousands of sources around the country into a single database, which it calls “PlateSearch.” Scores of law enforcement agencies pay for access to PlateSearch. According to EFF’s research, approximately 99.5% of the license plates recorded by Vigilant are not connected to a public safety interest at the time they are scanned.

Cameras and machine vision aren’t the only technologies enabling vehicle tracking. Passive MAC address tracking can also be used to track vehicle movement. Phones inside of vehicles, and sometimes the vehicles themselves, broadcast probe requests including their MAC addresses. Wireless beacons placed strategically around roads can listen for those signals. One company, Libelium, sells a wireless beacon that is meant to be installed on streetlights in order to track nearby traffic.

Face recognition cameras

Face recognition has been deployed widely by law enforcement in some countries, including China and the UK. This has frightening implications: it allows mass logging of innocent people’s activities. In China, it has been used to monitor and control members of the Uighur minority community.

We’ve covered the civil liberties harms associated with law enforcement use of face recognition extensively in the past. But face recognition also has been deployed in a number of private industries. Airlines use face recognition to authenticate passengers before boarding. Concert venues and ticket sellers have used it to screen concert-goers. Retailers use face recognition to identify people who supposedly are greater risks for shoplifting, which is especially concerning considering that the underlying mugshot databases are riddled with unfair racial disparities, and the technology is more likely to misidentify people of color. Private security companies sell robots equipped with face recognition to monitor public spaces and help employers keep tabs on employees. And schools and even summer camps use it to keep tabs on kids.

Big tech companies have begun investing in facial recognition for payment processing, which would give them another way to link real-world activity to users’ online personas. Facebook has filed a patent on a system that would link faces to social media profiles in order to process payments. Also, Amazon’s brick-and-mortar “Go” stores rely on biometrics to track who enters and what they take in order to charge them accordingly.

In addition, many see facial recognition as a logical way to bring targeted advertising to the physical world. Face recognition cameras can be installed in stores, on billboards, and in malls to profile people’s behavior, build dossiers on their habits, and target messages at them. In January 2019, Walgreens began a pilot program using face recognition cameras installed on LED-screen fridge doors. The idea is that, instead of looking through a plate of glass to see the contents of a fridge, consumers can look at a screen which will display graphics indicating what’s inside. The camera can perform facial recognition on whoever is standing in front of the fridge, and the graphics can be dynamically changed to serve ads targeted to that person. Whether or not Walgreens ends up deploying this technology at a larger scale, this appears to be one direction retailers are heading.

Payment processors and financial technology

Financial technology, or “fintech,” is a blanket term for the burgeoning industry of finance-adjacent technology companies. Thousands of relatively new tech companies act as the technological glue between old-guard financial institutions and newer technologies, including tracking and surveillance. When they are regulated, fintech companies are often subject to less government oversight than traditional institutions like banks.

Payment processors are companies that accept payments on behalf of other businesses. As a result, they are privy to huge amounts of information about what businesses sell and what people buy. Since most financial transactions involve credit card numbers and names, it is easy for payment processors to tie the data they collect to real identities. Some of these companies are pure service providers, and don’t use data for any purposes other than moving money from one place to another. Others build profiles of consumers or businesses and then monetize that data. For example, Square is a company that makes credit card readers for small businesses. It also uses the information it collects to serve targeted ads from third parties and to underwrite loans through its Square Capital program.

Some fintech companies offer financial services directly to users, like Intuit, the company behind TurboTax and Mint. Others provide services to banks or businesses. In the fintech world, “data aggregators” act as intermediaries between banks and other services, like money management apps. In the process, data aggregators gain access to all the data that passes through their pipes, including account balances, outstanding debts, and credit card transactions for millions of people. In addition, aggregators often collect consumers’ usernames and passwords in order to extract data from their banks. Yodlee, one of the largest companies in the space, sells transaction data to hedge funds, which mine the information to inform stock market moves. Many users are unaware that their data is used for anything other than operating the apps they have signed up for.

Tracking and corporate power

Many of the companies that benefit most from data tracking have compelling ways to entice web developers, app creators, and store managers to install their tracking technology. Companies with monopolies or near-monopolies can use their market power to build tracking networks, monitor and inhibit smaller competitors, and exploit consumer privacy for their own economic advantage. Corporate power and corporate surveillance reinforce one another in several ways.

First, dominant companies like Google and Facebook can pressure publishers into installing their tracking code. Publishers rely on the world’s biggest social network and the world’s biggest search engine to drive traffic to their own sites. As a result, most publishers need to advertise on those platforms. And in order to track how effective their ads are, they have no choice but to install Google and Facebook’s conversion measurement code on their sites and apps. Google, Facebook, and Amazon also act as third-party ad networks, together controlling over two-thirds of the market. That means publishers who want to monetize their content have a hard time avoiding the big platforms’ ad tracking code.

Second, vertically integrated tech companies can gain control of both sides of the tracking market. Google administers the largest behavioral advertising system in the world, which it powers by collecting data from its Android phones and Chrome browser—the most popular mobile operating system and most popular web browser in the world. Compared to its peer operating systems and browsers, Google’s user software makes it easier for its trackers to collect data.

When the designers of the Web first described browsers, they called them “user agents:” pieces of software that would act on their users’ behalf on the Internet. But when a browser maker is also a company whose main source of revenue is behavioral advertising, the company’s interest in user privacy and control is pitted against the company’s interest in tracking. The company’s bottom line usually comes out on top.

Third, data can be used to profile not just people, but also competitor companies. The biggest data collectors don’t just know how we act, they also know more about the market—and their competitors—than anyone else. Google’s tracking tools monitor over 80% of traffic on the Web, which means it often knows as much about it’s competitors’ traffic as its competitors do (or more). Facebook (via third-party ads, analytics, conversion pixels, social widgets, and formerly its VPN app Onavo) also monitors the use and growth of websites, apps, and publishers large and small. Amazon already hosts a massive portion of the Internet in its Amazon Web Services computing cloud, and it is starting to build its own formidable third-party ad network. These giants use this information to identify nascent competitors, and then buy them out or clone their products before they become significant threats. According to confidential internal documents, Facebook used data about users’ app habits from Onavo, its VPN, to inform its acquisition of WhatsApp.

Fourth, as tech giants concentrate tracking power into their own hands, they can use access to data as an anticompetitive cudgel. Facebook was well aware that access to its APIs (and the detailed private data that entailed) were invaluable to other social companies. It has a documented history of granting or withholding access to user data in order to undermine its competition.

Furthermore, Google and Facebook have both begun adopting policies that restrict competitors’ access to their data without limiting what they collect themselves. For example, most of the large platforms now limit the third-party trackers on their own sites. In its own version of RTB, Google has recently begun restricting access to ad identifiers and other information that would allow competing ad networks to build user profiles. And following the Cambridge Analytica incident, Facebook started locking down access to third-party APIs, without meaningfully changing anything about the data that Facebook itself collects on users. On the one hand, restricting third-party access can have privacy benefits. On the other, kicking third-party developers and outside actors off Facebook’s and Google’s platform services can make competition problems worse, give incumbent giants sole power over the user data they have collected, and cement their privacy-harmful business practices. Instead of seeing competition and privacy as isolated concerns, empowering users requires addressing both to reduce large companies’ control over users’ data and attention.

Finally, big companies can acquire troves of data from other companies in mergers and acquisitions. Google Analytics began its life as the independent company Urchin, which Google purchased in 2005. In 2007, Google supercharged its third-party advertising networks by purchasing Doubleclick, then as now a leader in the behaviorally targeted ad market. In late 2019, it purchased the health data company Fitbit, merging years of step counts and exercise logs into its own vast database of users’ physical activity.

In its brief existence, Facebook has acquired 67 other companies. Amazon has acquired 91, and Google, 214—an average of over 10 per year. Many of the smaller firms that Facebook, Amazon, or Google have acquired had access to tremendous amounts of data and millions of active users. With each acquisition, those data sources are folded into the already-massive silos controlled by the tech giants. And thanks to network effects, the data becomes more valuable when it’s all under one roof. On its own, Doubleclick could assemble pseudonymous profiles of users’ browsing history. But as a part of Google, it can merge that data with real names, locations, cross-device activity, search histories, and social graphs.

Multi-billion dollar tech giants are not the only companies tracking us, nor are they the most irresponsible actors in the space. But the bigger they are, the more they know. And the more kinds of data a company has access to, the more powerful its profiles of users and competitors will be. In the new economy of personal information, the rich are only getting richer.

Part 3: Data sharing: Targeting, brokers, and real-time bidding

Where does the data go when it’s collected? Most trackers don’t collect every piece of information by themselves. Instead, companies work together, collecting data for themselves and sharing it with each other. Sometimes, companies with information about the same individual will combine it only briefly to determine which advertiser will serve which ad to that person. In other cases, companies base their entire business model on collecting and selling data about individuals they never interact with. In all cases, the type of data they collect and share can impact their target’s experience, whether by affecting the ads they’re exposed to or by determining which government databases they end up cataloged in. Moreover, the more a user’s data is spread around, the greater the risk that they will be affected by a harmful data breach. This section will explore how personal information gets shared and where it goes.

Real-time bidding

Real-time bidding is the system that publishers and advertisers use to serve targeted ads. The unit of sale in the Internet advertising world is the “impression.” Every time a person visits a web page with an ad, that person views an ad impression. Behind the scenes, an advertiser pays an ad network for the right to show you an ad, and the ad network pays the publisher of the web page where you saw the ad. But before that can happen, the publisher and the ad network have to decide which ad to show. To do so, they conduct a milliseconds-long auction, in which the auctioneer offers up a user’s personal information, and then software on dozens of corporate servers bid on the rights to that user’s attention. Data flows in one direction, and money flows in the other.

Such “real-time bidding” is quite complex, and the topic could use a whitepaper on its own. Luckily, there are tremendous, in-depth resources on the topic already. Dr. Johnny Ryan and Brave have written a series on the privacy impact of RTB. There is also a doctoral thesis on the privacy implications of the protocol. This section will give a brief overview of what the process looks like, much of which is based on Ryan’s work.

//website.com” also shares information, including a cookie and other request headers, with other third-party servers. This information is sent to a Supply-Side Platform (SSP), which is the server that begins the real-time bidding auction . This SSP matches the cookie to user 552EFF, which is Ava’s device. The SSP then fills out a “bid request”, which includes information like year of birth, gender (“f?”), keywords (“coffee, goth”), and geo (“USA”), and sends it to DSP servers.

Supply-side platforms use cookies to identify a user, then distribute “bid requests” with information about the user to potential advertisers.

First, data flows from your browser to the ad networks, also known as “supply-side platforms” (SSPs). In this economy, your data and your attention are the “supply” that ad networks and SSPs are selling. Each SSP receives your identifying information, usually in the form of a cookie, and generates a “bid request” based on what it knows about your past behavior. Next, the SSP sends this bid request to each of the dozens of advertisers who have expressed interest in showing ads.

A screenshot of a table describing the information content of the User object from the AdCOM 1.0 specification.

The `user` object in an OpenRTB bid request contains the information a particular supply-side platform knows about the subject of an impression, including one or more unique IDs, age, gender, location, and interests. Source: https://github.com/InteractiveAdvertisingBureau/AdCOM/blob/master/AdCOM%20v1.0%20FINAL.md#object–user-

The bid request contains information about your location, your interests, and your device, and includes your unique ID. The screenshot above shows the information included in an OpenRTB bid request.

A demand-side platform server winning the bid.

After the auction is complete, winning bidders pay supply-side platforms, SSPs pay the publisher, and the publisher shows the user an ad. At this point, the winning advertiser can collect even more information from the user’s browser.

Finally, it’s the bidders’ turn. Using automated systems, the advertisers look at your info, decide whether they’d like to advertise to you and which ad they want to show, then respond to the SSP with a bid. The SSP determines who won the auction and displays the winner’s ad on the publisher’s web page.

All the information in the bid request is shared before any money changes hands. Advertisers who don’t win the auction still receive the user’s personal information. This enables “shadow bidding.” Certain companies may pretend to be interested in buying impressions, but intentionally bid to lose in each auction with the goal of collecting as much data as possible as cheaply as possible.

Furthermore, there are several layers of companies that participate in RTB between the SSP and the advertisers, and each layer of companies also vacuums up user information. SSPs interface with “ad exchanges,” which share data with “demand side platforms” (DSPs), which also share and purchase data from data brokers. Publishers work with SSPs to sell their ad space, advertisers work with DSPs to buy it, and ad exchanges connect buyers and sellers. You can read a breakdown of the difference between SSPs and DSPs, written for advertisers, here. Everyone involved in the process gets to collect behavioral data about the person who triggered the request.

During the bidding process, advertisers and the DSPs they work with can use third-party data brokers to augment their profiles of individual users. These data brokers, which refer to themselves innocuously as “data management platforms” (DMPs), sell data about individuals based on the identifiers and demographics included in a bid request. In other words, an advertiser can share a user ID with a data broker and receive that user’s behavioral profile in return.

Source: Zhang, W., Yuan, S., Wang, J., and Shen, X. (2014b). Real-time bidding benchmarking with ipinyou dataset. arXiv preprint arXiv:1407.7073.

The diagram above gives another look at the flow of information and money in a single RTB auction.

In summary: (1) a user’s visit to a page triggers an ad request from the page’s publisher to an ad exchange. This is our real-time bidding “auctioneer.” The ad exchange (2) requests bids from advertisers and the DSPs they work with, sending them information about the user in the process. The DSP then (3) augments the bid request data with more information from data brokers, or DMPs. Advertisers (4) respond with a bid for the ad space. After (5) a millisecond-long auction, the ad exchange (6) picks and notifiers the winning advertiser. The ad exchange (7) serves that ad to the user, complete with the tracking technology described above. The advertiser will (8) receive information about how the user interacted with the ad, e.g. how long they looked at it, what they clicked, if they purchased anything, etc. That data will feed back into the DSP’s information about that user and other users who share their characteristics, informing future RTB bids.

From the perspective of the user who visited the page, RTB causes two discrete sets of privacy invasions. First, before they visited the page, an array of companies tracked their personal information, both online and offline, and merged it all into a sophisticated profile about them. Then, during the RTB process, a different set of companies used that profile to decide how much to bid for the ad impression. Second, as a result of the user’s visit to the page, the RTB participants harvest additional information from the visiting user. That information is injected into the user’s old profile, to be used during subsequent RTBs triggered by their next page visits. Thus, RTB is both a cause of tracking and a means of tracking.

RTB on the web: cookie syncing

Cookie syncing is a method that web trackers use to link cookies with one another and combine the data one company has about a user with data that other companies might have.

Mechanically, it’s very simple. One tracking domain triggers a request to another tracker. In the request, the first tracker sends a copy of its own tracking cookie. The second tracker gets both its own cookie and the cookie from the first tracker. This allows it to “compare notes” with the other tracker while building up its profile of the user.

Cookie sharing is commonly used as a part of RTB. In a bid request, the SSP shares its own cookie ID with all of the potential bidders. Without syncing, the demand side platforms might have their own profiles about users linked to their own cookie IDs. A DSP might not know that the user “abc” from Doubleclick (Google’s ad network) is the same as its own user “xyz”. Cookie syncing lets them be sure. As part of the bidding process, SSPs commonly trigger cookie-sync requests to many DSPs at a time. That way, the next time that SSP sends out a bid request, the DSPs who will be bidding can use their own behavioral profiles about the user to decide how to bid.

A laptop makes a request for a hidden element on the page, which kicks off the "cookie sync" process described below.

Cookie syncing. An invisible ‘pixel’ element on the page triggers a request to an ad exchange or SSP, which redirects the user to a DSP. The redirect URL contains information about the SSP’s cookie that lets the DSP link it to its own identifier. A single SSP may trigger cookie syncs to many different DSPs at a time.

RTB in mobile apps

RTB was created for the Web, but it works just as well for ads in mobile apps. Instead of cookies, trackers use ad IDs. The ad IDs baked into iOS and Android make trackers’ jobs easier. On the web, each advertiser has its own cookie ID, and demand-side platforms need to sync data with DMPs and with each other in order to tie their data to a specific user.

But on mobile devices, each user has a single, universal ad ID that is accessible from every app. That means that the syncing procedures described above on the web are not necessary on mobile; advertisers can use ad IDs to confirm identity, share data, and build more detailed profiles upon which to base bids.

Group targeting and look-alike audiences

Sometimes, large platforms do not disclose their data; rather, they lease out temporary access to their data-powered tools. Facebook, Google, and Twitter all allow advertisers to target categories of people with ads. For example, Facebook lets advertisers target users with certain “interests” or “affinities.”

The companies do not show advertisers the actual identities of individuals their campaigns target. If you start a Facebook campaign targeting “people interested in Roller Derby in San Diego,” you can’t see a list of names right away. However, this kind of targeting does allow advertisers to reach out directly to roller derby-going San Diegans and direct them to an outside website or app. When targeted users click on an ad, they are directed off of Facebook and to the advertiser’s domain. At this point, the advertiser knows they came from Facebook and that they are part of the targeted demographic. Once users have landed on the third-party site, the advertiser can use data exchange services to match them with behavioral profiles or even real-world identities.

In addition, Facebook allows advertisers to build “look-alike audiences” based on other groups of people. For example, suppose you’re a payday loan company with a website. You can install an invisible Facebook pixel on a page that your debtors visit, make a list of people who visit that page, and then ask Facebook to create a “look-alike” audience of people who Facebook thinks are “similar” to the ones on your list. You can then target those people with ads on Facebook, directing them back to your website, where you can use cookies and data exchanges to identify who they are.

These “look-alike” features are black boxes. Without the ability to audit or study them, it’s impossible to know what kinds of data they use and what kinds of information about users they might expose. We urge advertisers to disclose more information about them and to allow independent testing.

Data brokers

Data brokers are companies that collect, aggregate, process, and sell data. They operate out of sight from regular users, but in the center of the data-sharing economy. Often, data brokers have no direct relationships with users at all, and the people about whom they sell data may not be aware they exist. Data brokers purchase information from a variety of smaller companies, including retailers, financial technology companies, medical research companies, online advertisers, cellular providers, Internet of Things device manufacturers, and local governments. They then sell data or data-powered services to advertisers, real estate agents, market research companies, colleges, governments, private bounty hunters, and other data brokers.

This is another topic that is far too broad to cover here, and others have written in depth about the data-selling ecosystem. Cracked Labs’ report on corporate surveillance is both accessible and in-depth. Pam Dixon of the World Privacy Forum has also done excellent research into data brokers, including a report from 2014 and testimony before the Senate in 2015 and 2019.

The term “data broker” is broad. It includes “mom and pop” marketing firms that assemble and sell curated lists of phone numbers or emails, and behemoths like Oracle that ingest data from thousands of different streams and offer data-based services to other businesses.

Some brokers sell raw streams of information. This includes data about retail purchase behavior, data from Internet of Things devices, and data from connected cars. Others act as clearinghouses between buyers and sellers of all kinds of data. For example, Narrative promises to help sellers “unlock the value of [their] data” and help buyers “access the data [they] need.” Dawex describes itself as “a global data marketplace where you can meet, sell and buy data directly.”

Another class of companies act as middlemen or “aggregators,” licensing raw data from several different sources, processing it, and repackaging it as a specific service for other businesses. For example, major phone carriers sold access to location data to aggregators called Zumigo and Microbilt, which in turn sold access to a broad array of other companies, with the resulting market ultimately reaching down to bail bondsmen and bounty hunters (and an undercover reporter). EFF is now suing AT&T for selling this data without users’ consent and for misleading the public about its privacy practices.

Many of the largest data brokers don’t sell the raw data they collect. Instead, they collect and consume data from thousands of different sources, then use it to assemble their own profiles and draw inferences about individuals. Oracle, one of the world’s largest data brokers, owns Bluekai, one of the largest third-party trackers on the web. Credit reporting agencies, including Equifax and Experian, are also particularly active here. While the U.S. Fair Credit Reporting Act governs how credit raters can share specific types of data, it doesn’t prevent credit agencies from selling most of the information that trackers collect today, including transaction information and browsing history. Many of these companies advertise their ability to derive psychographics, which are “innate” characteristics that describe user behavior. For example, Experian classifies people into financial categories like “Credit Hungry Card Switcher,” “Disciplined, Passive Borrower,” and “Insecure Debt Dependent,” and claims to cover 95% of the U.S. population. Cambridge Analytica infamously used data about Facebook likes to derive “OCEAN scores”—ratings for openness, conscientiousness, extraversion, agreeableness, and neuroticism—about millions of voters, then sold that data to political campaigns.

Finally, many brokers use their internal profiles to offer “identity resolution” or “enrichment” services to others. If a business has one identifier, like a cookie or email address, it can pay a data broker to “enrich” that data and learn other information about the person. It can also link data tied to one identifier (like a cookie) to data from another (like a mobile ad ID). In the real-time bidding world, these services are known as “data management platforms.” Real-time bidders can use these kinds of services to learn who a particular user is and what their interests are, based only on the ID included with the bid request.

For years, data brokers have operated out of sight and out of mind of the general public. But we may be approaching a turning point. In 2018, Vermont passed the nation’s first law requiring companies that buy and sell third-party data to register with the secretary of state. As a result, we now have access to a list of over 120 data brokers and information about their business models. Furthermore, when the California Consumer Privacy Act goes into effect in 2020, consumers will have the right to access the personal information that brokers have about them for free, and to opt out of having their data sold.

Data consumers

So far, this paper has discussed how data is collected, shared, and sold. But where does it end up? Who are the consumers of personal data, and what do they do with it?

Targeted advertising

By far the biggest, most visible, and most ubiquitous data consumers are targeted advertisers. Targeted advertising allows advertisers to reach users based on demographics, psychographics, and other traits. Behavioral advertising is a subset of targeted advertising that leverages data about users’ past behavior in order to personalized ads.

The biggest data collectors are also the biggest targeted advertisers. Together, Google and Facebook control almost 60% of the digital ad market in the U.S., and they use their respective troves of data in order to do so. Google, Facebook, Amazon, and Twitter offer end-to-end targeting services where advertisers can target high-level categories of users, and the advertisers don’t need to have access to any data themselves. Facebook lets advertisers target users based on location; demographics like age, gender, education, and income; and interests like hobbies, music genres, celebrities, and political leaning. Some of the “interests” Facebook uses are based on what users have “liked” or commented on, and others are derived based on Facebook’s third-party tracking. While Facebook uses its data to match advertisers to target audiences, Facebook does not share its data with those advertisers.

Real-time bidding (RTB) involves more data sharing, and there are a vast array of smaller companies involved in different levels of the process. The big tech companies offer services in this space as well: Google’s Doubleclick Bid Manager and Amazon DSP are both RTB demand-side platforms. In RTB, identifiers are shared so that the advertisers themselves (or their agents) can decide whether they want to reach each individual and what ad they want to show. In the RTB ecosystem, advertisers collect their own data about how users behave, and they may use in-house machine learning models in order to predict which users are most likely to engage with their ads or buy their products.

Some advertisers want to reach users on Facebook or Google, but don’t want to use the big companies’ proprietary targeting techniques. Instead, they can buy lists of contact information from data brokers, then upload those lists directly to Facebook or Google, who will reach those users across all of their platforms. This system undermines big companies’ efforts to rein in discriminatory or otherwise malicious targeting. Targeting platforms like Google and Facebook do not allow advertisers to target users of particular ethnicities with ads for jobs, housing, or credit. However, advertisers can buy demographic information about individuals from data brokers, upload a list of names who happen to be from the same racial group, and have the platform target those people directly. Both Google and Facebook forbid the use of “sensitive information” to target people with contact lists, but it’s unclear how they enforce these policies.

Political campaigns and interest groups

Companies aren’t the only entities that try to benefit from data collection and targeted advertising. Cambridge Analytica used ill-gotten personal data to estimate “psychographics” for millions of potential voters, then used that data to help political campaigns. In 2018, the group CatholicVote used cell-phone location data to determine who had been inside a Catholic church, then targeted them with “get out the vote” ads. Anti-abortion groups used similar geo-fencing technology to target ads to women while they were at abortion clinics..

And those incidents are not isolated. Some non-profits that rely on donations buy data to help narrow in on potential donors. Many politicians around the country have used open voter registration data to target voters. The Democratic National Committee is reportedly investing heavily in its “data warehouse” ahead of the 2020 election. And Deep Root Analytics, a consulting firm for the Republican party, was the source of the largest breach of US voter data in history; it had been collecting names, registration details, and “modeled” ethnicity and religion data about nearly 200 million Americans.

Debt collectors, bounty hunters, and fraud investigators

Debt collectors, bounty hunters, and repossession agencies all purchase and use location data from a number of sources. EFF is suing AT&T for its role in selling location data to aggregators, which enabled a secondary market that allowed access by bounty hunters. However, phone carriers aren’t the only source of that data. The bail bond company Captira sold location data gathered from cell phones and ALPRs to bounty hunters for as little as $7.50. And thousands of apps collect “consensual” location data using GPS permissions, then sell that data to downstream aggregators. This data can be used to locate fugitives, debtors, and those who have not kept up with car payments. And as investigations have shown, it can also be purchased—and abused—by nearly anyone.

Cities, law enforcement, intelligence agencies

The public sector also purchases data from the private sector for all manner of applications. For example, U.S. Immigration and Customs Enforcement bought ALPR data from Vigilant to help locate people the agency intends to deport. Government agencies contract with data brokers for myriad tasks, from determining eligibility for human services to tax collection, according to the League of California Cities, in a letter seeking an exception from that state’s consumer data privacy law for contracts between government agencies and data brokers. Advocates have long decried these arrangements between government agencies and private data brokers as a threat to consumer data privacy, as well as an end-run around legal limits on governments’ own databases. And of course, national security surveillance often rests on the data mining of private companies’ reservoirs of consumer data. For example, as part of the PRISM program revealed by Edward Snowden, the NSA collected personal data directly from Google, YouTube, Facebook, and Yahoo.

Part 4: Fighting back

You might want to resist tracking to avoid being targeted by invasive or manipulative ads. You might be unhappy that your private information is being bartered and sold behind your back. You might be concerned that someone who wishes you harm can access your location through a third-party data broker. Perhaps you fear that data collected by corporations will end up in the hands of police and intelligence agencies. Or third-party tracking might just be a persistent nuisance that gives you a vague sense of unease.

But the unfortunate reality is that tracking is hard to avoid. With thousands of independent actors using hundreds of different techniques, corporate surveillance is widespread and well-funded. While there’s no switch to flip that can prevent every method of tracking, there’s still a lot that you can do to take back your privacy. This section will go over some of the ways that privacy-conscious users can avoid and disrupt third-party tracking.

Each person should decide for themselves how much effort they’re willing to put into protecting their privacy. Small changes can seriously cut back on the amount of data that trackers can collect and share, like installing EFF’s tracker-blocker extension Privacy Badger in your browser and changing settings on a phone. Bigger changes, like uninstalling third-party apps and using Tor, can offer stronger privacy guarantees at the cost of time, convenience, and sometimes money. Stronger measures may be worth it for users who have serious concerns.

Finally, keep in mind that none of this is your fault. Privacy shouldn’t be a matter of personal responsibility. It’s not your job to obsess over the latest technologies that can secretly monitor you, and you shouldn’t have to read through a quarter million words of privacy-policy legalese to understand how your phone shares data. Privacy should be a right, not a privilege for the well-educated and those flush with spare time. Everyone deserves to live in a world—online and offline—that respects their privacy.

In a better world, the companies that we choose to share our data with would earn our trust, and everyone else would mind their own business. That’s why EFF files lawsuits to compel companies to respect consumers’ data privacy, and why we support legislation that would make privacy the law of the land. With the help of our members and supporters, we are making progress, but changing corporate surveillance policies is a long and winding path. So for now, let’s talk about how you can fight back.

On the web

There are several ways to limit your exposure to tracking on the Web. First, your choice of browser matters. Certain browser developers take more seriously their software’s role as a “user agent” acting on your behalf. Apple’s Safari takes active measures against the most common forms of tracking, including third-party cookies, first-to-third party cookie sharing, and fingerprinting. Mozilla’s Firefox blocks third-party cookies from known trackers by default, and Firefox’s Private Browsing mode will block requests to trackers altogether.

Browser extensions like EFF’s Privacy Badger and uBlock Origin offer another layer of protection. In particular, Privacy Badger learns to block trackers using heuristics, which means it might catch new or uncommon trackers that static, list-based blockers miss. This makes Privacy Badger a good supplement to the built-in protections offered by Firefox, which rely on the Disconnect list. And while Google Chrome does not block any tracking behavior by default, installing Privacy Badger or another tracker-blocking extension in Chrome will allow you to use it with relatively little exposure to tracking. (However, planned changes in Chrome will likely affect the security and privacy tools that many use to block tracking.)

The browser extension, Privacy Badger, blocks a third-party tracker

Browser extensions like EFF’s Privacy Badger offer a layer of protection against third-party tracking on the web. Privacy Badger learns to block trackers using heuristics, which means it might catch new or uncommon trackers that static, list-based blockers miss.

No tracker blocker is perfect. All tracker blockers must make exceptions for companies that serve legitimate content. Privacy Badger, for example, maintains a list of domains which are known to perform tracking behaviors as well as serving content that is necessary for many sites to function, such as content delivery networks and video hosts. Privacy Badger restricts those domains’ ability to track by blocking cookies and access to local storage, but dedicated trackers can still access IP addresses, TLS state, and some kinds of fingerprintable data.

If you’d like to go the extra mile and are comfortable with tinkering, you can install a network-level filter in your home. Pi-hole filters all traffic on a local network at the DNS level. It acts as a personal DNS server, rejecting requests to domains which are known to host trackers. Pi-hole blocks tracking requests coming from devices which are otherwise difficult to configure, like smart TVs, game consoles, and Internet of Things products.

For people who want to reduce their exposure as much as possible, Tor Browser is the gold standard for privacy. Tor uses an onion routing service to totally mask its users’ IP addresses. It takes aggressive steps to reduce fingerprinting, like blocking access to the HTML canvas by default. It completely rejects TLS session tickets and clears cookies at the end of each session.

Unfortunately, browsing the web with Tor in 2019 is not for everyone. It significantly slows down traffic, so pages take much longer to load, and streaming video or other real-time content is very difficult. Worse, much of the modern web relies on invisible CAPTCHAs that block or throttle traffic from sources deemed “suspicious.” Traffic from Tor is frequently classified as high-risk, so doing something as simple as a Google search with Tor can trigger CAPTCHA tests. And since Tor is a public network which attackers also use, some websites will block Tor visitors altogether.

On mobile phones

Blocking trackers on mobile devices is more complicated. There isn’t one solution, like a browser or an extension, that can cover many bases. And unfortunately, it’s simply not possible to control certain kinds of tracking on certain devices.

The first line of defense against tracking is your device’s settings.

App permissions page. “ width=“1081″ height=“1849″>

Both iOS and Android let users view and control the permissions that each app has access to. You should check the permissions that your apps have, and remove the permissions that aren’t needed. While you are at it, you might simply remove the apps you are not using. In addition to per-app settings, you can change global settings that affect how your device collects and shares particularly sensitive information, like location. You can also control how apps are allowed to access the Internet when they are not in use, which can prevent passive tracking.

Both operating systems also have options to reset your device’s ad ID in different ways. On iOS, you can remove the ad ID entirely by setting it to a string of zeros. (Here are some other ways to block ad tracking on iOS.) On Android, you can manually reset it. This is equivalent to clearing your cookies, but not blocking new ones: it won’t disable tracking entirely, but will make it more difficult for trackers to build a unified profile about you.

Android also has a setting to “opt out of interest-based ads.” This sends a signal to apps that the user does not want to have their data used for targeted ads, but it doesn’t actually stop the apps from doing so by means of the ad ID. Indeed, recent research found that tens of thousands of apps simply ignore the signal.

On iOS, there are a handful of apps that can filter tracking activity from other apps. On Android, it’s not so easy. Google bans ad- and tracker-blockers from its app store, the Play Store, so it has no officially vetted apps of this kind. It’s possible to “side-load” blockers from outside of the Play Store, but this can be very risky. Make sure you only install apps from publishers you trust, preferably with open source code.

You should also think about the networks your devices are communicating with. It is best to avoid connecting to unfamiliar public WiFi networks. If you do, the “free” WiFi probably comes at the cost of your data.

Wireless beacons are also trying to collect information from your device. They can only collect identifying information if your devices are broadcasting their hardware MAC addresses. Both iOS and Android now randomize these MAC addresses by default, but other kinds of devices may not. Your e-reader, smart watch, or car may be broadcasting probe requests that trackers can use to derive location data. To prevent this, you can usually turn off WiFi and Bluetooth or set your device to “airplane mode.” (This is also a good way to save battery!)

Finally, if you really need to be anonymous, using a “burner phone” can help you control tracking associated with inherent hardware identifiers.

IRL

In the real world, opting out isn’t so simple.

As we’ve described, there are many ways to modify the way your devices work to prevent them from working against you. But it’s almost impossible to avoid tracking by face recognition cameras and automatic license plate readers. Sure, you can paint your face to disrupt face recognition algorithms, you can choose not to own a car to stay out of ALPR companies’ databases, and you can use cash or virtual credit cards to stop payment processors from profiling you. But these options aren’t realistic for most people most of the time, and it’s not feasible for anyone to avoid all the tracking that they’re exposed to.

Knowledge is, however, half the battle. For now, face recognition cameras are most likely to identify you in specific locations, like airports, during international travel. ALPR cameras are much more pervasive and harder to avoid, but if absolutely necessary, it is possible to use public transit or other transportation methods to limit how often your vehicle is tracked.

In the legislature

Some jurisdictions have laws to protect users from tracking. The General Data Protection Regulation (GDPR) in the European Union gives those it covers the right to access and delete information that’s been collected about them. It also requires companies to have a legitimate reason to use data, which could come from a “legitimate interest” or opt-in consent. The GDPR is far from perfect, and its effectiveness will depend on how regulators and courts implement it in the years to come. But it gives meaningful rights to users and prescribes real consequences for companies who violate them.

In the U.S., a smattering of state and federal laws offer specific protections to some. Vermont’s data privacy law brings transparency to data brokers. The Illinois Biometric Information Protection Act (BIPA) requires companies to get consent from users before collecting or sharing biometric identifiers. In 2020, the California Consumer Privacy Act (CCPA) will take effect, giving users there the right to access their personal information, delete it, and opt out of its sale. Some communities have passed legislation to limit government use of face recognition, and more plan to pass it soon.

At the federal level, some information in some circumstances is protected by laws like HIPAA, FERPA, COPPA, the Video Privacy Protection Act, and a handful of financial data privacy laws. However, these sector-specific federal statutes apply only to specific types information about specific types of people when held by specific businesses. They have many gaps, which are exploited by trackers, advertisers, and data brokers.

To make a long story very short, most third-party data collection in the U.S. is unregulated. That’s why EFF advocates for new laws to protect user privacy. People should have the right to know what personal information is collected about them and what is done with it. We should be free from corporate processing of our data unless we give our informed opt-in consent. Companies shouldn’t be able to charge extra or degrade service when users choose to exercise their privacy rights. They should be held accountable when they misuse or mishandle our data. And people should have the right to take companies to court when their privacy is violated.

The first step is to break the one-way mirror. We need to shed light on the tangled network of trackers that lurk in the shadows behind the glass. In the sunlight, these systems of commercial surveillance are exposed for what they are: Orwellian, but not omniscient; entrenched, but not inevitable. Once we, the users, understand what we’re up against, we can fight back.

Source: https://www.eff.org/wp/behind-the-one-way-mirror

Why robots will soon be picking soft fruits and salad

London (CNN Business)

It takes a certain nimbleness to pick a strawberry or a salad. While crops like wheat and potatoes have been harvested mechanically for decades, many fruits and vegetables have proved resistant to automation. They are too easily bruised, or too hard for heavy farm machinery to locate.

But recently, technological developments and advances in machine learning have led to successful trials of more sensitive and dexterous robots, which use cameras and artificial intelligence to locate ripe fruit and handle it with care and precision.
Developed by engineers at the University of Cambridge, the Vegebot is the first robot that can identify and harvest iceberg lettuce — bringing hope to farmers that one of the most demanding crops for human pickers could finally be automated.
First, a camera scans the lettuce and, with the help of a machine learning algorithm trained on more than a thousand lettuce images, decides if it is ready for harvest. Then a second camera guides the picking cage on top of the plant without crushing it. Sensors feel when it is in the right position, and compressed air drives a blade through the stalk at a high force to get a clean cut.

The Vegebot uses machine learning to identify ripe, immature and diseased lettuce heads

Its success rate is high, with 91% of the crop accurately classified, according to a study published in July. But the robot is still much slower than humans, taking 31 seconds on average to pick one lettuce. Researchers say this could easily be sped up by using lighter materials.
Such adjustments would need to be made if the robot was used commercially. „Our goal was to prove you can do it, and we’ve done it,“ Simon Birrell, co-author of the study, tells CNN Business. „Now it depends on somebody taking the baton and running forward,“ he says.

More mouths to feed, but less manual labor

With the world’s population expected to climb to 9.7 billion in 2050 from 7.7 billion today — meaning roughly 80 million more mouths to feed each year — agriculture is under pressure to meet rising demand for food production.
Added pressures from climate change, such as extreme weather, shrinking agricultural lands and the depletion of natural resources, make innovation and efficiency all the more urgent.
This is one reason behind the industry’s drive to develop robotics. The global market for agricultural drones and robots is projected to grow from $2.5 billion in 2018 to $23 billion in 2028, according to a report from market intelligence firm BIS Research.
„Agriculture robots are expected to have a higher operating speed and accuracy than traditional agriculture machinery, which shall lead to significant improvements in production efficiency,“ Rakhi Tanwar, principal analyst of BIS Research, tells CNN Business.

Fruit picking robots like this one, developed by Fieldwork Robotics, operate for more than 20 hours a day

On top of this, growers are facing a long-term labor shortage. According to the World Bank, the share of total employment in agriculture in the world has declined from 43% in 1991 to 28% in 2018.
Tanwar says this is partly due to a lack of interest from younger generations. „The development of robotics in agriculture could lead to a massive relief to the growers who suffer from economic losses due to labor shortage,“ she says.
Robots can work all day and night, without stopping for breaks, and could be particularly useful during intense harvest periods.
„The main benefit is durability,“ says Martin Stoelen, a lecturer in robotics at the University of Plymouth and founder of Fieldwork Robotics, which has developed a raspberry-picking robot in partnership with Hall Hunter, one of the UK’s major berry growers.
Their robots, expected to go into production next year, will operate more than 20 hours a day and seven days a week during busy periods, „which human pickers obviously can’t do,“ says Stoelen.

Octinion's robot picks one strawberry every five seconds

Sustainable farming and food waste

Robots could also lead to more sustainable farming practices. They could enable growers to use less water, less fuel, and fewer pesticides, as well as producing less waste, says Tanwar.
At the moment, a field is typically harvested once, and any unripe fruits or vegetables are left to rot. Whereas, a robot could be trained to pick only ripe vegetables and, working around the clock, it could come back to the same field multiple times to pick any stragglers.
Birrell says that this will be the most important impact of robot pickers. „Right now, between a quarter and a third of food just rots in the field, and this is often because you don’t have humans ready at the right time to pick them,“ he says.
A successful example of this is the strawberry-picking robot developed by Octinion, a Belgium-based engineering startup.
The robot — which launched this year and is being used by growers in the UK and the Netherlands — is mounted on a self-driving trolley to serve table top strawberry production.
It uses 3D vision to locate the ripe berry, softly grips it with a pair of plastic pincers, and — just like a human — turns it 90 degrees to snap it from the stalk, before dropping it gently into a punnet.
„Robotics have the potential to convert the market from (being) supply-driven to demand-driven,“ says Tom Coen, CEO and founder of Octinion. „That will then help to reduce food waste and increase prices,“ he adds.

Harsh conditions

One major challenge with agricultural robots is adapting them for all-weather conditions. Farm machinery tends to be heavy-duty so that it can withstand rain, snow, mud, dust and heat.
„Building robots for agriculture is very different to building it for factories,“ says Birrell. „Until you’re out in the field, you don’t realize how robust it needs to be — it gets banged and crashed, you go over uneven surfaces, you get rained on, you get dust, you get lightning bolts.“
California-based Abundant Robotics has built an apple robot to endure the full range of farm conditions. It consists of an apple-sucking tube on a tractor-like contraption, which drives itself down an orchard row, while using computer vision to locate ripe fruit.
This spells the start of automation for orchard crops, says Dan Steere, CEO of Abundant Robotics. „Automation has steadily improved agricultural productivity for centuries,“ he says. „[We] have missed out on much of those benefits until now.“

Cybersecurity is one of the fastest-growing segments of the technology industry

Source: https://www.fool.com/investing/the-10-biggest-cybersecurity-stocks.aspx

The 10 Biggest Cybersecurity Stocks

When looking to invest in this high-growth tech industry, start with the biggest names on the cybersecurity block.

Cybersecurity is one of the fastest-growing segments of the technology industry. As more people around the globe connect to the internet and hundreds of millions of devices get connected to a network every year, the need to keep all of that data secure is on the rise.

In fact, according to research firm Global Market Insights, cybersecurity is expected to go from a $120 billion-a-year endeavor in 2017 to more than $300 billion in 2024, good for an average 12% annual growth rate. It’s no wonder, then, that so many businesses are getting in on the movement. Old tech titans like Microsoft (NASDAQ:MSFT), Cisco (NASDAQ:CSCO), and Oracle (NYSE:ORCL) all offer cybersecurity as part of their service suites. Other names are investing in the action, too. Old smartphone maker BlackBerry (NYSE:BB), for example, bought small cybersecurity outfit Cylance in early 2019 to further its transformation as a software company.

A silhouette of a person filled in with digital data, signifying artificial intelligence.

Image source: Getty Images.

As the world goes digital, managing new digital-first business operations and keeping information safe and secure will continue to evolve and grow in importance. For those wanting to invest in the cybersecurity industry, researching the biggest names in the business is a good place to get started (after brushing up on the basics here). Here are the 10 largest companies that make cybersecurity their primary concern based on market capitalization (the value of the company calculated by number of shares outstanding multiplied by price per share).

Company Market Capitalization as of July 2019 What the Company Does
1. Palo Alto Networks (NYSE:PANW) $21.3 billion A diversified provider of security solutions, with an increasing focus on cloud software
2. Splunk (NASDAQ:SPLK) $20.5 billion Big data analytics, including security orchestration and automated response
3. Check Point Software (NASDAQ:CHKP) $17.9 billion A diversified provider of security software and hardware
4. CrowdStrike (NASDAQ:CRWD) $17.5 billion Cloud-based endpoint security
5. Okta (NASDAQ:OKTA) $15.4 billion Cloud-based identity and privileged-access management software
6. Fortinet (NASDAQ:FTNT) $14.9 billion A diversified provider of security software and hardware
7. Symantec (NASDAQ:SYMC) $14.0 billion Largest security provider by revenue; owner of LifeLock and Norton Antivirus
8. Akamai Technologies (NASDAQ:AKAM) $13.6 billion Internet content delivery and security
9. Zscaler (NASDAQ:ZS) $10.4 billion Diversified provider of cloud-based security
10. F5 Networks (NASDAQ:FFIV) $8.7 billion Internet and application content delivery and security
Bonus: Proofpoint (NASDAQ:PFPT) $7.0 billion Employee communications and internet security

Data as of July 23, 2019. Data source: YCharts and company-specific investor relations.

Types of cybersecurity stocks

„Cybersecurity“ is the umbrella term, but there are different types of security firms tackling various problems in today’s connected age.

Broad-focus cybersecurity companies

For example, the larger outfits have been angling themselves to cover a wide range of needs, becoming one-stop security shops. Palo Alto Networks and Fortinet are two such companies, covering everything from firewalls (a network feature, sometimes a piece of hardware but more often software, that decides what data to let in and out) to artificial intelligence-based software that automates tasks and monitors an organization’s digital activity.

Endpoint security providers

These companies focus on securing remote devices connected to a network. The number of devices hooked up to the internet has been growing by the hundreds of millions every year, and that trend is expected to continue. Businesses are leading the charge, and everything from employee smartphones and tablets to assets in transit to connected machinery is in need of safekeeping. Endpoint protection software handles that specific need. Startup CrowdStrike, among others, is a specialist in this space.

Specialized security services

These niche companies include Okta, which provides privileged-access management — basically, only allowing users access to the sensitive data that they’re supposed to see. Then there’s security for the cloud, or computing and software that is offered remotely by way of a data center. Zscaler concerns itself with keeping cloud connections and data safe for businesses and organizations.

Regardless of the security need, digital-based operations and communications are on the rise across the board, which means all of the top cybersecurity companies are experiencing growth of some sort. That creates an opportunity for investors to cash in on the movement. Here is a breakdown of each of the top cybersecurity companies and how their stocks are valued.

The top 10 biggest cybersecurity stocks

1. Palo Alto Networks: The largest cybersecurity stock

Sitting atop the cybersecurity pure-play list is Palo Alto Networks. The company has built itself into the leader in the security space, offering a broad range of services for its customers from firewalls to automated threat response to cloud security. The largest player in the cybersecurity niche by market cap, Palo Alto has managed to outpace the industry’s average growth rate in spite of its size.

Part of the story behind Palo Alto’s growth is the company’s acquisition spree of smaller competitors. In May 2019, the company announced its intent to purchase two cloud-based cybersecurity outfits, one for $410 million and the other for a smaller undisclosed sum. Both were added to a new cloud security service segment called Prisma, aimed at continuously updating Palo Alto’s offerings as needs of customers evolve over time. CEO Nikesh Arora, a former executive at Alphabet’s (NASDAQ:GOOGL) (NASDAQ:GOOG) Google, has indicated that strategic acquisitions will continue to play an integral part in his company’s strategy to remain relevant.

The sums of money paid for acquisitions have been substantial (at least $1 billion spent since 2018), and they’re among the reasons Palo Alto is not yet a profitable business. However, when backing out one-time nonrecurring expenses and noncash items, the company still manages to post positive free cash flow (money left over after basic operating expenses and capital expenditures). In short, that means the company can afford its aggressive buying spree.

The free cash flow generation is important, because it gives the leader in pure-play network security the wiggle room it needs to invest heavily in cloud computing, AI, and other technology as customer needs change over time. Global cloud spending is expected to grow an average of about 16% a year through 2022, according to technology research group Gartner. Sitting at the intersection of two double-digit growth industries, that long-term trend should give Palo Alto Networks an enduring outlet to sustain double-digit sales growth and help it maintain its pole position within the world of cybersecurity.

2. Splunk: Big data and securing business operations

Splunk started out as a big data monitoring company. Its software suite allows organizations to analyze and make sense of information being generated from their digital systems, from websites to connected equipment to payment processing networks, among other things. If it’s an electronic system, it creates data; and if it creates data, Splunk can help monitor it and give customers the ability to make sense of trends and other behavior of digital systems. Incidentally, one of the primary use cases for the data parsing and analytics platform is cybersecurity.

To increase its capabilities in that department, Splunk has also embarked on an aggressive acquisition spree. As a result, the big data company is now a leader in the fast-growing security orchestration, automation, and response (SOAR) segment of the cybersecurity industry. SOAR utilizes artificial intelligence (a software system that mimics how the human brain works and learns and adapts to changing circumstances) to sift through information in real time, detect potential threats, and take action to keep things on lockdown. With data breaches a constant threat, the ability to automate aspects of the workload holds appeal for large organizations.

Despite its size, Splunk has still been growing quickly. The downside is that Splunk is spending lots of cash to foster further expansion, which keeps the company in the red. Specifically, research and development of new software capabilities and sales and marketing to acquire new customers are the biggest line items affecting the bottom line. However, much like Palo Alto Networks, Splunk is free cash flow positive; profits will be a bigger consideration later on as the company matures.

That’s because Splunk’s primary industry, big data analytics, should grow an average of 13% a year and surpass $274 billion in size by 2023 — according to researcher IDC. Along the way, Splunk will also benefit from the booming and fast-changing cybersecurity industry, making it one of the best plays on the trend. The company’s expertise in monitoring and making sense of large and complex sets of data particularly lends itself to keeping business information locked up, and its recent takeovers of smaller peers have helped bolster its position in network security. Splunk’s prospects and chances at continued industry leadership look especially good.

3. Check Point Software: Adjusting to a new technology

Check Point Software, as its name implies, offers software security along with hardware to keep business networks secure. Much like Palo Alto Networks, the company has a diversified mix of solutions covering on-premeses computer networks, cloud, and endpoint protection.

Though it’s one of the largest and oldest cybersecurity companies around (founded in 1993), Check Point has not been growing at the breakneck speed of some of its peers. Low-single-digit sales growth has been the norm for some time. The reason? New technologies like the cloud have made some of Check Point’s legacy services like hardware-based security less compelling. The company is trailing some of its competitors, so spending to update the business model for today’s security needs has been a top priority. It isn’t paying off yet, and Check Point’s sluggish pace could mean its younger peers will bypass it in the years ahead.

There is one thing that makes Check Point different from other companies on this list, though. As an older, well-established company, it does turn a profit. Thus, traditional valuation metrics (without the need to make adjustments for things like stock-based compensation, shares a company pays to employees as an extra perk) work for the stock. However, heavy spending to transform the business into a more relevant one for the times has the bottom line stuck in a rut. Until that changes, there’s little compelling reason to consider the stock.

Check Point has been working hard to update its offerings for more modern needs, but the sheer number of newer start-ups could mean this established cybersecurity business will continue to get disrupted. That’s not an enviable situation to be in, especially when the industry overall is growing by double digits.

4. CrowdStrike: The newest stock on the top-10 list

Endpoint security company CrowdStrike more than doubled in value after it had its IPO (sold shares to raise money, making it available to the general investing public for the first time) in June 2019. That easily puts the firm among the largest in the cybersecurity business by market cap.

The stock has years‘ worth of double-digit sales growth baked into it, but momentum could be on CrowdStrike’s side. Revenues more than doubled in 2018. The number of connected devices around the globe is increasing every year — by the hundreds of millions — which plays right into the hands of this security company and its endpoint-protection software suite. Since many of those devices are not tethered to an office or other physical location, CrowdStrike’s cloud computing-native system lends itself to this type of security particularly well.

Because it is cloud based, CrowdStrike also boasts the ability to make near-instant system updates when a threat is detected, and its software can learn and adapt from uploaded customer data. Paired with millions of new connections getting added to an internet-connected network every year, it adds up to lots of new customer sign-ups and expanding relationships with existing ones. Dollar-based net expansion (which measures how much money existing clients spend each year) has been over 100% for years, indicating customers spend more with CrowdStrike as time passes. It’s a powerful business model, one that CrowdStrike plans on putting to use in other security disciplines as it begins to expand beyond endpoint security. With the cloud and the number of endpoints increasing dramatically, it’s no wonder this stock is off to a hot start and looks like it has years‘ worth of growth left ahead of it.

5. Okta: Keeping data on a need-to-know basis

Another upstart security company, Okta has only been around since 2009, but the identity-protection specialist has been growing like a weed. The company ensures that employees and others with privileged access within an organization get connected to the apps and data they need — and keeps everyone else out. The number of digital systems and software being utilized by organizations continues to rise, increasing the complexity and difficulty in keeping systems secure from intruders. Thus, the need for Okta’s identity services has been booming.

In just a few years‘ time, Okta has become one of the largest cybersecurity pure plays around, with sales consistently growing north of 50% in the past. Management expects that trajectory will moderate to somewhere in the mid-30% range for the foreseeable future — still nothing to balk at. And that rate of expansion could be sustainable, too. According to the Global Market Insights cybersecurity report, identity, authentication, and access management services are expected to be an especially fast-growing subset of cybersecurity, with the potential for services to increase an average of 17% a year through 2024. At the forefront of the movement, Okta is primed to gobble up market share as identity and access management increases in importance.

Here’s the downside: Okta is not a profitable business as of this writing. The company is funneling cash into marketing and research to maximize its sales growth now. Profits will be a concern later. The good news, though, is that gross profit margin (the amount of money the company keeps after producing a service and then selling it but before paying other operating expenses) is on the rise as the company grows.

That bodes well for the future of this cybersecurity leader. Identity security/privileged data access rights is expected to be a high-growth segment of network security for the next few years, and Okta is a leader in the space.

6. Fortinet: Successfully bridging legacy security with the new

Another diversified provider of firewalls, cloud and endpoint security, and identity management, Fortinet took a hit amid worries that the trade war between the U.S. and China would dampen growth in the company’s important international markets — Asia and Europe specifically. Newer security upstarts have also disrupted some of Fortinet’s legacy offerings like hardware-based network security for on-premises protection. Economic and industry headwinds or not, though, this cybersecurity outfit is doing just fine.

Revenues and adjusted earnings were up 20% and 77%, respectively, in 2018. Fortinet has been adding dozens of new deals worth more than $1 million every quarter, winning customers over with its new and improved software suite aimed at keeping all parts of an organization safe. Although less aggressive in its acquisition strategy than Palo Alto Networks or Splunk, Fortinet continues to invest heavily in updating its offerings to keep its customers secure. The cloud has been an area of focus, as well as increasing the number of subscription-based software deals. The investments in new technology have been paying off and yielding results for shareholders, even as other legacy cybersecurity companies have been failing to make the cut.

As a result of its less aggressive nature, Fortinet also runs a profitable business where some of its competitors don’t — and the bottom line has been rising faster than sales as the company’s investments have started to yield results. Ample cash means this security business can continue to invest in its new high-octane segments like cloud, endpoint, and identity security, which bodes well for it being able to maintain its two-figure top-line growth rate for some time even as legacy lines of business fade. With a well-established presence in the industry and a successful business update strategy well underway and paying off, Fortinet is one of the best cybersecurity stocks around.

7. Symantec: The biggest cybersecurity company by revenue

Symantec is the world leader in cybersecurity services when using sales figures as the metric. With nearly $5 billion in revenue in the last year, it is nearly double the size of its younger peers like Palo Alto Networks. Yet despite Symantec’s leadership, its market cap lags. One of the oldest network security players around and owner of recognizable software names like LifeLock and Norton Antivirus, Symantec has had to deal with disruption and shifting technology that have left growth near nonexistent and profitability underwhelming.

Though Symantec has been updating its operations — it recently announced a new comprehensive cloud-based security suite covering everything from email to application login protection — results have been sluggish. Fiscal 2019 sales fell 2%. The company’s legacy operations are holding it back, and bloated operating expenses have meant paltry bottom-line earnings. Not exactly what investors should be looking for from the leader of a high-flying industry.

There could be hope of a rebound, though, as Symantec continues to work through its transition. Chipmaker Broadcom (NASDAQ:AVGO) thought there was value in Symantec and was reportedly interested in acquiring the old security company to add it to its growing software division. However, negotiations fell through, and Symantec will have to go it alone for now. Until the company can demonstrate a strategy that can gain some traction in the growing world of cybersecurity, Symantec will continue to struggle in the wake of younger and more nimble peers that started investing earlier in the shifting landscape.

8. Akamai: Guarding the security of the internet itself

The next security outfit on the list handles a different piece of the industry than any of the others covered thus far. Akamai (NASDAQ:AKAM) helps deliver and secure web content as it travels from its source to the end user, from live and streaming video to traditional web page text and pictures. The internet’s continual expansion has been a boon for Akamai, which has launched new services to cover new web applications (like video streaming) and new mobile device types to keep the internet connection to them secure.

Akamai’s traditional web business is a low- to mid-single-digit growth story, but its newer cloud security services have been growing well into the double digits. New services are still a small fraction of the whole, but they are a high-margin endeavor. Akamai’s bottom line has been getting a big double-digit boost as Akamai’s investment and spending on new web delivery applications subside and past spending starts to yield results.

Akamai has grown into one of the internet’s primary content delivery platforms, responsible for handling as much as a third of global web traffic. As such, this company will be slower moving than other security businesses, but Akamai still has growth prospects ahead of it. Internet infrastructure company Cisco expects web traffic — led by video content — to grow an average of 26% a year through 2022. That means Akamai’s newer business should continue to move the needle for some time; plus the overall operation is solidly in profitable territory. In short, the leading internet content delivery and security company should be a slow-and-steady play for the foreseeable future.

9. Zscaler: Another investment in the cloud

Back to small but up-and-coming cybersecurity. Zscaler has its sights set on securing cloud computing and thus built itself from the ground up as a cloud-only software suite. The world is going mobile, and so are business operations. With fewer centralized locations and more remotely connected devices popping up, Zscaler helps keep newer business networks safe for its customers and their employees.

With a business model similar to those of CrowdStrike and Okta, Zscaler plays in a new multibillion-dollar industry that will only continue to grow larger, and the company has been frank in saying it is all about maximizing growth right now. And no wonder, as Gartner says in its cloud research that annual spending will nearly double from 2018 to 2022 to more than $330 billion a year. Sales at Zscaler have been growing north of 60% year over year for some time, but what’s a few hundred million in annual sales when the whole market is worth hundreds of billions? The downside is that in spite of massive growth and a rosy outlook for the good times to continue, operating losses are still substantial. With Zscaler all about nurturing sales as fast as possible, the red ink is unlikely to disappear anytime soon.

Much like its start-up peers, though, Zscaler takes those losses by design as it keeps its foot on the gas. Gross profit margin was an enviable 81% at last report, one of the best in the industry. With profit potential like that in a fast-expanding cloud computing sandbox, it makes sense Zscaler is all about growth now and profit later. With the world going mobile, this security stock looks like an especially promising one in the years ahead as it takes advantage of its early cloud-based security lead.

10. F5 Networks: Lagging behind the cybersecurity growth average

F5 Networks provides hardware and software solutions that help companies keep their applications and app delivery secure. Similar to Akamai, the company’s legacy business isn’t exactly lighting the world on fire. However, newer services, particularly those aimed at cloud computing-based apps, are on a tear. To that end, F5 recently acquired app optimization and security peer NGINX for $670 million.

It’s a sizable sum but likely a prudent move for F5. The company has been reporting low-single-digit revenue growth the last few years — nearly all of which has been driven by big expansion in its software service segment. While the top line has been sluggish, the upside is that new software and security offerings are a much more profitable concern. As a result, earnings are up nearly 40% over the last trailing three-year stretch.

During its transition phase to more modern app security and delivery, F5’s stock has taken a beating. There’s worry that the transition will continue to be a bumpy one, thus making this stock among the cheapest in the cybersecurity industry. However, though the low valuation reflects the fact that F5 has fallen behind the curve in the digital age, F5 is an inexpensive play on digital security and delivery. With internet traffic and content delivery still a slow-and-steady endeavor, F5 can continue to thrive — albeit at a much slower rate than elsewhere in cybersecurity.

Bonus. Proofpoint: An up-and-coming communications security specialist

One of the smaller outfits in the security space, Proofpoint is worth a mention as a bonus number 11 on the top-10 list. The company specifically helps organizations keep their employees safe. Email attacks are a key pain point for many businesses, and securing communications in that department — as well as on social media, cloud applications, and mobile devices — is a specialty at Proofpoint.

Though a niche offering within the greater cybersecurity industry, Proofpoint is expanding fast. After the company grew 38% in 2018, management forecasted full-year 2019 revenue to be up at least another 22%. However, as with its high-powered sales-oriented peers, the company does run up big losses. As with many other cybersecurity plays we’ve been discussing, though, that’s due to Proofpoint reinvesting in itself to foster more growth.

Nevertheless, when we adjust the bottom line for one-time items and other noncash expenses, Proofpoint is free cash flow positive, a metric that has been steadily on the rise. That should help Proofpoint keep up its double-digit growth trajectory as employee access points via remote computers, smartphones, and other devices continue to boom in the States and especially overseas. It’s a much smaller business than the top 10 companies are, but this cybersecurity concern still offers a compelling growth story worth keeping an eye on as it keeps communications safe and secure.

Proofpoint will also likely see long-term benefit from the explosion in devices hooked up to a network in the years ahead. The workforce’s increasing mobility means keeping employee communications on lockdown will be an increasingly complex problem, one that this small security company can help solve.

An illustrated shield displayed on top of a wall of digital data.

Image source: Getty Images.

Choosing the right cybersecurity stock to invest in

Taking a high-level look at the biggest companies in the cybersecurity market is only the start to choosing an investment. Some of the stocks are buys, others not so much. As the industry is still in high-growth mode and adapting fast to technological developments, investors would be best off picking the companies posting the fastest revenue expansion rates and those that carry the highest gross profit margins. Click here for a discussion on the top cybersecurity stocks and an introduction on how to pick the best companies in the industry.

Before investing, though, it’s important to remember a few things. Though cybersecurity is one of the fastest-expanding industries around, with high growth expectations comes a high level of volatility. Stock prices can run higher very quickly — and reverse course just as fast. Only investors who have a long-term perspective (no less than a few years) and the ability to purchase a position over time (buying a few shares at a time on a set schedule, like monthly, quarterly, or whenever the stock dips in price by at least double digits) should consider buying.

For those with the time to wait, though, investing in cybersecurity should be a profitable endeavor. In a decade’s time, this top-10 list will no doubt look very different, but a few of these names will still be around and will likely be much larger than they are today.