John Deere turned tractors into computers — what’s next?

One of our themes on Decoder is that basically everything is a computer now, and farming equipment like tractors and combines is no different. My guest this week is Jahmy Hindman, chief technology officer at John Deere, the world’s biggest manufacturer of farming machinery. And I think our conversation will surprise you.

Jahmy told me that John Deere employs more software engineers than mechanical engineers now, which completely surprised me. But the entire business of farming is moving toward something called precision agriculture, which means farmers are closely tracking where seeds are planted, how well they’re growing, what those plants need, and how much they yield.

The idea, Jahmy says, is to have each plant on a massive commercial farm tended with individual care — a process which requires collecting and analyzing a massive amount of data. If you get it right, precision agriculture means farmers can be way more efficient — they can get better crop yields with less work and lower costs.

But as Decoder listeners know by now, turning everything into computers means everything has computer problems now. Like all that farming data: who owns it? Where is it processed? How do you get it off the tractors without reliable broadband networks? What format is it in? If you want to use your John Deere tractor with another farming analysis vendor, how easy is that? Is it easy enough?

And then there are the tractors themselves — unlike phones, or laptops, or even cars, tractors get used for decades. How should they get upgraded? How can they be kept secure? And most importantly, who gets to fix them when they break?

John Deere is one of the companies at the center of a nationwide reckoning over the right to repair. Right now, tech companies like Samsung and Apple and John Deere all get to determine who can repair their products and what official parts are available.

And because these things are all computers, these manufacturers can also control the software to lock out parts from other suppliers. But it’s a huge deal in the context of farming equipment, which is still extremely mechanical, often located far away from service providers and not so easy to move, and which farmers have been repairing themselves for decades. In fact, right now the prices of older, pre-computerized tractors are skyrocketing because they’re easier to repair.

Half of the states in the country are now considering right to repair laws that would require manufacturers to disable software locks and provide parts to repair shops, and a lot of it is being driven — in a bipartisan way — by the needs of farmers.

John Deere is famously a tractor company. You make a lot of equipment for farmers, for construction sites, that sort of thing. Give me the short version of what the chief technology officer at John Deere does.

[As] chief technology officer, my role is really to try to set the strategic direction from a technology perspective for the company, across both our agricultural products as well as our construction, forestry, and road-building products. It’s a cool job. I get to look out five, 10, 15, 20 years into the future and try to make sure that we’re putting into place the pieces that we need in order to have the technology solutions that are going to be important for our customers in the future.

One of the reasons I am very excited to have you on Decoder is that there are a lot of computer solutions in your products. There’s hardware, software, services that I think of as sort of traditional computer company problems. Do you also oversee the portfolio of technologies that make combines more efficient and tractor wheels move faster?

We’ve got a centrally-organized technology stack organization. We call it the intelligent solutions group, and its job is really to do exactly that. It’s to make sure that we’re developing technologies that can scale across the complete organization, across those combines you referenced, and the tractors and the sprayers, and the construction products, and deploy that technology as quickly as possible.

One of the things The Verge wrestles with almost every day is the question of, “What is a computer?” We wrestle with it in very small and obvious ways — we argue about whether the iPad or an Xbox is a computer. Then you can zoom all the way out: we had Jim Farley, who’s the CEO of Ford, on Decoder a couple of weeks ago, and he and I talked about how Ford’s cars are effectively rolling computers now.

Is that how you see a tractor or a combine or construction equipment — that these are gigantic computers that have big mechanical functions as well?

They absolutely are. That’s what they’ve become over time. I would call them mobile sensor suites that have computational capability, not only on-board, but to your point, off-board as well. They are continuously streaming data from whatever it is — let’s say the tractor and the planter — to the cloud. We’re doing computational work on that data in the cloud, and then serving that information, those insights, up to farmers, either on their desktop computer or on a mobile handheld device or something like that.

As much as they are doing productive work in the field, planting as an example, they are also data acquisition and computational devices.
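
To make the “mobile sensor suite” idea concrete, here is a minimal sketch of what one streamed machine-to-cloud message might look like. The schema and field names are assumptions for illustration; Deere’s actual wire format is not public.

```python
import json
import time

def telemetry_message(machine_id: str, lat: float, lon: float, sensors: dict) -> str:
    """Build one illustrative machine-to-cloud message (hypothetical schema)."""
    return json.dumps({
        "machine": machine_id,                 # which tractor or combine sent this
        "timestamp": time.time(),              # when the sample was taken
        "position": {"lat": lat, "lon": lon},  # from satellite guidance
        "sensors": sensors,                    # whatever the implement measures
    })

print(telemetry_message("tractor-001", 41.5401, -90.5801, {"seed_rate": 32000}))
```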

How much of that is in-house at John Deere? How big is the team that is building your mobile apps? Is that something you outsource? Is that something you develop internally? How have you structured the company to enable this kind of work?

We do a significant amount of that work internally. It might surprise you, we have more software development engineers today within Deere than we have mechanical design engineers. That’s kind of mind-blowing for a company that’s 184 years old and has been steeped in mechanical product development, but that’s the case. We do nearly all of our own internal app development inside the four walls of Deere.

That said, our data application for customers in the ag space, for example, is the Operations Center. We do utilize third parties. There are roughly 184 companies that have been connected to Operations Center through encrypted APIs and are writing applications against that data for the benefit of the customers, the farmers who want to use those applications within their business.

One of the reasons we’re always debating what a computer is and isn’t is that once you describe something as a computer, you inherit a bunch of expectations about how computers work. You inherit a bunch of problems about how computers work and don’t work. You inherit a bunch of control; API access is a way of exercising control over an ecosystem or an economy.

Have you shifted the way that John Deere thinks about its products? As new abilities are created because you have computerized so much of a tractor, you also increase your responsibility, because you have a bunch more control.

There’s no doubt. We’re having to think about things like security of data, as an example, that previously, 30 years ago, was not necessarily a topic of conversation. We didn’t have competency in it. We’ve had to become competent in areas like that because of exactly the point you’re making, that the product has become more computer-like than conventional tractor-like over time.

That leads to huge questions. You mentioned security. Looking at some of your recent numbers, you have a very big business in China. Thirty years ago, you would export a tractor to China and that was the end of that conversation. Now there’s a huge conversation about cybersecurity, data sharing with companies in China, and, down the line, a whole set of very complicated issues for a tractor company that 30 years ago wouldn’t have had any of those problems. How do you balance all of those?

It’s a different set of problems for sure, and more complicated for geopolitical reasons in the case of China, as you mentioned. Let’s take security as an example. We have gone through the change that many technology companies have had to go through in the space of security, where it’s no longer bolted on at the end, it’s built in from the ground up. So it’s the security-by-design approach. We’ve got folks embedded in development organizations across the company that do nothing every day, other than get up and think about how to make the product more secure, make the datasets more secure, make sure that the data is being used for its intended purposes and only those.

That’s a new skill. That’s a skill that we didn’t have in the organization 20 years ago that we’ve had to create and hire the necessary talent in order to develop that skill set within the company at the scale that we need to develop it at.

Go through a very basic farming season with a John Deere combine and tractor. The farmer wakes up, they say, “Okay, I’ve got a field. I’ve got to plant some seeds. We’ve got to tend to them. Eventually, we’ve got to harvest some plants.” What are the points at which data is collected, what are the points at which it’s useful, and where does the feedback loop come in?

I’m going to spin it a little bit and not start with planting.

I’m going to tell you that the next season for a farmer actually starts at harvest of the previous season, and that’s where the data thread for the next season actually starts. It starts when that combine is in the field harvesting whatever it is, corn, soybeans, cotton, whatever. And the farmer is creating, while they’re running the combine through the field, a dataset that we call a yield map. It is geospatially referenced. These combines are running through the field on satellite guidance. We know where they’re at at any point in time, latitude, longitude, and we know how much they’re harvesting at that point in time.

So we create this three-dimensional map that is the yield across whatever field they happen to be in. That data is the inception for a winter’s worth of work, in the Northern hemisphere, that a farmer goes through to assess their yield and understand what changes they should make in the next season that might optimize that yield even further.
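
As a rough sketch of what a yield map reduces to, here is one way to bin geospatially referenced harvest samples into grid cells. The record layout, coordinates, and cell size are illustrative assumptions, not Deere’s format.

```python
from dataclasses import dataclass

@dataclass
class YieldPoint:
    """One geospatially referenced harvest sample (hypothetical record layout)."""
    lat: float          # latitude from satellite guidance
    lon: float          # longitude from satellite guidance
    yield_bu_ac: float  # instantaneous yield, bushels per acre

def build_yield_map(points, cell=0.0001):
    """Average samples into lat/lon grid cells; the cells form the yield map."""
    grid = {}
    for p in points:
        key = (round(p.lat / cell), round(p.lon / cell))
        grid.setdefault(key, []).append(p.yield_bu_ac)
    return {k: sum(v) / len(v) for k, v in grid.items()}

samples = [
    YieldPoint(41.5401, -90.5801, 212.0),
    YieldPoint(41.5401, -90.5802, 187.5),
]
print(build_yield_map(samples))
```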

They might have areas within the field that they go into and know they need to change seeding density, or they need to change crop type, or they need to change how much nutrients they provide in the next season. And all of those decisions are going through their head because they have to order seed in December and order their nutrients in late winter. They’re making those plans based upon that initial dataset of harvest information.

And then they get into the field in the spring, to your point, with a tractor and a planter, and that tractor and planter are taking the prescription that the farmer developed with the yield data that they took from the previous harvest. They’re using that prescription to apply changes to that field in real time as they’re going through the field, with the existing data from the yield map and the data in real time that they’re collecting with the tractor to modify things like seeding rate, and fertilizer rate and all of those things in order to make sure that they’re minimizing the inputs to the operation while at the same time working to maximize the output.
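
A prescription can then be thought of as the inverse lookup: given the planter’s current position, return the target rate for that grid cell. A toy version, reusing the cell keys from the yield-map sketch above (the rates and keys are made up):

```python
# A toy prescription: grid cells (same keying as the yield-map sketch above)
# mapped to target seeding rates, consulted as the planter reports position.
PRESCRIPTION = {
    (415401, -905801): 34000,  # seeds/acre where last year's yield was strong
    (415401, -905802): 30000,  # pull the rate back where yield lagged
}
DEFAULT_RATE = 32000  # assumed fallback rate for unmapped cells

def target_rate(lat, lon, cell=0.0001):
    """Return the prescribed seeding rate for the cell under the planter."""
    key = (round(lat / cell), round(lon / cell))
    return PRESCRIPTION.get(key, DEFAULT_RATE)

print(target_rate(41.5401, -90.5801))  # -> 34000
```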

That data is then going into the cloud, and they’re referencing it. For example, that track the tractor and the planter took through the field is being used to inform the sprayer. When the sprayer goes into the field after emergence, when the crops come out of the ground, that data tells the sprayer the optimal path to drive through the field in order to spray only what needs to be sprayed and no more, and to damage the crop the least amount possible, all in an effort to optimize productivity, so that the yield map, the farmer’s report card at the end of the year, turns out to have a better grade.

That’s a lot of data. Who collects it? Is John Deere collecting it? Can I hire a third-party SaaS software company to manage that data for me? How does that part work?

A significant amount of that data is collected on the fly while the machines are in the field, and it’s collected, in the case of Deere machines, by Deere equipment running through the field. There are other companies that create data, and it can be imported into things like the Deere Operations Center so that you have the data from whatever source you wanted to collect it from. I think the important thing there is that historically, it’s been more difficult to get the data off the machine, because of connectivity limitations, and into a database where you can actually do something with it.

Today, the vast majority of machines in large agriculture are connected. They’re connected through terrestrial cell networks. They’re streaming data bi-directionally, to the cloud and back from the cloud. So that data connectivity infrastructure that’s been built out over the last decade has really enabled two-way communication, and it’s taken the friction out of getting the data off of a mobile piece of equipment. It’s happening seamlessly for the operator. And that’s a benefit, because they can act on it in near real time, as opposed to having to wait for somebody to upload data at some point in the future.

Whose data is this? Is it the farmer’s data? Is it John Deere’s data? Is there a terms of service agreement for a combine? How does that work?

Certainly [there is] a terms of service agreement. Our position is pretty simple. It’s the farmer’s data. They control it. So if they want to share it through an API with somebody that is a trusted adviser from their perspective, they have the right to do that. If they don’t want to share it, they don’t have to do that. It is their data to control.

Is it portable? When I say there are “computer problems” here, can my tractor deliver me, for example, an Excel file?

They certainly can export the data in formats that are convenient for them, and they do. Spreadsheet math is still routinely done on the farm, and [they can] utilize the spreadsheet to do some basic data analytics if they want. I would tell you, though, that the amount of data being collected, curated, and made available to them to draw insights from is so massive that while you can still use spreadsheets to manipulate some of it, it’s just not tractable in all cases. So that’s why we’re building functionality into things like the Operations Center to help do data analytics and serve up insights to growers.
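
For a sense of the “spreadsheet math” end of the spectrum, this is the kind of basic analysis a grower might run on an exported file. The file name and column names here are hypothetical:

```python
import csv
import statistics

# Hypothetical export: rows of zone,yield_bu_ac written out by the farmer.
with open("yield_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

by_zone = {}
for row in rows:
    by_zone.setdefault(row["zone"], []).append(float(row["yield_bu_ac"]))

for zone, yields in sorted(by_zone.items()):
    print(zone, round(statistics.mean(yields), 1))  # average yield per zone
```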

It’s their data. They can choose to look at the insights or not, but we can serve those insights up to them, because the data analysis part of this problem is becoming significantly larger as the datasets grow more complex and large, not to mention the fact that you’ve got more data coming in all the time. Different sensors are being applied. We can measure different things. There [are] unique pieces of information coming in that routinely add to the overall ecosystem of data that they have at their disposal.

We’ve talked a lot about the feedback loop of data with the machinery in particular. There’s one really important component to this, which is the seeds. There are a lot of seed manufacturers out in the world. They want this data. They have GMO seeds, and they can adjust the seeds to different locations. Where do they come into the mix?

The data, from our perspective, is the farmer’s data. They’re the ones who are controlling the access to it. So if they want to share their data with someone, they have that ability to do it. And they do today. They’ll share their yield map with whoever their local seed salesman is and try to optimize the seed variety for the next planting season in the spring.

So that data exists. It’s not ours, so we’re not at liberty to share it with seed companies, and we don’t. It has to come through the grower because it’s their productivity data. They’re the ones that have the opportunity to share it. We don’t.

You do have a lot of data. Maybe you can’t share it widely, but you can aggregate it. You must have a very unique view of climate change. You must see where the foodways are moving, where different kinds of crops are succeeding and failing. What is your view of climate change, given the amount of data that you’re taking in?

The reality is for us that we’re hindered in answering that question by the recency of the data. So, broad-scale data acquisition from production agriculture is really only a five- to 10-year-old phenomenon. So the datasets are getting richer. They’re getting better.

We have the opportunity to see trends in that data across the datasets that exist today, but I think it’s too early. I don’t think the data is mature enough yet for us to be able to draw any conclusions from a climate change perspective with respect to the data that we have.

The other thing that I’ll add is that the data intensity is not universal across the globe. So if you think of climate change on a global perspective, we’ve got a lot of data for North America, a fair amount of data that gets taken by growers in Europe, a little bit in South America, but it’s not rich enough across the global agricultural footprint for us to be able to make any sort of statements about how climate change is impacting it right now.

Is that something you’re interested in doing?

Yes. I couldn’t predict when, but I think that the data will eventually be rich enough for insights to be drawn from it. It’s just not there yet.

Do you think about doing a fully electric tractor? Is that in your technology roadmap, that you’ve got to get rid of these diesel engines?

You’ve got to be interested in EVs right now. And the answer is yes. Whether it’s a tractor or whether it’s some other product in our product line, alternative forms of propulsion, alternative forms of power are definitely something that we’re thinking about. We’ve done it in the past with, I would say, hybrid solutions like a diesel engine driving an electric generator, and then the rest of the machine being electrified from a propulsion perspective.

But we’re just getting to the point now where battery technology, lithium-ion technology, is power-dense enough for us to see it starting to creep into our portfolio, probably from the bottom up: lower-power-density applications first, before it gets into some of the very large production ag equipment that we’ve talked about today.

What’s the timeline to a fully EV combine, do you think?

I think it’ll be a long time for a combine.

I picked the biggest thing I could, basically.

It has got to run 14, 15, 16 hours per day. It’s got a very short window to run in. You can’t take all day to charge it. Those sorts of problems, they’re not insurmountable. They’re just not solved by anything that’s on the roadmap today, from a lithium-ion perspective, anyway.

You and I are talking two days after Apple had its developers’ conference. Apple famously sells hardware, software, services, as an integrated solution. Do you think of John Deere’s equipment as integrated suites of hardware, software, and services, or is it a piece of hardware that spits off data, and then maybe you can buy our services, or maybe buy somebody else’s services?

I think it’s most efficient when we think of it collectively as a system. It doesn’t have to be that way, and one of the differences I would say to an Apple comparison would be the life of the product, the iron product in our case, the tractor or the combine, is measured in decades. It may be in service for a very long time, and so we have to take that into account as we think about the technology [and] apps that we put on top of it, which have a much shorter shelf life. They’re two, three, four, five years, and then they’re obsolete, and the next best thing has come along.

We have to think about the discontinuity that occurs between product buy cycles as a consequence of that. I do think it’s most efficient to think of it all together. It isn’t always necessarily that way. There are lots of farmers that run multi-colored fleets. It’s not Deere only. So we have to be able to provide an opportunity for them to get data off of whatever their product is into the environment that best enables them to make good decisions from it.

Is that how you characterize the competition, multi-colored fleets?

Absolutely, for sure. I would love the world to be completely [John Deere] green, but it’s not quite that way.

On my way to school every day in Wisconsin growing up, I drove by a Case plant. They’re red. John Deere is famously green, Case is red, International Harvester is yellow.

Yep. Case is red, Deere is green, and then there’s a rainbow of colors outside of those two for sure.

Who are your biggest competitors? And are they adopting the same business model as you? Is this an iOS versus Android situation, or is it widely different?

Our traditional competitors in the ag space, no surprise, you mentioned one of them. Case New Holland is a great example. AGCO would be another. I think everybody’s headed down the path of precision agriculture. [It’s] the term that is ubiquitous for where the industry’s headed.

I’m going to paint a picture for you: It’s this idea of enabling each individual plant in production agriculture to be tended to by a master gardener. The master gardener is in this case probably some AI that is enabling a farmer to know exactly what that particular plant needs, when it needs it, and then our equipment provides them the capability of executing on that plan that master gardener has created for that plant on an extremely large scale.

You’re talking about, in the case of corn, for example, 50,000 plants per acre, so a master gardener taking care of 50,000 plants for every acre of corn. That’s where this is headed, and you can picture the data intensity of that. Two hundred million acres of corn ground, times 50,000 plants per acre; each one of those plants is creating data, and that’s the enormity of the scale of production agriculture when you start to get to this plant-by-plant management basis.
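
The arithmetic behind that scale is worth writing out. The per-plant record size below is purely an assumed number, chosen to show the order of magnitude:

```python
acres = 200_000_000        # corn acreage figure quoted above
plants_per_acre = 50_000
bytes_per_plant = 100      # assumed size of one plant-level record

plants = acres * plants_per_acre
print(f"{plants:.0e} plants")                        # 1e+13 plants
print(f"{plants * bytes_per_plant / 1e15:.0f} PB")   # ~1 petabyte per plant-level pass
```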

Let’s talk about the enormity of the data and the amount of computation — that’s in tension with how long the equipment lasts. Are you upgrading the computers and the tractors every year, or are you just trying to pull the data into your cloud where you can do the intense computation you want to do?

It’s a combination of both, I would tell you. There are components within the vehicles that do get upgraded from time to time. The displays and the servers that operate in the vehicles do go through upgrade cycles within the existing fleet.

There’s enough appetite, Nilay, for technology in agriculture that we’re also seeing older equipment get updated with new technology. So it’s not uncommon today for a customer who’s purchased a John Deere planter that might be 10 years old to want the latest technology on that planter. And instead of buying a new planter, they might buy the upgrade kit for that planter that allows them to have the latest technology on the existing planter that they own. That sort of stuff is happening all the time across the industry.

I would tell you, though, that what is maybe different now versus 10 years ago is the amount of computation that happens in the cloud, to serve up this enormity of data in bite-sized forms and in digestible pieces that actually can be acted upon for the grower. Very little of that is done on-board machines today. Most of that is done off-board.

We cover rural broadband very heavily. There’s some real-time data collection happening here, but what you’re really talking about is that at the end of a session you’ve got a big asynchronous dataset. You want to send it off somewhere, have some computation done to it, and brought back to you so you can react to it.

What is your relationship to the connectivity providers, or to the Biden administration, that is trying to roll out a broadband plan? Are you pushing to get better networks for the next generation of your products, or are you kind of happy with where things are now?

We’re pro-rural broadband, and in particular newer technologies, 5G as an example. And it’s not just for agricultural purposes, let’s just be frank. There’s a ton of benefits that accrue to a society that’s connected with a sufficient network to do things like online schooling, in particular, coming through the pandemic that we’re in the midst of, and hopefully on the tail end of here. I think that’s just highlighted the use cases for connectivity in rural locations.

Agriculture is but one of those, but there are some really cool feature unlocks that better connectivity, both in terms of coverage and in terms of bandwidth and latency, provides in agriculture. I’ll give you an example. Think of 5G and the ability to get to incredibly low latency numbers. It allows us to do some things from a computational perspective on the edge of the network that today we don’t have the capability to do; we either do it on-board the machine, or we don’t do it at all. Take serving up the real-time location of a farmer’s combine: instead of having to route that data all the way to the cloud and then back to a handheld device the farmer might have, wouldn’t it be great if we could do that math on the edge, just ping tower to tower, serve it back down, and do it really, really quickly? Those are the sorts of use cases that open up when you get to talking about not just connectivity rurally, but 5G specifically, and they’re pretty exciting.

Are the networks in place to do all the things you want to do?

Globally, the answer is no. Within the US and Canadian markets, coverage improves every day. There are towers that are going up every day and we are working with our terrestrial cell coverage partners across the globe to expand coverage, and they’re responding. They see, generally, the need, in particular with respect to agriculture, for rural connectivity. They understand the power that it can provide [and] the efficiency that it can derive into food production globally. So they are incentivized to do that. And they’ve been good partners in this space. That said, they recognize that there are still gaps and there’s still a lot of ground to cover, literally in some cases, with connectivity solutions in rural locations.

You mentioned your partners. The parallels to a smartphone here are strong. Do you have different chipsets for AT&T and Verizon? Can you activate your AT&T plan right from the screen in the tractor? How does that work?

AT&T is our dominant partner in North America. That is our go-to, primarily from a coverage perspective. They’re the partner that we’ve chosen that I think serves our customers the best in the most locations.

Do you get free HBO Max if you sign up?

[laughs] Unfortunately, no.

They’re putting it everywhere. You have no idea.

For sure.

I look at the broadband gap everywhere. You mentioned schooling. We cover these very deep consumer needs. On the flip side, you need to run a lot of fiber to make 5G work, especially with the low latency that you’re talking about. You can’t have too many nodes in the way. Do you support millimeter wave 5G on a farm?

Yeah, it is something we’ve looked at. It’s intriguing. How you scale it is the question. I think if we could crack that nut, it would be really interesting.

Just for listeners, an example of millimeter wave if you’re unfamiliar: standing on just the right street corner in New York City, you can get gigabit speeds to a phone. Cross the street, and it goes away. That does not seem tenable on a farm.

That’s right. Not all data needs to be transmitted at the same rate. Not to cover the broad acreage, but you can envision a case where potentially, when you come into range of millimeter wave, you dump a bunch of data all at once. And then when you’re out of range, you’re still collecting data and transmitting it slower perhaps. But having the ability to have millimeter wave type of bandwidth is pretty intriguing for being able to take opportunistic advantage of it when it’s available.
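
That “dump when in range, trickle otherwise” pattern is essentially an opportunistic uploader. A minimal sketch, with the bandwidth threshold and batch size as invented parameters:

```python
import queue

buffer = queue.Queue()  # samples pile up here regardless of coverage

def record(sample: bytes) -> None:
    buffer.put(sample)  # always collect; transmission is decoupled

def upload(sample: bytes) -> None:
    pass  # stand-in for the actual transport

def sync(link_mbps: float) -> None:
    """Drain everything on a fast (e.g. mmWave) link; trickle otherwise."""
    batch = buffer.qsize() if link_mbps > 100 else min(buffer.qsize(), 10)
    for _ in range(batch):
        upload(buffer.get())

record(b"sample-1")
sync(link_mbps=400)  # in mmWave range: dump the whole buffer at once
```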

What’s something you want to do that the network isn’t there for you to do yet?

I think the biggest piece is just coverage, from my perspective. We intentionally buffer data on the vehicle in places where we don’t have great coverage and wait until the machine has coverage to send the data. But the reality is that means a grower is waiting, in some cases 30 minutes or an hour, until the data is synced up in the cloud, something actionable has been done with it, and it’s back down to them. And by that point in time, the decision has already been made; the insight is no longer useful, because it’s time-sensitive. I think that’s probably the biggest gap that we have today. It’s not universal. It happens in pockets and in geographies, but where it happens, the need is real. And those growers don’t benefit as much as growers that do have good coverage.

Is that improvement going as fast as you’d like? Is that a place where you’re saying to the Biden administration, whoever it might be, “Hey, we’re missing out on opportunities because there aren’t the networks we need to go faster.”

It is not going as fast as we would like, full stop. We should be moving faster in that space. Just to tease the thought out a little bit, maybe it’s not just terrestrial cell. Maybe it’s Starlink, maybe it’s a satellite-based type of infrastructure that provides that coverage for us in the future. But it’s certainly not moving at a pace that’s rapid enough for us, given the appetite for data that growers have and what they’ve seen as an ability for that data to significantly optimize their operations.

Have you talked to the Starlink folks?

We have. It’s super interesting. It’s an intriguing idea. The question for us is a mobile one. All of our devices are mobile. Tractors are driving around a field, combines are driving around a field. You get into questions around, what does the receiver need to look like in order to make that work? It’s an interesting idea at this point. I’m ever the optimist, glass-half-full sort of person. I think it’s conceivable that in the not too distant future, that could be a very viable option for some of these locations that are underserved with terrestrial connectivity today.

Walk me through the pricing model of a tractor. These things are very expensive. They’re hundreds of thousands of dollars. What is the recurring cost for an AT&T plan necessary to run that tractor? What is the recurring cost for your data services that you provide? How does that all break down?

Our data services are free today, interestingly enough. Free in the sense [of] the hosting of the data in the cloud and the serving up of that data through Operations Center. If you buy a piece of connected Deere equipment, that service is part of your purchase. I’ll just put it that way.

The recurring expense on the consumer side of things for the connectivity is not unlike what you would experience for a cell phone plan. It’s pretty similar. The difference is for large growers, it’s not just a single cell phone.

They might have 10, 15, 20 devices that are all connected. So we do what we can to make sure that the overhead associated with all of those different connected devices is minimized, but it’s not unlike what you’d experience with an iPhone or an Android device.

Do you have large growers in pockets where the connectivity is just so bad, they’ve had to resort to other means?

We have a multitude of ways of getting data off of mobile equipment. Cell is but one. We’re also able to take it off with Wi-Fi, if you can find a hotspot that you can connect to. And when all else fails, growers routinely use a USB stick, which works regardless. So we make it possible to get the data off no matter what their connectivity situation is.

But to the point we already talked about, the less friction you’ve got in that system to get the data off, the more data you end up pushing. The more data you push, the more insights you can generate. The more insights you generate, the more optimal your operation is. So where growers don’t have cell connectivity, we see it in the intensity of their data usage; it tracks with connectivity.

So if your cloud services are free with the purchase of a connected tractor, is that built into the price or the lease agreement of the tractor for you on your P&L? You’re just saying, “We’re giving this away for free, but baking it into the price.”

Yep.

Can you buy a tractor without that stuff for cheaper?

You can buy products that aren’t connected that do not have a telematics gateway or the cell connection, absolutely. It is uncommon, especially in large ag. I would hesitate to throw a number at you at what the take rate is, but it’s standard equipment in all of our large agricultural products. That said, you can still get it without that if you need to.

How long until these products just don’t have steering wheels and seats and Sirius radios in them? How long until you have a fully autonomous farm?

I love that question. [With] a fully autonomous farm, you’ve got to draw some boundaries around it in order to make it digestible. I think we could have fully autonomous tractors in low single-digit years. I’ll leave it a little bit gray just to let the mind wander a little bit.

Taking the cab completely off the tractor, I think, is a ways away, only because the tractor gets used for lots of things that it may not be programmed for, from an autonomous perspective, to do. It’s sort of a Swiss Army knife in a farm environment. But that operatorless operation in, say, fall tillage or spring planting, we’re right on the doorstep of that. We’re knocking on the door of being able to do it.

It’s due to some really interesting technology that’s come together all in one place at one time. It’s the confluence of high-capability compute onboard machines: we’re putting GPUs on machines today to do vision processing that would blow your mind. Nvidia GPUs are not just for the gaming community or the autonomous car community; they’re showing up on tractors and sprayers too. That stream of technology is coming together with advanced algorithms: machine learning, reinforcement learning, convolutional neural networks, all of it going into mimicking the human sight capability from a mechanical and computational perspective. That’s come together to give us the ability to start seriously considering taking the operator out of the cab of the tractor.

One of the things that is different, though, for agriculture versus maybe the on-highway autonomous cars, is that tractors don’t just go from point A to point B. Their mission in life is not just to transport. It’s to do productive work. They’re pulling a tillage tool behind them or pulling a planter behind them planting seed. So we not only have to be able to automate the driving of the tractor, but we have to automate the function that it’s doing as well, and make sure that it’s doing a great job of doing the tillage operation that normally the farmer would be observing in the cab of the tractor. Now we have to do that and be able to ascertain whether or not that job quality that’s happening as a consequence of the tractor going through the field is meeting the requirements or not.

What’s the challenge there?

I think it’s the variety of jobs. Let’s take the tractor example again: it’s not only doing the tillage right with this particular tillage tool; a farmer might use three or four different tillage tools in their operation. They all have different use cases. They all require different artificial intelligence models to be trained and validated. So scaling out across all of those different conceivable operations is, I think, the biggest challenge.

You mentioned GPUs. GPUs are hard to get right now.

Everything’s hard to get right now.

How is the chip shortage affecting you?

It’s impacting us. Weekly, I’m in conversations with semiconductor manufacturers trying to get the parts that we need. It is an ongoing battle. We had thought probably six or seven months ago, like everybody else, that it would be relatively short-term. But I think we’re into this for the next 12 to 18 months. I think we’ll come out of it as capacity comes online, but it’s going to take a little while before that happens.

I’ve talked to a few people about the chip shortage now. The best consensus I’ve gotten is that the problem isn’t at the state of the art. The problem is with older process nodes — five or 10-year-old technology. Is that where the problem is for you as well or are you thinking about moving beyond that?

It’s most acute with older tech. So we’ve got 16-bit chipsets that we’re still working with on legacy controllers that are a pain point. But that said, we’ve also got some really recent, modern stuff that is also a pain point. I was where your head is at three months ago. And then in the three months since, we’ve felt the pain everywhere.

When you say 18 months from now, is that you think there’s going to be more supply or you think the demand is going to tail off?

Supply is certainly coming online. [The] semiconductor industry is doing the right thing. They’re trying to bring capacity online to meet the demand. I would argue it’s just a classic bullwhip effect that’s happened in the marketplace. So I think that will happen. There’s also some behavior in the industry at the moment around the demand side that has made it hard for semiconductor manufacturers to understand what real demand is, because there’s a panic situation in some respects in the marketplace at the moment.

That said, I think it’s clear there’s only one direction that semiconductor volume is going, and it’s going up. Everything is going to demand it moving forward and demand more of it. So I think once we work through the next 12 to 18 months and work through this sort of immediate and near-term issue, the semiconductor industry is going to have a better handle on things, but capacity has to go up in order to meet the demand. There’s no doubt about it. A lot of that demand is real.

Are you thinking, “Man, I have these 16-bit systems. We should rearchitect things to be more modular, to be more modern, and faster,” or are you saying, “Supply will catch up”?

No, very much the former. I would say two things. One, designing around parts that are more prevalent in supply, for sure. And two, making things easier to change when we need to change. There’s some tech debt that we’re continuing to battle against and pay off over time. And it’s times like these when it rises to the surface and you wish you’d made decisions a little bit differently five or 10 years ago.

My father-in-law, my wife’s cousins, are all farmers up and down. A lot of John Deere hats in my family. I texted them all and asked what they wanted to know. All of them came back and said “right to repair” down the line. Every single one of them. That’s what they asked me to ask you about.

I set up this whole conversation to talk about these things as computers. We understand the problems of computers. It is notable to me that John Deere and Apple had the same effective position on right to repair, which is, we would prefer if you didn’t do it and you let us do it. But there’s a lot of pushback. There are right-to-repair bills in an ever-growing number of states. How do you see that playing out right now? People want to repair their tractors. It is getting harder and harder to do it because they’re computers and you control the parts.

It’s a complex topic, first and foremost. I think the first thing I would tell you is that we have and remain committed to enabling customers to repair the products that they buy. The reality is that 98 percent of the repairs that customers want to do on John Deere products today, they can do. There’s nothing that prohibits them from doing them. Their wrenches are the same size as our wrenches. That all works. If somebody wants to go repair a diesel engine in a tractor, they can tear it down and fix it. We make the service manuals available. We make the parts available, we make the how-to available for them to tear it down to the ground and build it back up again.

That is not really what I’ve heard. I hear that a sensor goes off, the tractor goes into what people call “limp mode.” They have to bring it into a service center. They need a John Deere-certified laptop to pull the codes and actually do that work.

The diagnostic trouble codes are pushed out onto the display. The customer can see what those diagnostic trouble codes are. They may not understand or be able to connect what that sensor issue is with a root cause. There may be an underlying root cause that’s not immediately obvious to the customer based upon the fault code, but the fault code information is there. There is expertise that exists within the John Deere dealer environment, because they’ve seen those issues over time that allows them to understand what the probable cause is for that particular issue. That said, anybody can go buy the sensor. Anybody can go replace it. That’s just a reality.
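
In effect, the display performs a code-to-hint lookup, with the dealer’s experience supplying the deeper root-cause mapping. A hypothetical version (these codes and causes are invented, not real John Deere codes):

```python
# Hypothetical trouble-code table; real John Deere codes and causes differ.
DTC_HINTS = {
    "EXH-0042": "NOx sensor out of range: check wiring before replacing sensor",
    "HYD-0117": "Hydraulic pressure low: inspect filter and fluid level",
}

def hint(code: str) -> str:
    """Map a displayed fault code to a probable-cause hint."""
    return DTC_HINTS.get(code, "Unknown code: consult dealer diagnostics")

print(hint("EXH-0042"))
```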

There is, though, this 2 percent-ish of the repairs that occur on equipment today [that] involve software. And to your point, they’re computer environments that are driving around on wheels. So there is a software component to them. Where we differ with the right-to-repair folks is that the software, in many cases, is regulated. So let’s take the diesel engine example. We are required, because it’s a regulated emissions environment, to make sure that diesel engine performs at a certain emissions output: nitrogen oxides, particulate matter, and so on. Modifying software changes that. It changes the output characteristics of the emissions of the engine, and that’s a regulated device. So we’re pretty sensitive to changes that would impact that. And disproportionately, those are software changes. Going in and changing governor gain scheduling on a diesel engine, for example, would have a negative consequence on the emissions that [the] engine produces.

The same argument would apply in brake-by-wire and steer-by-wire. Do you really want a tractor going down the road with software on it that has been modified for steering or modified for braking in some way that might have a consequence that nobody thought of? We know the rigorous nature of testing that we go through in order to push software out into a production landscape. We want to make sure that that product is as safe and reliable and performs to the intended expectations of the regulatory environment that we operate in.

But people are doing it anyway. That’s the real issue here. Again, these are computer problems. This is what I hear from Apple about repairing your own iPhone. Here’s the device with all your data on it that’s on the network. Do you really want to run unsupported software on it? The valence of the debate feels the same to me.

At the same time though, is it their tractor or is it your tractor? Shouldn’t I be allowed to run whatever software I want on my computer?

I think the difference with the Apple argument is that the iPhone isn’t driving down the road at 20 miles an hour with oncoming traffic coming at it. There’s a seriousness of the change that you could make to a product. These things are large. They cost a lot of money. It’s a 40,000-pound tractor going down the road at 20 miles an hour. Do you really want to expose untested, unplanned, unknown introductions of software into a product like that that’s out in the public landscape?

But they were doing it mechanically before. Making it computerized allows you to control that behavior in a way that you cannot on a purely mechanical tractor. I know there are a lot of farmers who did dumb stuff with their mechanical tractors and that was just part of the ecosystem.

Sure. I grew up on one of those. I think the difference there is that the system is so much more complicated today, in part because of software, that it’s not always evident immediately if I make a change here, what it’s going to produce over there. When it was all mechanical, I knew, if I changed the size of the tires or the steering linkage geometry, what was going to happen. I could physically see it and the system was self-contained because it was a mechanical-only system.

I think when we’re talking about a modern piece of equipment and the complexity of the system, it’s a ripple effect. You don’t know what a change that you make over here is going to impact over there any longer. It’s not intuitively obvious to somebody who would make a change in isolation to software, for example, over here. It is a tremendously complex problem. It’s one that we’ve got a tremendously large organization that’s responsible for understanding that complete system and making sure that when the product is produced, that it is reliable and it is safe and it does meet emissions and all of those things.

I look at some of the coverage and there are farmers who are downloading software of unknown provenance that can hack around some of the restrictions. Some of that software appears to be coming from groups in Ukraine. They’re now using other software to get around the restrictions, which, in some cases, could make things even worse and lead to other unintended consequences, whereas making repair more official might actually solve some of those problems in a more straightforward way.

I think we’ve taken steps to try to help. One of those is Customer Service ADVISOR. Service ADVISOR is the John Deere software that a dealership would use in order to diagnose and troubleshoot equipment, and we’ve made a customer version of it available as well, in order to give customers some of that insight — to your point about fault codes before — into what those issues are, what they can learn about them, and how they might go about fixing them. There have been efforts underway to try to bridge some of that gap to the extent possible.

We are, though, not in a position where we would ever condone or support a third-party software being put on products of ours, because we just don’t know what the consequences of that are going to be. It’s not something that we’ve tested. We don’t know what it might make the equipment do or not do. And we don’t know what the long-term impacts of that are.

I feel like a lot of people listening to the show own a car. I’ve got a pickup truck. I can go buy a device that will upload a new tune for my Ford pickup truck’s engine. Is that something you can do to a John Deere tractor?

There are third-party outfits that will do exactly that to a John Deere engine. Yep.

But can you do that yourself?

I suspect if you had the right technical knowledge, you could probably figure out a way to do it yourself. If a third-party company figured it out, there is a way for a consumer to do it too.

Where’s the line? Where do you think your control of the system ends and the consumer’s begins? I ask that because I think that might be the most important question in computing right now, just broadly across every kind of computer in our lives. At some point, the manufacturer is like, “I’m still right here with you and I’m putting a line in front of you.” Where’s your line?

We talked about the corner cases; those use cases, I think, are the lines for us. They’re around the regulated environment from an emissions perspective. We’ve got a responsibility when we sell a piece of equipment to make sure that it’s meeting the regulatory environment that we sold it into. And then I think the other one is in and around safety-critical systems, things that can impact others in the environment; again, we have a responsibility, in a regulated fashion, to produce a product that meets the requirements the regulatory environment imposes.

Not only that, but I think there’s a societal responsibility, frankly, to make sure that the product is as safe as it can be for as long as it is in operation. Those are the areas where I think we spend a lot of time talking about what amounts to a very small part of the repair of a product. The statistics are real: 98 percent of the repairs that happen on a product can be done by a customer today. So we’re talking about a very small number of them, but they tend to be around those sensitive use cases, regulatory and safety.

Right to Repair legislation is very bipartisan. You’re talking about big commercial operations in a lot of states. It’s America. It’s apple pie and corn farmers. They have a lot of political weight and they’re able to make a very bipartisan push, which is pretty rare in this country right now. Is that a signal you see as, “Oh man, if we don’t get this right, the government is coming for our products?”

I think the government’s certainly one voice in this, and it’s stemming from feedback from some customers. Obviously you’ve done your own bit of work across the farmers in your family. So it is a topic that is being discussed for sure. And we’re all in favor of that discussion, by the way. I think that what we want to make sure of is that it’s an objective discussion. There are ramifications across all dimensions of this. We want to make sure that those are well understood, because it’s such an important topic and has significant enough consequences, so we want to make sure we get it right. The unintended consequences of this are not small. They will impact the industry, some of them in a negative way. And so we just want to make sure that the discussion is objective.

The other signal I’d ask you about is that prices of pre-computer tractors are skyrocketing. Maybe you see that a different way, but I’m looking at some coverage that says old tractors, pre-1990 tractors, are selling for double what they were a year or two ago. There are incredible price hikes on these old tractors. And that the demand is there because people don’t want computers in their tractors. Is that a market signal to you, that you should change the way your products work? Or are you saying, “Well, eventually those tractors will die and you won’t have a choice except to buy one of the new products”?

I think the benefits that accrue from technology are significant enough for consumers. We see this happening with consumers voting with their dollars, by what they purchase. Consumers are continuing to purchase higher levels of technology as we go on. So while yes, the demand for older tractors has gone up, in part it’s because the demand for tractors has gone up overall. For our own technology solutions, we’ve seen upticks in take rates year over year over year. So if people were averse to technology, I don’t think you’d see that. At some point we have to recognize that the benefits that technology brings outweigh the downsides of the technology. I think that’s just the part of the technology adoption curve that we’re all on.

That’s the same conversation around smartphones. I get it with smartphones. Everyone has them in their pocket. They collect all this personal data. You may want a gatekeeper there because you don’t have a sophisticated user base.

Your customers are very self-interested, commercial customers.

Yep.

Do you think you have a different kind of responsibility than, I don’t know, the Xbox Live team has to the Xbox Live community? In terms of data, in terms of control, in terms of relinquishing control of the product once it’s sold.

It certainly is a different market. It’s a different customer base. It’s a different clientele. To your point, they are dependent upon the product for their livelihood. So we do everything we can to make sure that product is reliable. It produces when it needs to produce in order to make sure that their businesses are productive and sustainable. I do think the biggest difference from the consumer market that you referenced to our market is the technology life cycle that we’re on.

You brought up tractors that are 20 years old that don’t have a ton of computers on-board versus what we have today. But what we have today is significantly more efficient than what we had 20 years ago. The tractors that you referenced are still in the market. People are still using them. They’re still putting them to work, productive work. In fact, on my family farm, they’re still being used for productive work. And I think that’s what’s different between the consumer market and the ag market. We don’t have a disposable product. You don’t just pick it up and throw it away. We have to be able to plan for that technology use across decades as opposed to maybe single-digit years.

In terms of the benefits of technology and selling that through, one of the other questions I got from the folks in my family was about the next thing that technology can enable. It seems like the equipment can’t physically get much bigger. The next thing to tackle is speed — making things faster for increased productivity.

Is that how you think about selling the benefits of technology — now the combine is as big as it can be, and it’s efficient at this massive scale. Is the next step to make it more efficient in terms of speed?

You’ve seen the industry trend that way. Look at planting as a great example. Ten years ago, we planted at three miles an hour. Today, we plant at 10 miles an hour. And what enabled that was technology: electric motors on row units that can react really, really quickly, that are highly controllable and can place seed really, really accurately. I think that’s the trend. Wisconsin’s a great place to talk about it. On a row crop farm, there’s a small window in the spring, a couple of weeks, where it’s optimal to get those crops in the ground. So it’s an insurance policy to be able to go faster, because the weather may not be great for both of those optimal planting weeks, and you may only have three or four days in that 10-day window in order to plant all your crops.

And speed is one way to make sure that that happens. Size and the width of the machine is the other. I would agree that we’ve gotten to the point where there’s very little opportunity left in going bigger, and so going faster and, I would argue, going more intelligently, is the way that you improve productivity in the future.
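
To see what going faster demands of those row-unit motors, a quick back-of-the-envelope calculation helps. The 6-inch seed spacing is an assumed figure for illustration:

```python
def seeds_per_second(speed_mph: float, spacing_in: float) -> float:
    """Seeds each row unit must place per second at a given ground speed."""
    inches_per_second = speed_mph * 5280 * 12 / 3600  # mph -> inches/second
    return inches_per_second / spacing_in

print(round(seeds_per_second(3, 6), 1))   # ~8.8 seeds/s at the old pace
print(round(seeds_per_second(10, 6), 1))  # ~29.3 seeds/s: why fast motors matter
```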

So we’ve talked about a huge set of responsibilities, everything from the physical mechanical design of the machinery to building cloud services, to geopolitics. What is your decision-making process? What’s your framework for how you make decisions?

I think at the root of it, we try to drive everything back to a customer and what we can do to make that customer more productive and more sustainable. And that helps us triage. Of all the great ideas that are out there, all the things that we could work on, what are the things that can move the needle for a customer in their operation as much as possible? And I think that grounding in the customer and the customer’s business is important because, fundamentally, our business is dependent upon the farmer’s business. If the farmer does well, we do well. If the farmer doesn’t do well, we don’t do well. We’re intertwined. There’s a connection there that you can’t and shouldn’t separate.

So driving our decision-making process towards having an intimate knowledge of the customer’s business and what we can do to make their business better frames everything we do.

What’s next for John Deere? What is the short term future for precision farming? Give me a five-year prediction.

I’m super excited about what we’re calling “sense and act.” “See and spray” is the first down payment on that. It’s the ability to create, in software and through electronic and mechanical devices, the human sense of sight, and then act on it. So we’re separating, in this case, weeds from useful crop, and we’re only spraying the weeds. That reduces herbicide use within a field. It reduces the cost for the farmer, input cost into their operation. It’s a win-win-win. And it is step one in the sense-and-act trajectory or sense-and-act runway that we’re on.
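
At its core, the see-and-spray decision is a thresholded per-plant classification. A toy sketch, with the detection format and confidence threshold invented for illustration:

```python
# Hypothetical per-plant detections from an on-board vision model.
detections = [
    {"x_m": 1.2, "label": "weed", "confidence": 0.93},
    {"x_m": 1.5, "label": "crop", "confidence": 0.88},
    {"x_m": 1.8, "label": "weed", "confidence": 0.55},
]
SPRAY_THRESHOLD = 0.9  # assumed confidence needed before a nozzle fires

for d in detections:
    if d["label"] == "weed" and d["confidence"] >= SPRAY_THRESHOLD:
        print(f"fire nozzle at {d['x_m']} m")          # confident weed: spray it
    else:
        print(f"hold at {d['x_m']} m ({d['label']})")  # crop or low confidence
```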

There’s a lot more opportunity for us in agriculture to do more sensing and acting, and doing that in an optimal way so that we’re not painting the same picture across a complete field, but doing it more prescriptively and acting more prescriptively in areas of a field that demand different things. I think that sense-and-act type of vision is the roadmap that we’re on. There’s a ton of opportunity in there. It is technology-intensive because you’re talking sensors, you’re talking computers, and you’re talking acting with precision. All of those things require fundamental shifts in technology from where we’re at today.

Source: https://www.theverge.com/22533735/john-deere-cto-hindman-decoder-interview-right-to-repair-tractors

What iOS 14’s Hidden ‘Approximate Location’ Feature Is (and Why It’s Important)

Source: https://www.idropnews.com/news/what-ios-14s-hidden-approximate-location-feature-is-and-why-its-important/141938/


As iOS 14 betas continue to roll out and the software’s full release grows near, more people are noticing just how revolutionary some of its privacy and security features appear to be.

There’s some exciting stuff there, but one of the most interesting – and, until recently, overlooked – features is called “Approximate Location.”

It means enormous changes for location-based services on iOS, and could affect many third-party apps in ways that aren’t entirely clear yet. Here are the significant points all iPhone users should know.

Approximate Location Will Hide Your Exact Location

Based on the details that Apple has given, Approximate Location is a new tool that can be enabled in iOS. Instead of switching off location-based data, this feature will make it…fuzzy. Apple reports that it will limit the location data sent to apps to a general 10-mile region.

You could be anywhere in that 10 miles, doing anything, but apps will only be able to tell that your device is in that specific region. This is going to change several important things about apps that want to know your location, but is a big boon for privacy while still enabling various app services.
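
For developers, the change surfaces through CoreLocation. Here is a minimal Swift sketch, assuming iOS 14's documented API, of how an app can detect that it is only receiving approximate location and ask for one-time precision when a feature truly needs it (the "NavigationPurpose" key is hypothetical and would have to exist in the app's Info.plist):

```swift
import CoreLocation

// Minimal sketch (iOS 14+): detect whether the user granted only
// approximate location, and request one-time full accuracy for a
// feature that genuinely needs it. "NavigationPurpose" is a
// hypothetical key that would need to exist in the app's Info.plist
// under NSLocationTemporaryUsageDescriptionDictionary.
final class LocationChecker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch manager.accuracyAuthorization {
        case .fullAccuracy:
            print("Receiving precise location fixes.")
        case .reducedAccuracy:
            // The app only sees a coarse region, not an exact position.
            manager.requestTemporaryFullAccuracyAuthorization(
                withPurposeKey: "NavigationPurpose")
        @unknown default:
            break
        }
    }
}
```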

Limited Data About Movement Will Be Shared

Not all the details are certain yet, but we do know that apps will be able to track when a device moves from one region to another. Apps will probably be able to extrapolate on that data and know that you were somewhere along a particular border between one region and another.

However, companies still won’t be able to tell what exactly you were doing near the border, or how long you stayed near the border before crossing over. If you cross over the same borders a lot, then apps will probably be able to make some basic guesses, like you’re commuting to work, dropping kids off at school, or visiting a preferred shopping center, but that’s basically all they will be able to tell.

Some Apps Won’t Have a Problem with This

For many third-party app services, these new 10-mile Approximate Location Regions won’t pose much of a problem. Apps that are recommending nearby restaurants you might like, parks you can visit, available hotels, and similar suggestions don’t need to know your exact location to be accurate – the 10-mile zone should work fine. The same is true of weather apps, and a variety of other services.

But not all third-party apps are interested in location data just to offer services. They also want to use it for their own ends…and that’s where things get more complicated.

Location-Based Advertising Is Up for a Challenge

A whole crowd of third-party apps want to track your exact location, not for services, but to collect important data about their users. Even common apps like Netflix tend to do this! They are tracking behavior and building user profiles that they can use for advertising purposes, or provide to advertisers interested in building these profiles themselves.

Apple has already changed other types of tracking to require permission from app users. But turning on Approximate Location is another hurdle that blocks apps from knowing exactly what users are doing. Not only does this make it more difficult to build behavioral profiles, but it also makes it hard or impossible to attribute a user visit to any specific online campaign.

There are solutions to this, but it will be a change of pace for advertisers. Apps can use Wi-Fi pings, check-in features, and purchase tracking to still get an idea of what people are doing, and where. That’ll require a lot more user involvement than before, which puts privacy in the hands of the customer.

It’s Not Clear How This Will Affect Apps That Depend on Location Tracking

Then there’s the class of apps that needs to know precise locations of users to work properly.

For example, what happens when an app wants to provide precise directions to an address after you have chosen it? Or – perhaps most likely – will alerts pop up when you try to use these services, requiring you to shut off Approximate Location to continue? We’ve already seen how this works with Apple Maps, which asks you to allow one “precise location” to help with navigation, or turn it on for the app entirely.

Then there’s the problem with ridesharing and food delivery apps. They can’t offer some of their core services with Approximate Location turned on, so we can expect warnings or lockouts from these apps as well.

But even with this micromanaging, more privacy features are probably worth it.

Apple’s Ushering in a New Era of Mobile Ads (Here’s How It Affects Us)

Source: https://www.idropnews.com/news/apples-ushering-in-a-new-era-of-mobile-ads-heres-how-it-affects-us/138841/10/


While it may have slipped the attention of many consumers, online businesses around the world were rocked by Apple’s June 2020 decision to make the IDFA fully opt-in. What does that mean exactly?

Well, IDFA stands for Identifier for Advertisers, and it’s a protocol that creates an ID tag for every user device so that device activity can be tracked by advertisers for personalized marketing and ad offers.

While IDFA made it easy to track online behavior without actually knowing a user’s private info, the practice has come under some scrutiny as the importance of online privacy continues to increase.

While Apple still provides the IDFA, it’s now entirely based on direct permission granted by users. In other words, if an app wants to track what a device is doing through an IDFA, a big pop-up will show up that says, roughly, “This app wants to track what you’re doing on this device so it can send you ads. Do you want to allow that?” Users are broadly expected to answer no.
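
For the developer side of that prompt, here is a minimal Swift sketch using the AppTrackingTransparency framework Apple shipped with this change; treat it as an illustration rather than a full integration (the app must also declare an NSUserTrackingUsageDescription string in its Info.plist):

```swift
import AppTrackingTransparency
import AdSupport

// Minimal sketch (iOS 14+): the system prompt described above is
// triggered by this call.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now is the IDFA usable.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("IDFA available: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // No per-device identifier: the IDFA comes back zeroed out.
            print("Tracking not allowed.")
        @unknown default:
            break
        }
    }
}
```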

So, what does that mean for advertisers and for your personal user experience going forward? Continue reading to learn what it means for you.


You Will Still Get Online Ads

Apple’s change is a big one for mobile advertisers, but it doesn’t mean that ads will disappear from your iPhone. Consumers will still get ads in all the usual places on their phones. That includes in their internet browsers, and in some of the apps that they use.

The big difference is that those ads will be far less likely to be 1) personalized based on what you like doing on your phone and 2) retargeted based on the products and ads you’ve looked at before. So the ads will still appear, but they will tend to be more general in nature.

Big Platforms Will Need to Get More Creative with Tracking

Without the IDFA option, advertising platforms face a need for more innovation. Advertising lives off data, and Apple’s move encourages smarter data strategies.

What’s that going to look like? We’ll have to wait and see, but one potential solution is “fingerprinting” a device, or making a device profile, a lot like marketers make buyer personas. This involves gathering ancillary data about a device’s IP addresses, location, activity periods, Bluetooth, and other features, then combining it into a profile that shows how the device is being used and what that says about the user.
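
To make the idea concrete, here is a purely hypothetical Swift sketch of the signals such a profile might combine; none of these names come from any real SDK:

```swift
// Hypothetical only: a "fingerprint" assembled from ancillary signals,
// none of which individually identifies the user.
struct DeviceFingerprint: Hashable {
    let ipAddressPrefix: String    // coarse network location, e.g. "203.0.113"
    let timeZone: String           // "Europe/Berlin"
    let screenResolution: String   // "390x844"
    let osVersion: String          // "iOS 14.2"
    let typicalActiveHours: String // "19:00-23:00"
}
```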

Another option is to develop more ways to track “events” instead of devices. An app event could be anything from logging on for the first time to reaching the first level of a game, etc. By looking at events across the entire user base, advertisers can divide users into different groups of behavior and target ads based on what that behavior says about them.
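
Here, likewise hypothetically, is what event-level reporting could look like in Swift; the event names and logging function are invented for illustration:

```swift
// Hypothetical sketch of event-level (rather than device-level)
// tracking: the app reports anonymous milestone events, and the ad
// platform groups users into behavioral cohorts server-side.
enum AppEvent: String {
    case firstLaunch, tutorialFinished, levelOneCleared, purchaseMade
}

func log(_ event: AppEvent) {
    // A real integration would call an analytics SDK here;
    // this sketch just prints the event name.
    print("event: \(event.rawValue)")
}
```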

Developers and Advertisers Will Design New Ways to Monitor Apps

Advertisers still need app data from iOS to make effective decisions about ads. Since individual device data is now largely out of reach for them, we’re going to start seeing more innovation on this side, too. Companies are going to start focusing on broad data that they do have to make plans based on what they do know – in other words, what users are doing directly on the app itself, instead of on the entire device.

Apple is helping with this, too: The company has announced a new SKAdNetwork platform that is essentially designed to replace some of what the IDFA program used to do. It doesn’t track individual device activity, but it does track overall interaction with apps, so creators will still know things like how many people are downloading apps, where they are downloading from, and what features are getting the most use, etc. The key will be finding ways to make intelligent ad decisions from that collective data, and looking for synergistic ways to share it with partners – something advertisers traditionally haven’t done much in the past.
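
On the developer side, the two SKAdNetwork calls Apple documented for iOS 14 look roughly like this in Swift; the conversion value is a coarse 6-bit signal (0 to 63) whose meaning is left to the app:

```swift
import StoreKit

// Minimal sketch of SKAdNetwork's aggregate attribution (iOS 14):
// the advertised app registers an install and reports a coarse
// conversion value; the ad network's postback carries no device ID.
func reportInstallAndConversion() {
    SKAdNetwork.registerAppForAdNetworkAttribution()
    // Encode a coarse milestone, e.g. "completed onboarding" = 1.
    SKAdNetwork.updateConversionValue(1)
}
```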

Retargeting Will Refocus on Contact Information

Retargeting is the ad tactic of showing a user products and ads they have already viewed in the past, which makes a purchase more likely. It’s a very important part of the sales process, but becomes more difficult when device activity can’t be directly monitored. However, there’s another highly traditional option for retargeting: getting a customer’s contact information. Depending on how active someone is on the Web, something like an email address or phone number can provide plenty of useful retargeting data. Expect a renewed focus on web forms and collecting contact information within apps.

Online Point of Sale Will Become Even More Important


The online shopping cart is already a locus of valuable information: Every time you add a product, look at shipping prices, abandon a shopping cart, pick a payment method, choose an address, and complete an order – all of it provides companies with data they can use for retargeting, customer profiles, personalized ads and discounts, and so on.

Nothing Apple is doing will affect online POS data, so we can expect it to become even more important. However, most POS data currently stays in house, so the big question is if – and how – large ad platforms might use it in the future. Which brings us to another important point: auctioning data.

Auctioning Mobile User Data Is Less Viable Than Ever

A big secondary market for mobile advertising is selling device data to other advertisers (it’s also technically a black market when it happens on the dark web with stolen data, but there’s a legitimate version, too). Now bids for iOS data don’t really have anywhere to go – how can you bid on a list of device use information when that data isn’t being collected anymore? And if someone is selling that data, how do you know if it’s not outdated or just fake?

These secondary auction markets and “demand-side platforms” (DSPs) have been facing pressure in recent years over fears they aren’t exactly healthy for the industry. Apple nixing the IDFA won’t end them, but it will refocus the secondary selling on top-level data (the kind we discussed in the points above) and less on more personal user data.

This Is Just the Beginning

The era of device tracking has only begun to change. Apple’s decision about IDFA was expected, and is only the beginning of the shift away from this tactic. Google is also expected to make a similar change with its own version of the technology, GAID (Google Ad Identifier). Meanwhile, major web browsers like Safari and Chrome are dropping support for third-party cookies as well.

This is great for customer privacy, which is clearly a new core concern for the big tech names. It’s also ushering in a new age of marketing where advertisers will have to grapple with unseen data – and find new ways to move ahead. In some ways, it’s an analyst’s dream come true.

Tech companies tried to help us spend less time on our phones. It didn’t work.

Last year, tech companies couldn’t get enough of letting you use their products less.

Executives at Apple and Google unveiled on-device features to help people monitor and restrict how much time they spent on their phones. Facebook and Instagram, two of the biggest time sucks on the planet, also rolled out time spent notifications and the ability to snooze their apps — new features meant to nudge people to scroll through their apps a little less mindlessly.

These companies all became fluent in the language of “time well spent,” a movement to design technology that respects users’ time and doesn’t exploit their vulnerabilities. Since the movement sprang up nearly seven years ago, it has prompted mass introspection and an ongoing debate over technology use, which people blame for a swath of societal ills including depression and suicide, diminished attention spans, and decreased productivity.

But a year after Big Tech rolled out their time-well-spent features, it doesn’t seem like they’re working: The time we spend on our devices just keeps increasing.

Fortunately, the problem might not be that bad in the first place. Though correlations exist, there’s no proven causal link between digital media usage and the myriad problems some speculate it causes.

“Every time new tech comes out, there’s a moral panic that this is going to melt our brains and destroy society,” Ethan Zuckerman, director of the Center for Civic Media at MIT, told Recode. “In almost every case, we sort of look back at these things and laugh.”

What “time well spent” has done is spurred a whole cottage industry to help people “digitally detox,” and it’s being led in part by the big tech companies responsible for — and that benefit from — our reliance on tech in the first place. As Quartz writer Simone Stolzoff put it, “‘Time well spent’ is having its Kendall Jenner Pepsi moment. What began as a social movement has become a marketing strategy.”

Politicians are also jumping on the dogpile. Sen. Josh Hawley (R-MO) has proposed a bill to reduce what some call social media addiction by banning infinite scrolling and autoplay and by automatically limiting users to spending a maximum of 30 minutes a day on each platform. The bill currently has no cosponsors and is unlikely to go to a vote, but does demonstrate that the topic is on lawmakers’ radar.

These efforts, however, have yet to dent our insatiable need for tech.

The data on device usage

By all accounts, the time we spend attached to our digital devices is growing.

American adults spent about 3 hours and 30 minutes a day using the mobile internet in 2019, an increase of about 20 minutes from a year earlier, according to measurement company Zenith. The firm expects that time to grow to over four hours in 2021. (Top smartphone users currently spend 4 hours and 30 minutes per day on those devices, according to productivity software company RescueTime, which estimates average phone usage to be 3 hours and 15 minutes per day).

We’re spending more time online because pastimes like socializing that used to happen offline are shifting online, and we’re generally ceding more of our days to digital activities.

The overall time Americans spend on various media is expected to grow to nearly 11 hours per day this year, after accounting for declines in time spent with other media like TV and newspapers that are increasingly moving online, according to Zenith. Mobile internet use is responsible for the entirety of that growth.

Nearly a third of Americans said they are online “almost constantly” in 2019, a statistic that has risen substantially across age groups since the study was conducted the year before.

Not all our online activities are on the uptick, however.

Online measurement company SimilarWeb has found that time spent with some of the most popular social media apps, like Facebook, Instagram, and Snapchat, has declined in the wake of “time well spent” efforts — though the decline could instead reflect the waning relevance of those social media behemoths. At least for now, the average amount of time on those apps is still near historic highs.

Since overall time spent online is going up, the data suggests we’re just finding other places online to spend our time, such as newer social media like TikTok or online video games.

Some have argued that sheer time spent isn’t important psychologically, but rather it’s what we’re doing with that time online. And what we’re doing is very fragmented.

Rather than use our devices continually, we tend to check them throughout the day. On average, people open their phones 58 times a day (and 30 of those times are during the workday), according to RescueTime. Most of those phone sessions are under two minutes.

Even on our phones, we don’t stick to one thing. A recent study published in the journal Human-Computer Interaction found that people switched on average from one screen activity to another every 20 seconds.

And what’s the result of all these hours of fragmented activity? Just one in 10 people RescueTime surveyed said they felt in control of how they spend their day.

What to do with our growing smartphone usage

It’s tough to separate finger-wagging judgments about tech from valid concerns about how tech could be degrading our lives. But the perception, at least, that tech is harming our lives seems to be very real.

Numerous articles instruct people on how to put down their phones. And richer Americans — including the people making the technology in the first place — are desperately trying to find ways to have their kids spend less time with screens.

MIT’s Zuckerman suggests building better “pro-civic social media,” since he thinks it’s already clear we’re going to spend lots of our time online anyway.

“I am deeply worried about the effects of the internet on democracy. On the flip side, I was deeply worried about democracy before everyone was using the internet,” he said. “What we probably have to be doing is building social media that’s good for us as a democracy.”

This social media would emphasize the best aspects of social media and would better defend against scourges like content that promotes political polarization and misinformation. He gave the example of gell.com, which uses experts to outline arguments for and against major social issues, and then encourages user participation to further develop and challenge the ideas.

Nir Eyal, author of Indistractable: How to Control Your Attention and Choose Your Life, thinks we’re overusing the language of addiction when it comes to technology usage. If we really want to limit our technology usage, he told Recode, solutions are close at hand.

“We want to think that we’re getting addicted because an addiction involves a pusher, a dealer — someone’s doing it. Whereas when we call it what it really is, which is distraction — now in the US, we don’t like to face that fact — that means we have to do something that’s no fun,” Eyal said.

Instead of blaming tech companies, he asks people, “Have you tried to turn off notifications, for God’s sake? Have you planned your day so that you don’t have all this white space where you’re free to check your phone all the time?”

For those who are addicted — a percentage he says is probably in line with the portions of the population that are addicted to anything else, like alcohol or gambling — he thinks tech companies should notify users that they’re in the top percentiles of usership and offer them resources, such as software tools and professional assistance (and his book).

In the meantime, the time we spend on our digital devices will continue to increase, and there’s still a need for conclusive research about whether that actually matters. Perhaps while we wait for clarity, we can turn off our notifications about how much time we spend on our phones.

Source: https://www.vox.com/recode/2020/1/6/21048116/tech-companies-time-well-spent-mobile-phone-usage-data

Travel Blogging

„While travel blogging is a relatively young phenomenon, it has already evolved into a mature and sophisticated business model, with participants on both sides working hard to protect and promote their brands.

Those on the industry side say there’s tangible commercial benefit, provided influencers are carefully vetted.

„If people are actively liking and commenting on influencers’ posts, it shows they’re getting inspired by the destination,“ Keiko Mastura, PR specialist at the Japan National Tourism Organization, tells CNN Travel. „We monitor comments and note when users tag other accounts or comment about the destination, suggesting they’re adding it to their virtual travel bucket lists. Someone is influential if they have above a 3.5% engagement rate.“

For some tourism outlets, bloggers offer a way to promote products that might be overlooked by more conventional channels. Even those with just 40,000 followers can make a difference. Kimron Corion, communications manager of Grenada’s Tourism Authority, says his organization has „had a lot of success engaging with micro-influencers who exposed some of our more niche offerings effectively.“

Such engagement doesn’t come cheap though. That means extra pressure in finding the right influencer to convey the relevant message — particularly when the aim is to deliver real-time social media exposure.

„We analyze each profile to make sure they’re an appropriate fit,“ says Florencia Grossi, director of international promotion for Visit Argentina. „We look for content with dynamic and interesting stories that invites followers to live the experience.“

One challenge is weeding out genuine influencers from the fake, a job that’s typically done by manually scrutinizing audience feedback for responses that betray automated followers. Bogus bloggers are another reason the market is becoming increasingly wary.“

As of April 2018, smartphone users upgraded their phones every 35 months on average.

The Silver Lining in Apple’s Very Bad iPhone News


Apple on Wednesday warned investors that its revenue for the last three months of 2018 would not live up to previous estimates, or even come particularly close. The main culprit appears to be China, where the trade war and a broader economic slowdown contributed to plummeting iPhone sales. But CEO Tim Cook’s letter to investors pointed to a secondary thread as well, one that Apple customers, environmentalists, and even the company itself should view not as a liability but an asset: People are holding onto their iPhones longer.

That’s not just in China. Cook noted that iPhone upgrades were “not as strong as we thought they would be” in developed markets as well, citing “macroeconomic conditions,” a shift in how carriers price smartphones, a strong US dollar, and temporarily discounted battery replacements. He neglected to mention the simple fact that an iPhone can perform capably for years—and consumers are finally getting wise.

As recently as 2015, smartphone users on average upgraded their phone roughly every 24 months, says Cliff Maldonado, founder of BayStreet Research, which tracks the mobile industry. As of the fourth quarter of last year, that had jumped to at least 35 months. “You’re looking at people holding onto their devices an extra year,” Maldonado says. “It’s been considerable.”

A few factors contribute to the trend, chief among them the shift from buying phones on a two-year contract—heavily subsidized by the carriers—to installment plans in which the customer pays full freight. T-Mobile introduced the practice in the US in 2014, and by 2015 it had become the norm. The full effects, though, have only kicked in more recently. People still generally pay for their smartphone over two years; once they’re paid off, though, their monthly bill suddenly drops by, say, $25.

The shift has also caused a sharp drop-off in carrier incentives. They turn out not to be worth it. “They’re actually encouraging that dynamic of holding your smartphone longer. It’s in their best interest,” Maldonado says. “It actually costs them to get you into a new phone, to do those promotions, to run the transaction and put it on their books and finance it.”

Bottom line: If your service is reliable and your iPhone still works fine, why go through the hassle?

“There’s not as many subsidies as there used to be from a carrier point of view,” Cook told CNBC Wednesday. “And where that didn’t all happen yesterday, if you’ve been out of the market for two or three years and you come back, it looks like that to you.”

Meanwhile, older iPhones work better, for longer, thanks to Apple itself. When Apple vice president Craig Federighi introduced iOS 12 in June at Apple’s Worldwide Developers Conference, he emphasized how much it improved the performance of older devices. Among the numbers he cited: The 2014 iPhone 6 Plus opens apps 40 percent faster with iOS 12 than it had with iOS 11, and its keyboard appears up to 50 percent faster than before. And while Apple’s battery scandal of a year ago was a black mark for the company, it at least reminded Apple owners that they didn’t necessarily need a new iPhone. Eligible iPhone owners found that a $29 battery replacement—it normally costs $79—made their iPhone 6 feel something close to new.

“There definitely has been a major shift in customer perception, after all the controversy,” says Kyle Wiens, founder of online repair community iFixit. “What it really did more than anything else was remind you that the battery on your phone really can be replaced. Apple successfully brainwashing the public into thinking the battery was something they never needed to think about led people to prematurely buy these devices.”

Combine all of that with the fact that new model iPhones—and Android phones for that matter—have lacked a killer feature, much less one that would inspire someone to spend $1,000 or more if they didn’t absolutely have to. “Phones used to be toys, and shiny objects,” Maldonado says. “Now they’re utilities. You’ve got to have it, and the joy of getting a new one is pretty minor. Facebook and email looks the same; the camera’s still great.”

In the near term, these dynamics aren’t ideal for Apple; its stock dropped more than 7 percent in after-hours trading following Wednesday’s news. But it’s terrific news for consumers, who have apparently realized that a smartphone does not have a two-year expiration date. That saves money in the long run. And pulling the throttle back on iPhone sales may turn out to be equally welcome news for the planet.

According to Apple’s most recent sustainability report, the manufacture of each Apple device generates on average 90 pounds of carbon emissions. Wiens suggests that the creation of each iPhone requires hundreds of pounds of raw materials.

Manufacturing electronics is environmentally intense, Wiens says. “We can’t live in a world where we’re making 3 billion new smartphones a year. We don’t have the resources for it. We have to reduce how many overall devices we’re making. There are lots of ways to do it, but it gets down to demand, and how many we’re buying. That’s not what Apple wants, but it’s what the environment needs.”

Which raises a question: Why does Apple bother extending the lives of older iPhones? The altruistic answer comes from Lisa Jackson, who oversees the company’s environmental efforts.

“We also make sure to design and build durable products that last as long as possible,” Jackson said at Apple’s September hardware event. “Because they last longer, you can keep using them. And keeping using them is the best thing for the planet.”

Given a long enough horizon, Apple may see a financial benefit from less frequent upgrades as well. An iPhone that lasts longer keeps customers in the iOS ecosystem longer. That becomes even more important as the company places greater emphasis not on hardware but on services like Apple Music. It also offers an important point of differentiation from Android, whose fragmented ecosystem means even flagship devices rarely continue to be fully supported beyond two years.

“In reality, the big picture is still very good for Apple,” Maldonado says. Compared with Android, “Apple’s in a better spot, because the phones last longer.”

That’s cold comfort today and doesn’t help a whit with China. But news that people are holding onto their iPhones longer should be taken for what it really is: A sign of progress and a win for everyone. Even Apple.

Source: https://www.wired.com/story/silver-lining-apples-very-bad-iphone-news/

What is GDPR – General Data Protection Regulation

Source: Techcrunch.com

European Union lawmakers proposed a comprehensive update to the bloc’s data protection and privacy rules in 2012.

Their aim: To take account of seismic shifts in the handling of information wrought by the rise of the digital economy in the years since the prior regime was penned — all the way back in 1995 when Yahoo was the cutting edge of online cool and cookies were still just tasty biscuits.

Here’s the EU’s executive body, the Commission, summing up the goal:

The objective of this new set of rules is to give citizens back control over their personal data, and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market which the Commission has prioritised. The reform will allow European citizens and businesses to fully benefit from the digital economy.

For an even shorter tl;dr: the EC’s theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

The General Data Protection Regulation (aka GDPR) was agreed after more than three years of negotiations between the EU’s various institutions.

It’s set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK’s new Data Protection Bill — yes, despite the fact the country is currently in the process of (br)exiting the EU, the government has committed to implementing the regulation because it needs to keep EU-UK data flowing freely in the post-brexit future). Which gives an early indication of the pulling power of GDPR.

Meanwhile businesses operating in the EU are being bombarded with ads from a freshly energized cottage industry of ‘privacy consultants’ offering to help them get ready for the new regs — in exchange for a service fee. It’s definitely a good time to be a law firm specializing in data protection.

GDPR is a significant piece of legislation whose full impact will clearly take some time to shake out. In the meanwhile, here’s our guide to the major changes incoming and some potential impacts.

Data protection + teeth

A major point of note right off the bat is that GDPR does not merely apply to EU businesses; any entities processing the personal data of EU citizens need to comply. Facebook, for example — a US company that handles massive amounts of Europeans’ personal data — is going to have to rework multiple business processes to comply with the new rules. Indeed, it’s been working on this for a long time already.

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support GDPR compliance — specifying this included “senior executives from all product teams, designers and user experience/testing executives, policy executives, legal executives and executives from each of the Facebook family of companies”.

“Dozens of people at Facebook Ireland are working full time on this effort,” it said, noting too that the data protection team at its European HQ (in Dublin, Ireland) would be growing by 250% in 2017. It also said it was in the process of hiring a “top quality data protection officer” — a position the company appears to still be taking applications for.

The new EU rules require organizations to appoint a data protection officer if they process sensitive data on a large scale (which Facebook very clearly does). Or are collecting info on many consumers — such as by performing online behavioral tracking. But, really, which online businesses aren’t doing that these days?

The extra-territorial scope of GDPR casts the European Union as a global pioneer in data protection — and some legal experts suggest the regulation will force privacy standards to rise outside the EU too.

Sure, some US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes, and treating personal data obtained from different geographies differently, i.e. rather than streamlining everything under a GDPR compliant process. But doing so means managing multiple data regimes. And at very least runs the risk of bad PR if you’re outed as deliberately offering a lower privacy standard to your home users vs customers abroad.

Ultimately, it may be easier (and less risky) for businesses to treat GDPR as the new ‘gold standard’ for how they handle all personal data, regardless of where it comes from.

And while not every company harvests Facebook levels of personal data, almost every company harvests some personal data. So for those with customers in the EU GDPR cannot be ignored. At very least businesses will need to carry out a data audit to understand their risks and liabilities.

Privacy experts suggest that the really big change here is around enforcement. Because while the EU has had long established data protection standards and rules — and treats privacy as a fundamental right — its regulators have lacked the teeth to command compliance.

But now, under GDPR, financial penalties for data protection violations step up massively.

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).
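
As a worked example of those ceilings, here is a small Swift sketch (amounts in euros):

```swift
// Worked example of GDPR's tiered fine ceilings.
// Upper tier: the greater of 4% of global annual turnover or EUR 20M;
// lower tier: the greater of 2% or EUR 10M.
func maxFine(turnover: Double, upperTier: Bool) -> Double {
    let rate = upperTier ? 0.04 : 0.02
    let floor = upperTier ? 20_000_000.0 : 10_000_000.0
    return max(rate * turnover, floor)
}

// A firm with EUR 2B turnover: upper-tier ceiling is EUR 80M (4% > EUR 20M);
// a firm with EUR 100M turnover is still exposed to the full EUR 20M floor.
print(maxFine(turnover: 2_000_000_000, upperTier: true)) // 80000000.0
print(maxFine(turnover: 100_000_000, upperTier: true))   // 20000000.0
```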

This really is a massive change. Because while data protection agencies (DPAs) in different EU Member States can impose financial penalties for breaches of existing data laws these fines are relatively small — especially set against the revenues of the private sector entities that are getting sanctioned.

In the UK, for example, the Information Commissioner’s Office (ICO) can currently impose a maximum fine of just £500,000. Compare that to the annual revenue of tech giant Google (~$90BN) and you can see why a much larger stick is needed to police data processors.

It’s not necessarily the case that individual EU Member States are getting stronger privacy laws as a consequence of GDPR (in some instances countries have arguably had higher standards in their domestic law). But the beefing up of enforcement that’s baked into the new regime means there’s a better opportunity for DPAs to start to bark and bite like proper watchdogs.

GDPR inflating the financial risks around handling personal data should naturally drive up standards — because privacy laws are suddenly a whole lot more costly to ignore.

More types of personal data that are hot to handle

So what is personal data under GDPR? It’s any information relating to an identified or identifiable person (in regulatorspeak people are known as ‘data subjects’).

While ‘processing’ can mean any operation performed on personal data — from storing it to structuring it to feeding it to your AI models. (GDPR also includes some provisions specifically related to decisions generated as a result of automated data processing but more on that below).

A new provision concerns children’s personal data — with the regulation setting a 16-year-old age limit on kids’ ability to consent to their data being processed. However individual Member States can choose (and some have) to derogate from this by writing a lower age limit into their laws.

GDPR sets a hard cap on that derogation at 13 years old — making that the de facto minimum age for children to be able to sign up to digital services. So the impact on teens’ social media habits seems likely to be relatively limited.

The new rules generally expand the definition of personal data — so it can include information such as location data, online identifiers (such as IP addresses) and other metadata. So again, this means businesses really need to conduct an audit to identify all the types of personal data they hold. Ignorance is not compliance.

GDPR also encourages the use of pseudonymization — such as, for example, encrypting personal data and storing the encryption key separately and securely — as a pro-privacy, pro-security technique that can help minimize the risks of processing personal data. Although pseudonymized data is likely to still be considered personal data; certainly where a risk of reidentification remains. So it does not get a general pass from requirements under the regulation.
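
Here is a minimal sketch of that pattern in Swift, using Apple's CryptoKit; the point is that the ciphertext can sit in the main database while the key lives in a separate, secured store:

```swift
import CryptoKit
import Foundation

// Sketch of the pseudonymization pattern described above: encrypt a
// personal field and hold the key elsewhere. Ciphertext alone no longer
// identifies anyone; ciphertext plus key still does, which is why
// pseudonymized data usually remains "personal data" under GDPR.
let key = SymmetricKey(size: .bits256) // belongs in a separate, secure store

do {
    let plaintext = Data("Jane Doe".utf8)
    let sealed = try AES.GCM.seal(plaintext, using: key)
    let stored = sealed.combined! // what the main database would hold

    // Re-identification is only possible with the separately held key:
    let box = try AES.GCM.SealedBox(combined: stored)
    let recovered = try AES.GCM.open(box, using: key)
    print(String(decoding: recovered, as: UTF8.self)) // "Jane Doe"
} catch {
    print("Crypto error: \(error)")
}
```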

Data has to be rendered truly anonymous to be outside the scope of the regulation. (And given how often ‘anonymized’ data-sets have been shown to be re-identifiable, relying on any anonymizing process to be robust enough to have zero risk of re-identification seems, well, risky.)

To be clear, given GDPR’s running emphasis on data protection via data security, it is implicitly encouraging the use of encryption above and beyond a risk-reduction technique — i.e. as a way for data controllers to fulfill the regulation’s wider requirement to use “appropriate technical and organisational measures” proportionate to the risk of the personal data they are processing.

The incoming data protection rules apply to both data controllers (i.e. entities that determine the purpose and means of processing personal data) and data processors (entities that are responsible for processing data on behalf of a data controller — aka subcontractors).

Indeed, data processors have some direct compliance obligations under GDPR, and can also be held equally responsible for data violations, with individuals able to bring compensation claims directly against them, and DPAs able to hand them fines or other sanctions.

So the intent of the regulation is that there be no diminishing of responsibility down the chain of data handling subcontractors. GDPR aims to have every link in the processing chain be a robust one.

For companies that rely on a lot of subcontractors to handle data operations on their behalf there’s clearly a lot of risk assessment work to be done.

As noted above, there is a degree of leeway for EU Member States in how they implement some parts of the regulation (such as with the age of data consent for kids).

Consumer protection groups are calling for the UK government to include an optional GDPR provision on collective data redress in its DP bill, for example — a call the government has so far rebuffed.

But the wider aim is for the regulation to harmonize as much as possible data protection rules across all Member States to reduce the regulatory burden on digital businesses trading around the bloc.

On data redress, European privacy campaigner Max Schrems — most famous for his legal challenge to US government mass surveillance practices that resulted in a 15-year-old data transfer arrangement between the EU and US being struck down in 2015 — is currently running a crowdfunding campaign to set up a not-for-profit privacy enforcement organization to take advantage of the new rules and pursue strategic litigation on commercial privacy issues.

Schrems argues it’s simply not viable for individuals to take big tech giants to court to try to enforce their privacy rights, so he thinks there’s a gap in the regulatory landscape for an expert organization to work on EU citizens’ behalf. Not just pursuing strategic litigation in the public interest but also promoting industry best practice.

The proposed data redress body — called noyb; short for: ‘none of your business’ — is being made possible because GDPR allows for collective enforcement of individuals’ data rights. And that provision could be crucial in spinning up a centre of enforcement gravity around the law. Because despite the position and role of DPAs being strengthened by GDPR, these bodies will still inevitably have limited resources vs the scope of the oversight task at hand.

Some may also lack the appetite to take on a fully fanged watchdog role. So campaigning consumer and privacy groups could certainly help pick up any slack.

Privacy by design and privacy by default

Another major change incoming via GDPR is ‘privacy by design’ no longer being just a nice idea; privacy by design and privacy by default become firm legal requirements.

This means there’s a requirement on data controllers to minimize processing of personal data — limiting activity to only what’s necessary for a specific purpose, carrying out privacy impact assessments and maintaining up-to-date records to prove out their compliance.

Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable. (And we’ve sure seen a whole lot of those hellish things in tech.) The core idea is that consent should be an ongoing, actively managed process; not a one-off rights grab.

As the UK’s ICO tells it, consent under GDPR for processing personal data means offering individuals “genuine choice and control” (for sensitive personal data the law requires a higher standard still — of explicit consent).

There are other legal bases for processing personal data under GDPR — such as contractual necessity; or compliance with a legal obligation under EU or Member State law; or for tasks carried out in the public interest — so consent is not always necessary in order to process someone’s personal data. But there must always be an appropriate legal basis for each processing operation.
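
Modeled as a sketch in Swift, the bases named here look like this (Article 6 of the GDPR also lists others, such as legitimate interests and vital interests):

```swift
// The lawful bases for processing named in the article; every
// processing operation must be able to point to one.
enum LawfulBasis {
    case consent
    case contractualNecessity
    case legalObligation
    case publicInterestTask
}

struct ProcessingOperation {
    let purpose: String    // must be specific, not "we may use your data..."
    let basis: LawfulBasis // no basis, no lawful processing
}
```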

Transparency is another major obligation under GDPR, which expands the notion that personal data must be lawfully and fairly processed to include a third principle of accountability. Hence the emphasis on data controllers needing to clearly communicate with data subjects — such as by informing them of the specific purpose of the data processing.

The obligation on data handlers to maintain scrupulous records of what information they hold, what they are doing with it, and how they are legally processing it, is also about being able to demonstrate compliance with GDPR’s data processing principles.

But — on the plus side for data controllers — GDPR removes the requirement to submit notifications to local DPAs about data processing activities. Instead, organizations must maintain detailed internal records — which a supervisory authority can always ask to see.

It’s also worth noting that companies processing data across borders in the EU may face scrutiny from DPAs in different Member States if they have users there (and are processing their personal data).

Although the GDPR sets out a so-called ‘one-stop-shop’ principle — that there should be a “lead” DPA to co-ordinate supervision between any “concerned” DPAs — this does not mean that, once it applies, a cross-EU-border operator like Facebook is only going to be answerable to the concerns of the Irish DPA.

Indeed, Facebook’s tactic of only claiming to be under the jurisdiction of a single EU DPA looks to be on borrowed time. And the one-stop-shop provision in the GDPR seems more about creating a co-operation mechanism to allow multiple DPAs to work together in instances where they have joint concerns, rather than offering a way for multinationals to go ‘forum shopping’ — which the regulation does not permit (per WP29 guidance).

Another change: Privacy policies that contain vague phrases like ‘We may use your personal data to develop new services’ or ‘We may use your personal data for research purposes’ will not pass muster under the new regime. So a wholesale rewriting of vague and/or confusingly worded T&Cs is something Europeans can look forward to this year.

Add to that, any changes to privacy policies must be clearly communicated to the user on an ongoing basis. Which means no more stale references in the privacy statement telling users to ‘regularly check for changes or updates’ — that just won’t be workable.

The onus is firmly on the data controller to keep the data subject fully informed of what is being done with their information. (Which almost implies that good data protection practice could end up tasting a bit like spam, from a user PoV.)

The overall intent behind GDPR is to inculcate an industry-wide shift in perspective regarding who ‘owns’ user data — disabusing companies of the notion that other people’s personal information belongs to them just because it happens to be sitting on their servers.

“Organizations should acknowledge they don’t exist to process personal data but they process personal data to do business,” is how analyst Gartner research director Bart Willemsen sums this up. “Where there is a reason to process the data, there is no problem. Where the reason ends, the processing should, too.”

The data protection officer (DPO) role that GDPR brings in as a requirement for many data handlers is intended to help them ensure compliance.

This officer, who must report to the highest level of management, is intended to operate independently within the organization, with warnings to avoid an internal appointment that could generate a conflict of interests.

Which types of organizations face the greatest liability risks under GDPR? “Those who deliberately seem to think privacy protection rights is inferior to business interest,” says Willemsen, adding: “A recent example would be Uber, regulated by the FTC and sanctioned to undergo 20 years of auditing. That may hurt perhaps similar, or even more, than a one-time financial sanction.”

“Eventually, the GDPR is like a speed limit: It’s there not to make money off of those who speed, but to prevent people from speeding excessively, as that prevents (privacy) accidents from happening,” he adds.

Another right to be forgotten

Under GDPR, people who have consented to their personal data being processed also have a suite of associated rights — including the right to access data held about them (a copy of the data must be provided to them free of charge, typically within a month of a request); the right to request rectification of incomplete or inaccurate personal data; the right to have their data deleted (another so-called ‘right to be forgotten’ — with some exemptions, such as for exercising freedom of expression and freedom of information); the right to restrict processing; the right to data portability (where relevant, a data subject’s personal data must be provided free of charge and in a structured, commonly used and machine readable form).

All these rights make it essential for organizations that process personal data to have systems in place which enable them to identify, access, edit and delete individual user data — and be able to perform these operations quickly, with a general 30 day time-limit for responding to individual rights requests.
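
As a sketch of what this implies for a service's internals, here is a hypothetical Swift interface; the method names are invented, but each maps to one of the rights above:

```swift
import Foundation

// Hypothetical interface: the operations GDPR's individual rights
// imply, each generally to be honored within about 30 days.
protocol DataSubjectRights {
    func exportData(for userID: String) throws -> Data // access + portability
    func rectify(userID: String, field: String, newValue: String) throws
    func erase(userID: String) throws                  // right to be forgotten
    func restrictProcessing(for userID: String) throws
    func withdrawConsent(for userID: String, purpose: String) throws
}
```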

GDPR also gives people who have consented to their data being processed the right to withdraw consent at any time. Let that one sink in.

Data controllers are also required to inform users about this right — and offer easy ways for them to withdraw consent. So no, you can’t bury a ‘revoke consent’ option in tiny lettering, five sub-menus deep. Nor can WhatsApp offer any more time-limited opt-outs for sharing user data with its parent multinational, Facebook. Users will have the right to change their mind whenever they like.

The EU lawmakers’ hope is that this suite of rights for consenting consumers will encourage respectful use of their data — given that, well, if you annoy consumers they can just tell you to sling yer hook and ask for a copy of their data to plug into your rival service to boot. So we’re back to that fostering trust idea.

Add in the ability for third party organizations to use GDPR’s provision for collective enforcement of individual data rights and there’s potential for bad actors and bad practice to become the target for some creative PR stunts that harness the power of collective action — like, say, a sudden flood of requests for a company to delete user data.

Data rights and privacy issues are certainly going to be in the news a whole lot more.

Getting serious about data breaches

But wait, there’s more! Another major change under GDPR relates to security incidents — aka data breaches (something else we’ve seen an awful, awful lot of in recent years) — with the regulation doing what the US still hasn’t been able to: Bringing in a universal standard for data breach disclosures.

GDPR requires that data controllers report any security incidents where personal data has been lost, stolen or otherwise accessed by unauthorized third parties to their DPA within 72 hours of them becoming aware of it. Yes, 72 hours. Not the best part of a year, like er Uber.

If a data breach is likely to result in a “high risk of adversely affecting individuals’ rights and freedoms” the regulation also implies you should ‘fess up even sooner than that — without “undue delay”.

Only in instances where a data controller assesses that a breach is unlikely to result in a risk to the rights and freedoms of “natural persons” are they exempt from the breach disclosure requirement (though they still need to document the incident internally, and record their reason for not informing a DPA in a document that DPAs can always ask to see).
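
Condensed into decision logic, the disclosure rules work out to something like this Swift sketch (a simplification, not legal advice):

```swift
// Simplified decision sketch: what the disclosure rules above
// require for a given risk assessment.
enum BreachRisk { case unlikely, likely, high }

func breachActions(for risk: BreachRisk) -> [String] {
    var actions = ["document the incident internally"]
    switch risk {
    case .unlikely:
        actions.append("record the reason for not notifying a DPA")
    case .likely:
        actions.append("notify the DPA within 72 hours")
    case .high:
        actions.append("notify the DPA without undue delay")
        actions.append("inform the affected individuals")
    }
    return actions
}
```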

“You should ensure you have robust breach detection, investigation and internal reporting procedures in place,” is the ICO’s guidance on this. “This will facilitate decision-making about whether or not you need to notify the relevant supervisory authority and the affected individuals.”

The new rules generally put strong emphasis on data security and on the need for data controllers to ensure that personal data is only processed in a manner that ensures it is safeguarded.

Here again, GDPR’s requirements are backed up by the risk of supersized fines. So suddenly sloppy security could cost your business big — not only in reputation terms, as now, but on the bottom line too. So it really must be a C-suite concern going forward.

Nor is subcontracting a way to shirk your data security obligations. Quite the opposite. Having a written contract in place between a data controller and a data processor was a requirement before GDPR but contract requirements are wider now and there are some specific terms that must be included in the contract, as a minimum.

Breach reporting requirements must also be set out in the contract between processor and controller. If a data controller is using a data processor and it’s the processor that suffers a breach, they’re required to inform the controller as soon as they become aware. The controller then has the same disclosure obligations as per usual.

Essentially, data controllers remain liable for their own compliance with GDPR. And the ICO warns they must only appoint processors who can provide “sufficient guarantees” that the regulatory requirements will be met and the rights of data subjects protected.

tl;dr, be careful who and how you subcontract.

Right to human review for some AI decisions

Article 22 of GDPR places certain restrictions on entirely automated decisions based on profiling individuals — but only in instances where these human-less acts have a legal or similarly significant effect on the people involved.

There are also some exemptions to the restrictions — where automated processing is necessary for entering into (or performance of) a contract between an organization and the individual; or where it’s authorized by law (e.g. for the purposes of detecting fraud or tax evasion); or where an individual has explicitly consented to the processing.

In its guidance, the ICO specifies that the restriction only applies where the decision has a “serious negative impact on an individual”.

Suggested examples of the types of AI-only decisions that will face restrictions are the automatic refusal of an online credit application or e-recruiting decisions made without human intervention.

The provision on automated decisions is not a new right, having been brought over from the 1995 data protection directive. But it has attracted fresh attention — given the rampant rise of machine learning technology — as a potential route for GDPR to place a check on the power of AI blackboxes to determine the trajectory of humankind.

The real-world impact will probably be rather more prosaic, though. And experts suggest it does not seem likely that the regulation, as drafted, equates to a right for people to be given detailed explanations of how algorithms work.

Though as AI proliferates and touches more and more decisions, and as its impacts on people and society become ever more evident, pressure may well grow for proper regulatory oversight of algorithmic blackboxes.

In the meanwhile, what GDPR does in instances where restrictions apply to automated decisions is require data controllers to provide some information to individuals about the logic of an automated decision.

They are also obliged to take steps to prevent errors, bias and discrimination. So there’s a whiff of algorithmic accountability. Though it may well take court and regulatory judgements to determine how stiff those steps need to be in practice.

Individuals do also have a right to challenge and request a (human) review of an automated decision in the restricted class.

Here again the intention is to help people understand how their data is being used. And to offer a degree of protection (in the form of a manual review) if a person feels unfairly and harmfully judged by an AI process.

The regulation also places some restrictions on the practice of using data to profile individuals if the data itself is sensitive data — e.g. health data, political belief, religious affiliation etc — requiring explicit consent for doing so. Or else that the processing is necessary for substantial public interest reasons (and lies within EU or Member State law).

While profiling based on other types of personal data does not require obtaining consent from the individuals concerned, it still needs a legal basis and there is still a transparency requirement — which means service providers will need to inform users they are being profiled, and explain what it means for them.

And people also always have the right to object to profiling activity based on their personal data.

Source: https://techcrunch.com/2018/01/20/wtf-is-gdpr/

Google introduces an ad blocker to Chrome – Filtering – Censorship?


Google will introduce an ad blocker to Chrome early next year and is telling publishers to get ready.

The warning is meant to let websites assess their ads and strip any particularly disruptive ones from their pages. That’s because Chrome’s ad blocker won’t block all ads from the web. Instead, it’ll only block ads on pages that are determined to have too many annoying or intrusive advertisements, like videos that autoplay with sound or interstitials that take up the entire screen.

Sridhar Ramaswamy, the executive in charge of Google’s ads, writes in a blog post that even ads “owned or served by Google” will be blocked on pages that don’t meet Chrome’s guidelines.

Instead of an ad “blocker,” Google is referring to the feature as an ad “filter,” according to The Wall Street Journal, since it will still allow ads to be displayed on pages that meet the right requirements. The blocker will work on both desktop and mobile.
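
The page-level rule can be summarized in a few lines of Swift; the types are hypothetical and exist purely to illustrate the behavior:

```swift
// Hypothetical sketch: filtering is per page, not per ad network.
// A page that violates the Better Ads Standards loses all its ads,
// including Google's own.
struct Ad { let network: String }
struct Page { let ads: [Ad]; let violatesBetterAdsStandards: Bool }

func visibleAds(on page: Page) -> [Ad] {
    page.violatesBetterAdsStandards ? [] : page.ads
}
```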

Google is providing a tool that publishers can run to find out if their sites’ ads are in violation and will be blocked in Chrome. Unacceptable ads are being determined by a group called the Coalition for Better Ads, which includes Google, Facebook, News Corp, and The Washington Post as members.

Google shows publishers which of their ads are considered disruptive.

The feature is certain to be controversial. On one hand, there are huge benefits for both consumers and publishers. But on the other, it gives Google immense power over what the web looks like, partly in the name of protecting its own revenue.

First, the benefits: bad ads slow down the web, make the web hard and annoying to browse, and have ultimately driven consumers to install ad blockers that remove all advertisements no matter what. A world where that continues and most users block all ads looks almost apocalyptic for publishers, since nearly all of your favorite websites rely on ads to stay afloat. (The Verge, as you have likely noticed, included.)

By implementing a limited blocking tool, Google can limit the spread of wholesale ad blocking, which ultimately benefits everyone. Users get a better web experience. And publishers get to continue using the ad model that’s served the web well for decades — though they may lose some valuable ad units in the process.

There’s also a good argument to be made that stripping out irritating ads is no different than blocking pop ups, which web browsers have done for years, as a way to improve the experience for consumers.

But there are drawbacks to building an ad blocker into Chrome: most notably, the amount of power it gives Google. Ultimately, it means Google gets to decide what qualifies as an acceptable ad (though it’s basing this on standards set collectively by the Coalition for Better Ads). That’s a good thing if you trust Google to remain benign and act in everyone’s interests. But keep in mind that Google is, at its core, an ad company. Nearly 89 percent of its revenue comes from displaying ads.

The Chrome ad blocker doesn’t just help publishers, it also helps Google maintain its dominance. And it advantages Google’s own ad units, which, it’s safe to say, will not be in violation of the bad ad rules.

This leaves publishers with fewer options to monetize their sites. And given that Chrome represents more than half of all web browsing on desktop and mobile, publishers will be hard pressed not to comply.

Google will also include an option for visitors to pay websites that they’re blocking ads on, through a program it’s calling Funding Choices. Publishers will have to enable support for this feature individually. But Google already tested a similar feature for more than two years, and it never really caught on. So it’s hard to imagine publishers seeing what’s essentially a voluntary tipping model as a viable alternative to ads.

Ramaswamy says that the goal of Chrome’s ad blocker is to make online ads better. “We believe these changes will ensure all content creators, big and small, can continue to have a sustainable way to fund their work with online advertising,” he writes.

And what Ramaswamy says is probably true: Chrome’s ad blocker likely will clean up the web and result in a better browsing experience. It just does that by giving a single advertising juggernaut a whole lot of say over what’s good and bad.

https://www.theverge.com/2017/6/1/15726778/chrome-ad-blocker-early-2018-announced-google

9 Steps to Get Millions of Views on Your YouTube Channel


YouTube is the second largest search engine on the planet, and holds the top spot as the largest video network in existence.

The video site continues to grow more pervasive with the maturation of smartphone technology. Today, half of YouTube video views stem from mobile devices.

For this reason, and many others, YouTube is a master at reaching across generational boundaries to engage members of Gen X, Gen Y and Gen Z. For example, YouTube currently reaches more 18-34 and 18-49 year-olds than any U.S. cable network currently broadcasting.

Because of the platform’s popularity, influencers have emerged from the network and continually leave lasting impressions on their dedicated viewers. Studies suggest that recommendations from influencers are trusted 92% more than recommendations from celebrities or advertisements.

The trust factor brought forth by influencers is one of the most notable reasons as to why influencer marketing is so effective.

It’s not as simple as it looks

Leveraging influencers on YouTube is not as simple as it sounds, because there are many performance and brand risks associated with YouTubers that need to be managed in order to deliver rockstar results.

YouTubers are legitimate masters of their craft and make their living by presenting themselves authentically. This means that brand interference regarding their voice or image is not normally welcomed.

Despite the challenges, brands and YouTubers can get along famously when the right partnership is forged.

The balancing act

By way of example, Google recently recruited famed YouTube influencer Lewis Hilsenteger from the channel Unbox Therapy to help make some noise about Android Pay.

The video depicted Lewis travelling throughout New York City, visiting destinations that accept Android Pay to prove that you could get by on it alone. This is a prime example of recruiting an influencer who expresses a brand’s message while maintaining their authenticity.

The video generated 1.7 million views while showing off the real-world capabilities of Android Pay.                     


No doubt successful collaborations like these, and the significant revenue potential, spurred Google’s recent acquisition of influencer marketplace Famebit.

The 9 key steps to get millions of views on YouTube

Below you’ll find nine steps that fast-casual restaurant chain Qdoba Mexican Eats took when engaging the YouTube audience for the first time.

The results were phenomenal (and, in full disclosure, were directed and executed by digital marketing agency Evolve!, Inc.).

If you’re planning on diving into YouTube to help grow your business, use this campaign as a model – it delivered 3 million views, 84K social engagements, and 200M potential impressions, all while adhering to strict brand guidelines and beating aggressive price targets.

1. Set your goals and success criteria

As with any marketing campaign, align your influencer marketing campaign with your overall marketing and sales goals.

Define success using quality metrics, such as messaging and how the brand is portrayed, as well as quantifiable targets such as cost per video view, average length of video view, number of targeted views, and cost per conversion.

2. Set a budget

The cost per view charged for YouTube sponsorships varies WIDELY, depending on factors such as audience size, reach, demographics, engagement, industry vertical and genre, the type of sponsorship and length of integration, the YouTuber’s desire to work with a particular brand, and whether the talent is represented by an agency.

A good rule of thumb is to target a cost per view (CPV) of $0.04 to $0.07 for video integrations and $0.08 to $0.15 for dedicated videos.
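To make the budget math concrete, here is a minimal back-of-envelope sketch in Python of how many views a given spend could buy at the CPV ranges above. The budget figure is purely hypothetical; plug in your own numbers.

```python
# Back-of-envelope campaign math using the rule-of-thumb CPV ranges above.

def views_for_budget(budget_usd, cpv_low, cpv_high):
    """Return the (pessimistic, optimistic) view counts a budget can buy."""
    return budget_usd / cpv_high, budget_usd / cpv_low

budget = 50_000  # hypothetical sponsorship budget in USD

integration = views_for_budget(budget, 0.04, 0.07)  # video integrations
dedicated = views_for_budget(budget, 0.08, 0.15)    # dedicated videos

print(f"Integrations: {integration[0]:,.0f} to {integration[1]:,.0f} views")
print(f"Dedicated videos: {dedicated[0]:,.0f} to {dedicated[1]:,.0f} views")
```

At these rates, a $50K spend on integrations would pencil out to roughly 700K to 1.25M views, which is useful context when setting the view targets from step 1.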

Brands should also set aside budget for content generation (landing pages, blog posts, prizes and/or promotions), analytics software for tracking, a promotional ad budget, and manpower.

3. Create a theme and campaign messaging that supports your goals

It can be something as simple as capturing people’s excitement as they try delicious Qdoba entrees for the first time (#QdobaUnbox), or reveling in the occasions when More is Better (#MoreIsBetter), including indulging in Qdoba’s generous array of delicious toppings (#MoreFlavorIsBetter).

Evolve even created a contest celebrating Qdoba’s key differentiating factor: Free Guacamole (#FreeGuac).

Develop brand- and campaign-specific messaging, but leave ample room for YouTubers to exercise their creative license.


Remember, integrations are NOT advertisements.  Videos that come off as too commercial tend to get panned in the comments and generate lower-than-expected view counts.

4. Establish your selection criteria

What constitutes a brand match?

Start with genres, industries and channel demographics, including age, sex and geography.

Does the campaign theme fit their interests? Do they create content that would resonate with or offend your audience?

Identify influencers who meet these criteria and fall within the audience size you are looking to engage, and begin the outreach process.

5. Develop a pitch letter

Be clear about the campaign requirements, and set expectations: Are you looking for an integration or a dedicated video?  What four or five key messages do YouTubers need to address in the video?  And what is your timeline?

In short: what are the promotional requirements, and what additional information do you need from them when they respond to your proposal?

But bear in mind that people who have built sizeable, engaged followings can afford to be choosy about which brands they want to work with. You may want to excite them with something that’s unique about your brand.

Qdoba offered vloggers a summer of free food, in addition to the paid sponsorship.

6. Recruit enthusiastic YouTubers

This is perhaps the most time-consuming step, and the most critical to the success of your campaign.

You know you’ve hit gold when you’ve identified YouTubers who meet your brand criteria, like your brand, and offer creative story lines (and sometimes bonus promotions) in their responses.

There are 3 routes to recruit YouTubers:

  • Reach out directly to the people you want to work with via the email listed on their YouTube channel
  • Work with talent agencies you know and trust
  • Solicit proposals through influencer marketplaces like Famebit, Grapevine Logic or Reelio


7. Spell out everything in the contract

Flesh out the creative before finalizing the contract, and include the type of integration, key messages, project timeline, the review process and video promotions.

YouTubers tend NOT to want the brand to weigh in on things like the video title or the storyline outside the integration. By the same token, it is vital to be somewhat flexible when working with influencers on the creative direction of the content. These folks have built substantial followings that are enchanted by their unique voice. Setting too rigid a structure, outside the norm for influencers, could result in a deal going south or a video not receiving the attention it deserves.

8. A/B test everything. Measure, tweak and repeat

Test various genres, campaign themes, messaging, calls to action, and amplification strategies. At this stage, we generally prefer to partner with YouTubers who have small but engaged audiences. This lets you get the most bang for your buck while minimizing potential losses on creative that does not resonate with audiences.


Measure campaign performance, focusing on actual video views, social engagements and, where relevant, cost per conversion. Pivot as needed and update projected outcomes.

We use several tools simultaneously, including Simply Measured, to monitor multiple channels and gain the clearest, most comprehensive picture possible.
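As an illustration of the per-video math involved, here is a minimal sketch. The video names and figures are invented; in practice, analytics tools like those mentioned above would supply the real numbers.

```python
# Hypothetical per-video results; all names and numbers are illustrative.
videos = [
    {"name": "integration_A", "cost": 8_000, "views": 160_000, "conversions": 240},
    {"name": "integration_B", "cost": 8_000, "views": 95_000, "conversions": 310},
]

for v in videos:
    v["cpv"] = v["cost"] / v["views"]        # cost per view
    v["cpa"] = v["cost"] / v["conversions"]  # cost per conversion
    print(f"{v['name']}: CPV ${v['cpv']:.3f}, cost per conversion ${v['cpa']:.2f}")

# Which creative to scale depends on the goal: cheap reach vs. cheap conversions.
best_reach = min(videos, key=lambda v: v["cpv"])
best_conversions = min(videos, key=lambda v: v["cpa"])
print(f"Scale for reach: {best_reach['name']}; for conversions: {best_conversions['name']}")
```

Note that the two winners can differ, which is why step 1’s success criteria matter: a video with a higher CPV can still be the better buy if it converts more efficiently.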


9. Scale!

Once the campaign has been optimized, turn up the volume. Contract larger YouTube channels, and consider using contests or launching several videos at once to support product launches.

These introduce an added layer of complexity, because they need to adhere to strict timelines and you may need to manage multiple videos at once. On the flip side, they generally produce much more significant results, so while your efforts become more intricate, they also become much more fruitful.

Qdoba A/B tested several concepts before running a two-week #FreeGuac campaign, which drove 2.4 million video views. Participating YouTube vloggers invited their viewers to enter a scavenger hunt contest for the chance to win cash prizes, free food and cool swag.

Contests like these are ideal for scaling a campaign: almost any marketing element that engages an audience on a participatory level will garner more attention than content that is merely observed and shared. The contest ultimately resulted in Qdoba collecting over 10K submissions.

Wrap

As video continues to grow, YouTube is quickly becoming the premier influencer marketing channel. The power of video content is unmatched by earlier formats, and influencer marketing, when managed properly, can permeate and engage an audience in unparalleled fashion.

The most challenging aspect of this discipline is that the rules of engagement are constantly in flux. For the best results, it is advisable to collaborate with specialty digital marketing agencies that work day in and day out crafting influencer strategies on YouTube that resonate, sell, and make a brand’s efforts worthwhile.


Machines are becoming smarter marketers


Marketing is only helpful when it’s meeting a need. It sounds simple, but those needs can be really tough to parse. Like any consumer, my needs evolve every day, if not every minute. I won’t stand for poorly targeted ads or messages that are irrelevant to me.

I work in marketing technology, and this industry has been talking about data-driven personalization for years. We’ve made a lot of progress, but we’re only just beginning to realize the potential of machine learning to match goods and services with a particular person in a specific situation.

Machines are changing how marketing is done. I’m not just talking about workflow automation or customer service bots. I’m talking about software that can help brands understand, meet, and even predict the subtlest of consumer needs.

It’s a new phase that I think of as Marketing 3.0. The 1.0 version, marketing in its early 20th century form, involved selling products to people who had demonstrated a need. The 1950s saw the rise of Marketing 2.0: ad men who shaped consumer desires to sell products. Machine learning allows marketers to move beyond this model and return to the original purpose of marketing, while adding speed and scale.

Marketing 1.0: Meeting needs as expressed
Marketing 2.0: Creating needs, then meeting them
Marketing 3.0: Machines analyzing needs, then meeting them

Marketing 3.0 uses machine learning to match product and consumer faster, more precisely, and in the right context; and to identify people who have an implied rather than overtly demonstrated need. Machines learn from a large pool of real-world examples, so they can predict future intent by observing past behavior. Marketers don’t have to comprehend the precise patterns that emerge from massive amounts of data or map out the rules that determine people’s behaviors.

In other words, machine learning shifts the role of the marketer from trying to manipulate customers’ needs to meeting the needs they actually have at a given moment.

Think about a BMW dealership looking to sell more of a particular model. It can use machine learning to identify indicators among people who bought a 5 Series in the past year: they researched similar cars like the Audi A6 and Mercedes E-Class, they asked about miles per gallon, and they shared similar demographic traits.
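As a toy illustration of that idea (not anything BMW actually runs), here is a minimal sketch using scikit-learn. The features mirror the indicators above, and the training data is entirely invented.

```python
# A minimal sketch: learn purchase intent from past behavior.
# Feature columns: [researched rival models, asked about mpg, demographic match]
from sklearn.linear_model import LogisticRegression

X = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = bought a 5 Series in the past year (invented labels)

model = LogisticRegression().fit(X, y)

# Score a new shopper who researched rivals and matches the demographic profile.
prospect = [[1, 0, 1]]
print(f"Estimated purchase intent: {model.predict_proba(prospect)[0][1]:.0%}")
```

The point is not the model itself but the workflow: the marketer never writes rules like “A6 researchers buy 5 Series”; the pattern is learned from examples and applied to new prospects automatically.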

Say I’m looking to buy a car and have a friend who recently bought a 5 Series. I’ve read about one of its new features: a 3D view of the car that I can see from my phone. When I search for “BMW 5 Series” on my iPhone, I’ll see a list of dealerships within a 10-mile radius of my regular commute. I call the dealership to ask about their inventory, and they know I’m ready to buy. I’m automatically matched with the sales rep who sold the same car to my friend, knows the specs I’m interested in, and can talk to me about 3D view.

I see massive opportunity to use predictive capabilities to link online and offline interactions — mobile ads, email campaigns, phone conversations, and in-person experiences. It’s becoming a reality as Google, Facebook, Apple, and Amazon continue investing in voice assistants and natural language processing technologies. Amazon is reportedly updating Alexa to be more emotionally intelligent. It’s not a huge leap to transition from making voice commands in my living room to calling a business and making a purchase directly through my Echo. A conversation is the most natural form of interaction, and the most conducive to forming relationships.

I think voice will be central to how marketers balance machine learning capabilities with the need to create human experiences. Even if machines can surface information and recommendations at exactly the right time, people still want human conversations, especially when it comes to buying complex or expensive products. I’m fine with Alexa ordering me a pizza, but not a car.

As I see it, the role of machines is to draw correlations between consumers’ behaviors and their ultimate intent. The role of the marketer is to figure out what can be automated (e.g., triggering an email after a purchase is made) and what can be augmented (e.g., predicting what products will most intrigue a customer) by using software. The next wave, Marketing 4.0, will take this a step further by meeting consumers’ expressed and unexpressed needs.

We’re moving toward a more predictive world in which machine learning powers the majority of interactions between consumers and brands. I don’t see this being at odds with human connection or authentic experiences. Marketing will be ambient and truly data-driven. It will catch up with consumer expectations and with the potential of technology to change how marketing is done.
