Apple told some developers that it will delay the enforcement of an anti-tracking feature that’s being implemented in iOS 14, reports The Information.
In iOS 14, Apple is requiring apps to seek customer consent before the IDFA (Identifier for Advertisers) can be used to track user behavior and preferences across apps and websites for ad targeting purposes.
Major app developers and ad networks like Facebook have spoken out against the feature, with Facebook warning advertisers on its platform that the new feature could cause a more than 50 percent drop in Audience Network publisher revenue due to the loss of personalization from ads within apps.
Facebook and other advertisers expect that customers will not want to share their IDFAs for ad targeting purposes and will therefore decline consent in the tracking-permission prompts that Apple has implemented in iOS 14.
Mobile developers that spoke to The Information said that they’ve had little time to prepare for Apple’s change, which was announced in June alongside iOS 14. Apple has also not provided a way for them to target ads without using the IDFA.
If Apple does end up delaying the anti-tracking features in iOS 14, customers who upgrade to iOS 14 will not see the prompts to decline sharing their device IDFA with third-party apps.
According to The Information, if Apple does decide to delay, the anti-tracking features could be held until next year.
Eric Seufert, an ads industry analyst, said it “simply wasn’t possible for developers to adapt their advertising infrastructure” to Apple’s proposed IDFA change in time for the public release of iOS 14, which Apple usually makes available in September. He called delaying enforcement of the new IDFA prompt “the right thing for Apple to do, even if those privacy restrictions are well intentioned and ultimately best for consumers.”
Apple’s App Store team has apparently been asking gaming firms for details on how the change might impact their businesses, as these kinds of targeted ads are important to free-to-play games, and their responses may determine Apple’s plan to implement or delay the feature.
Update 10:02 a.m.: In a statement to TechCrunch, Apple confirms that it is pushing back the change to “early next year.”
We believe technology should protect users’ fundamental right to privacy, and that means giving users tools to understand which apps and websites may be sharing their data with other companies for advertising or advertising measurement purposes, as well as the tools to revoke permission for this tracking. When enabled, a system prompt will give users the ability to allow or reject that tracking on an app-by-app basis. We want to give developers the time they need to make the necessary changes, and as a result, the requirement to use this tracking permission will go into effect early next year.
I pretty frequently get requests for help from someone who has been impersonated—or whose child has been impersonated—via email. Even when you know how to “view headers” or “view source” in your email client, the spew of diagnostic wharrgarbl can be pretty overwhelming if you don’t know what you’re looking at. Today, we’re going to step through a real-world set of (anonymized) email headers and describe the process of figuring out what’s what.
Before we get started with the actual headers, though, we’re going to take a quick detour through an overview of what the overall path of an email message looks like in the first place. (More experienced sysadmin types who already know what stuff like “MTA” and “SPF” stand for can skip a bit ahead to the fun part!)
From MUA to MTA, and back to MUA again
The basic components involved in sending and receiving email are the Mail User Agent and Mail Transfer Agent. In the briefest possible terms, an MUA is the program you use to read and send mail from your own personal computer (like Thunderbird, or Mail.app, or even a webmail interface like Gmail or Outlook), and MTAs are programs that accept messages from senders and route them along to their final recipients.
Traditionally, mail was sent to a mail server using the Simple Mail Transfer Protocol (SMTP) and downloaded from the server using the Post Office Protocol (abbreviated as POP3, since version 3 is the most commonly used version of the protocol). A traditional Mail User Agent—such as Mozilla Thunderbird—would need to know both protocols; it would send all of its user’s messages to the user’s mail server using SMTP, and it would download messages intended for the user from the user’s mail server using POP3.
As time went by, things got a bit more complex. The Internet Message Access Protocol (IMAP) largely superseded POP3, since it allowed the user to leave the actual email on the server. This meant that you could read your mail from multiple machines (perhaps a desktop PC and a laptop) and have all of the same messages, organized the same way, everywhere you might check them.
Finally, as time went by, webmail became more and more popular. If you use a website as your MUA, you don’t need to know any pesky SMTP or IMAP server settings; you just need your email address and password, and you’re ready to read.
Ultimately, any message from one human user to another follows the path of MUA ⟶ MTA(s) ⟶ MUA. The analysis of email headers involves tracing that flow and looking for any funny business.
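To make that MUA ⟶ MTA(s) ⟶ MUA path concrete, here is a minimal sketch using Python’s standard `email` module. The two-hop raw message below is entirely made up for illustration; the point is that each MTA prepends a `Received:` header as the message passes through, so reading them bottom-up retraces the message’s route:

```python
# Sketch: list the Received hops recorded in a raw email.
# The message below is a hypothetical example for illustration only.
from email import message_from_string

raw = """\
Received: from mx.example.net (mx.example.net [203.0.113.7])
\tby mail.recipient.example (Postfix) with ESMTPS id ABC123;
\tFri, 21 Aug 2020 10:00:02 -0400
Received: from sender-laptop (dhcp-42.sender.example [198.51.100.42])
\tby mx.example.net (Postfix) with ESMTPSA id XYZ789;
\tFri, 21 Aug 2020 10:00:01 -0400
From: alice@sender.example
To: bob@recipient.example
Subject: hello

Hi Bob!
"""

msg = message_from_string(raw)

# Each relaying server PREPENDS its Received header, so the first one
# in the list is the hop closest to the recipient; read the list in
# reverse to follow the message from sender to recipient.
for i, hop in enumerate(msg.get_all("Received"), 1):
    first_line = hop.split(";")[0].replace("\n", " ").replace("\t", " ")
    print(f"hop {i}: {first_line.strip()}")
```

When analyzing a suspicious message, this is the first thing to reconstruct: anything below the earliest trustworthy `Received:` line could have been forged by the sender.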
TLS in-flight encryption
The original SMTP protocol had absolutely no thought toward security—any server was expected to accept any message, from any sender, and pass the message along to any other server it thought might know how to get to the recipient in the To: field. That was fine and dandy in email’s earliest days of trusted machines and people, but it rapidly turned into a nightmare as the Internet scaled exponentially and became more commercially valuable.
It’s still possible to send email with absolutely no thought toward encryption or authentication, but such messages will very likely get rejected along the way by anti-spam defenses. Modern email typically is encrypted in-flight, and signed and authenticated at-rest. In-flight encryption is accomplished by TLS, which helps keep the content of a message from being captured or altered in-flight from one server to another. That’s great, so far as it goes, but in-flight TLS is only applied when mail is being relayed from one MTA to another MTA along the delivery path.
If an email travels from the sender through three MTAs before reaching its recipient, any server along the way can alter the content of the message—TLS encrypts the transmission from point to point but does nothing to verify the authenticity of the content itself or the path through which it’s traveling.
SPF—the Sender Policy Framework
The owner of a domain can set a TXT record in its DNS that states what servers are allowed to send mail on behalf of that domain. For a very simple example, Ars Technica’s SPF record says that email from arstechnica.com should only come from the servers specified in Google’s SPF record. Any other source should be met with a SoftFail result; this effectively means “trust it less, but don’t necessarily yeet it into the sun based on this alone.”
SPF headers in an email can’t be completely trusted after they’re generated, because there is no encryption involved. SPF is really only useful to the servers themselves, in real time. If a server knows that it sits at the outer boundary of a network, it also knows that any message it receives should come from a server specified in the sender’s domain’s SPF record. This makes SPF a great tool for getting rid of spam quickly.
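As an illustration, here is a toy parser showing how a receiving server reads the qualifiers out of an SPF record. The record string is a hypothetical example modeled on the kind described above; real implementations also resolve `include:` mechanisms recursively and test the connecting server’s IP against each mechanism:

```python
# Sketch: interpret the mechanisms and qualifiers in an SPF TXT record.
# The record below is a hypothetical example; a real one is fetched
# from DNS as a TXT record on the sending domain.
QUALIFIERS = {"+": "Pass", "-": "Fail", "~": "SoftFail", "?": "Neutral"}

def parse_spf(record: str):
    terms = record.split()
    assert terms[0] == "v=spf1", "not an SPF record"
    policy = []
    for term in terms[1:]:
        qualifier = "+"                      # default qualifier is Pass
        if term[0] in QUALIFIERS:
            qualifier, term = term[0], term[1:]
        policy.append((term, QUALIFIERS[qualifier]))
    return policy

# "include" pulls in another domain's SPF record; "~all" says any
# other source gets a SoftFail ("trust it less").
print(parse_spf("v=spf1 include:_spf.google.com ~all"))
# → [('include:_spf.google.com', 'Pass'), ('all', 'SoftFail')]
```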
DKIM—DomainKeys Identified Mail
Similarly to SPF, DKIM is set in TXT records in a sending domain’s DNS. Unlike SPF, DKIM is an authentication technique that validates the content of the message itself.
The owner of the sending domain generates a public/private key pair and stores the public key in a TXT record on the domain’s DNS. Mail servers on the outer boundary of the domain’s infrastructure use the private DKIM key to generate a signature (properly, an encrypted hash) covering the message body along with a selected set of headers—the ones listed in the signature’s h= tag. Recipients can decrypt the DKIM signature using the public DKIM key retrieved from DNS, then make sure the hash matches the message body and signed headers as they received them.
If the decrypted DKIM signature is a matching hash, the message is likely to be legitimate and unaltered—at least as verified by a private key belonging only to the domain owner (the end user does not have or need this key). If the DKIM signature is invalid, you know that the message either did not originate from the purported sender’s domain or has been altered (even if only in one of its signed headers) by some other server in between. Or both!
This becomes extremely useful when trying to decide whether a set of headers is legitimate or spoofed—a matching DKIM signature means that the sender’s infrastructure vouches for the message body and every header covered by the signature. (And that’s all it means, too—DKIM is merely one tool in the mail server admin’s toolbox.)
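For a feel of the mechanics, here is a simplified sketch of just the body-hash half of DKIM verification. The example body and the canonicalization details are illustrative; the actual b= signature check—verifying the encrypted hash of the signed headers against the public key fetched from DNS—is omitted:

```python
# Sketch of the body-hash half of DKIM verification: hash the message
# body and compare it against the bh= tag in the DKIM-Signature header.
# (The b= signature additionally covers the selected headers and is
# verified with the domain's public key from DNS; omitted here.)
import base64
import hashlib

def body_hash(body: bytes) -> str:
    # Simplified body canonicalization: normalize line endings to CRLF
    # and strip trailing empty lines, ending with a single CRLF.
    lines = body.replace(b"\r\n", b"\n").split(b"\n")
    while lines and lines[-1] == b"":
        lines.pop()
    canonical = b"\r\n".join(lines) + b"\r\n"
    return base64.b64encode(hashlib.sha256(canonical).digest()).decode()

bh = body_hash(b"Hi Bob!\n")
print(bh)  # the recipient compares this value to the header's bh= tag

# A single altered byte changes the hash completely:
assert body_hash(b"Hi Bob.\n") != bh
```

This is why an intermediate server that rewrites even one signed byte breaks the signature: the recomputed hash no longer matches what the sender’s infrastructure signed.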
DMARC—Domain-based Message Authentication, Reporting, and Conformance
DMARC extends SPF and DKIM. It’s not particularly exciting from the perspective of someone trying to trace a possibly fraudulent email; it boils down to a simple set of instructions for mail servers about how to handle SPF and DKIM records. DMARC can be used to request that a mail server pass, quarantine, or reject a message based on the outcome of SPF and DKIM verification, but it does not add any additional checks on its own.
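Since a DMARC record is just a short list of tag=value pairs published in DNS at `_dmarc.<domain>`, a toy reader is easy to sketch. The record below is a hypothetical example:

```python
# Sketch: read the policy tags out of a DMARC TXT record, which is
# published at _dmarc.<domain>. The record below is a made-up example.
def parse_dmarc(record: str) -> dict:
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if part.strip()
    )
    assert tags.get("v") == "DMARC1", "not a DMARC record"
    return tags

policy = parse_dmarc("v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com")

# p= tells receiving servers what to do when SPF and DKIM checks fail:
# none (deliver anyway), quarantine, or reject. rua= is where aggregate
# reports should be mailed.
print(policy["p"])  # → quarantine
```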
While travel blogging is a relatively young phenomenon, it has already evolved into a mature and sophisticated business model, with participants on both sides working hard to protect and promote their brands.
Those on the industry side say there’s tangible commercial benefit, provided influencers are carefully vetted.
“If people are actively liking and commenting on influencers’ posts, it shows they’re getting inspired by the destination,” Keiko Mastura, PR specialist at the Japan National Tourism Organization, tells CNN Travel.
“We monitor comments and note when users tag other accounts or comment about the destination, suggesting they’re adding it to their virtual travel bucket lists. Someone is influential if they have above a 3.5% engagement rate.”
For some tourism outlets, bloggers offer a way to promote products that might be overlooked by more conventional channels. Even those with just 40,000 followers can make a difference.
Kimron Corion, communications manager of Grenada’s Tourism Authority, says his organization has “had a lot of success engaging with micro-influencers who exposed some of our more niche offerings effectively.”
Such engagement doesn’t come cheap, though.
That means extra pressure in finding the right influencer to convey the relevant message — particularly when the aim is to deliver real-time social media exposure.
„We analyze each profile to make sure they’re an appropriate fit,“ says Florencia Grossi, director of international promotion for Visit Argentina. „We look for content with dynamic and interesting stories that invites followers to live the experience.“
One challenge is weeding out genuine influencers from the fake, a job that’s typically done by manually scrutinizing audience feedback for responses that betray automated followers. Bogus bloggers are another reason the market is becoming increasingly wary.
No one questions the good intent behind the EU’s General Data Protection Regulation (GDPR) legislation, or the need for companies to be more careful with the proprietary information they have about clients, patients, and other individuals they interact with regularly. While the provisions within the GDPR do help, they have also created new opportunities for hackers and identity thieves to exploit that data.
There’s no doubt that seeking to be fully GDPR compliant is more than just a good idea. Along the way, just make sure your organization doesn’t fall victim to one of the various scams that are surfacing. Let’s quickly review GDPR and then dive into the dirty tricks hackers have been playing.
Understanding the Basics of GDPR
In 2018, the GDPR established a set of guidelines for managing the collection and storage of consumer and proprietary data. Much of it pertains to personal information provided by individuals to an entity.
That entity may be a banking institution, insurance company, investing service, or even a health care facility. The primary goal is to ensure adequate protections are in place so that an ill-intentioned third party can’t exploit the personal information of those organizations’ employees, clients, and patients.
The GDPR addresses key areas of data security:
Explicit consent to collect and maintain personal data
Notification in the event of a data breach
Dedicated data security personnel within the organization
Data encryption that protects personal information in the event of a breach
Access to personal information for review of accuracy (integrity), and to set limitations on the intended use
While there has been pushback about some of the provisions within the GDPR (especially the need for additional data security personnel outside of the usual IT team), many organizations have been eager to adopt the measures. After all, being GDPR compliant can decrease the risk of a breach and would prove helpful if lawsuits resulted after a breach.
GDPR and Appropriate Security
There is an ongoing discussion about what represents adequate and appropriate security in terms of GDPR compliance. To some degree, the exact approach to security will vary, based on the type of organization involved and the nature of the data that is collected and maintained.
Even so, there is some overlap that would apply in every case. Compliance involves identifying and reinforcing every point in the network where some type of intrusion could possibly take place. Using Artificial Intelligence technology to reinforce points of vulnerability while also monitoring them for possible cyberattacks is another element. Even having an escalation plan in place to handle a major data breach within a short period of time is something any organization could enact.
One point that is sometimes lost in the entire discussion about GDPR security is that the guidelines set minimum standards. Entities are free to go above and beyond in terms of protecting proprietary data like customer lists. Viewing compliance as the starting point and continuing to refine network security will serve a company well in the long run.
So What Have Hackers Been Doing Since the Launch of GDPR?
There’s no doubt that hackers and others with less than honorable intentions have been doing their best to work around the GDPR guidelines even as they use them to their advantage. Some news reports claim that GDPR has made it easier for hackers to gain access to data. So what exactly have these ethically challenged individuals concocted?
Here are some examples:
Introducing Reverse Ransomware
As far as we know, it’s not really called reverse ransomware, but that seems to be a pretty good way to describe this evil little scheme. As a review, a ransomware attack is when a hacker gets into your system and encrypts data so you can’t see or use it. Only with the payment of a ransom, typically in untraceable Bitcoin or other cryptocurrencies, will the hacker make your data usable again.
The sad ending to the ransomware saga is that more often than not, the data is never released even if the ransom is paid.
But GDPR has provided the inspiration for the bad guys to put a sneaky spin on the data drama. In this case, they penetrate the network by whatever means available to collect the customer lists, etc., which the EU has worked so hard to protect with the new regulations.
The threat with this variation, however, is that the data will be released publicly, which would put the organization in immediate violation of GDPR and make it liable for what could be a hefty fine — one that is substantially larger than the ransom the criminals are demanding.
Of course, the hacker promises not to release the data if the hostage company pays a ransom and might even further promise to destroy the data afterward. If you believe they’ll actually do that, I’d like to introduce you to the Easter Bunny and Tooth Fairy.
The attacker has already demonstrated a strong amoral streak. What’s to stop them from demanding another payment a month down the road? If you guessed nothing, you’re right. But wait, there’s more.
Doing a Lot of Phishing
Many organizations have seen a continual flow of unsolicited emails offering to help them become GDPR compliant. These range from offering free consultations that can be conducted remotely to conducting online training sessions to explain GDPR and suggest ways to increase security.
Typically, this type of phishing scheme offers a way to remit payments for services in advance, with the understanding that the client pays a portion now and the rest later.
Unsurprisingly, anyone who clicks on the link may lose more than whatever payment is rendered. Wherever the individual lands, the site is likely to be infected with spyware or worse. And if the email is forwarded throughout an organization or outside of it? The infection spreads.
I believe we need to be savvier with emails. That means training employees to never click on links in unsolicited emails, and to report suspicious emails to the security team at once.
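One concrete check such training often emphasizes is whether a link’s visible text matches where it actually points. Here is an illustrative Python sketch of that heuristic; the HTML snippet and domain names are made up for the example:

```python
# Sketch: flag deceptive links in an HTML email, i.e. links whose
# visible text names one domain while the href points somewhere else.
# The HTML snippet and domains below are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.suspicious = []   # (visible text, real destination)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")

    def handle_data(self, data):
        # Only examine text inside a link that looks like a domain name.
        if self.current_href and "." in data:
            shown = data.strip().lower()
            real = urlparse(self.current_href).hostname or ""
            if shown and not real.endswith(shown.removeprefix("www.")):
                self.suspicious.append((shown, real))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

auditor = LinkAuditor()
auditor.feed('<p>Verify your account at '
             '<a href="http://gdpr-compliance.example.net/login">'
             'mybank.com</a></p>')
print(auditor.suspicious)
# → [('mybank.com', 'gdpr-compliance.example.net')]
```

A check like this is only a heuristic, of course—but it captures the core lesson for employees: the text of a link proves nothing about its destination.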
What Can You Do?
As you can see, GDPR has provided a variety of crime opportunities for an enterprising hacker. These are just two examples of how they use GDPR for profit at the expense of hardworking business owners. The best first step when confronted with any of these types of threats is to not act on it. Instead, forward it to an agency that can properly evaluate the communication.
At the risk of sounding like Captain Obvious, have you done everything possible to fortify your network against advanced threats? Here are the basic preventive steps:
Web security software: The first line of defense is a firewall (updated regularly of course) that prowls the perimeter, looking to prevent any outside threat’s attempt to penetrate. In addition, be sure to implement network security software that detects malicious network activity resulting from a threat that manages to bypass your perimeter controls. It used to be that you could survive with a haphazard philosophy towards security, but those days are long gone. Get good security software and put it to work.
Encrypt that data: While the firewall and security software protect a network from outside penetration attempts, your data doesn’t always stay at home safe and sound. Any time a remote worker connects back to your network or an employee on premises ventures out to the open Internet, data is at risk. That’s why a virtual private network (VPN) should be a mandatory preventive security measure.
It’s a simple but strong idea. Using military-grade protocols, a properly configured VPN service encrypts the flow of data between a network device and the Internet or between a remote device and the company network. The big idea here is that even if a hacker manages to siphon off data, they will be greeted with an indecipherable mess that would take the world’s strongest computers working in unison a few billion years to crack. They’ll probably move on to an easier target.
And while a VPN should be a frontline tool to combat hackers, there’s something else that might even be more important.
Education and Training: Through ignorance or inattention, employees can be the biggest threat to cybersecurity. It’s not enough to simply sit them down when you hire them and warn of dire consequences if they let malware in the building. Owners need a thorough, ongoing education program related to online security that emphasizes its importance as being only slightly below breathing.
The Bottom Line
The GDPR does not have to be a stumbling block for you or an opportunity for a hacker. Stay proactive with your security measures and keep your antenna tuned for signs of trouble.
Germany’s Federal Cartel Office, or Bundeskartellamt, on Thursday banned Facebook from combining user data from its various platforms such as WhatsApp and Instagram without explicit user permission.
The decision, which comes as the result of a nearly three-year antitrust investigation into Facebook’s data gathering practices, also bans the social media company from gleaning user data from third-party sites unless they voluntarily consent.
“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Bundeskartellamt President Andreas Mundt said in a release. “In [the] future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.”
Mundt noted that combining user data from various sources “substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power.”
Experts agreed with the decision. “It is high time to regulate the internet giants effectively!” said Marc Al-Hames, general manager of German data protection technologies developer Cliqz GmbH. “Unregulated data capitalism inevitably creates unfair conditions.”
Al-Hames noted that apps like WhatsApp have become “indispensable for many young people,” who feel compelled to join if they want to be part of the social scene. “Social media create social pressure,” he said. “And Facebook exploits this mercilessly: Give me your data or you’re an outsider.”
He called the practice an abuse of dominant market position. “But that’s not all: Facebook monitors our activities regardless of whether we are a member of one of its networks or not. Even those who consciously renounce the social networks for the sake of privacy will still be spied out,” he said, adding that Cliqz and Ghostery stats show that “every fourth of our website visits are monitored by Facebook’s data collection technologies, so-called trackers.”
The Bundeskartellamt’s decision will prevent Facebook from collecting and using data without restriction. “Voluntary consent means that the use of Facebook’s services must [now] be subject to the users’ consent to their data being collected and combined in this way,” said Mundt. “If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”
The ban drew support and calls for it to be expanded to other companies.
“This latest move by Germany’s competition regulator is welcome,” said Morten Brøgger, CEO of secure collaboration platform Wire. “Compromising user privacy for profit is a risk no exec should be willing to take.”
Brøgger contends that Facebook has not fully understood digital privacy’s importance. “From emails suggesting cashing in on user data for money, to the infamous Cambridge Analytica scandal, the company is taking steps back in a world which is increasingly moving towards the protection of everyone’s data,” he said.
“The lesson here is that you cannot simply trust firms that rely on the exchange of data as its main offering,” Brøgger added, “and firms using Facebook-owned applications should have a rethink about the platforms they use to do business.”
Al-Hames said regulators shouldn’t stop with Facebook, which he called the number-two offender. “By far the most important data monopolist is Alphabet. With Google search, the Android operating system, the Play Store app sales platform and the Chrome browser, the internet giant collects data on virtually everyone in the Western world,” Al-Hames said. “And even those who want to get free by using alternative services stay trapped in Alphabet’s clutches: With a tracker reach of nearly 80 percent of all page loads Alphabet probably knows more about them than their closest friends or relatives. When it comes to our data, the top priority of the market regulators shouldn’t be Facebook, it should be Alphabet!”
The millions of dots on the map trace highways, side streets and bike trails — each one following the path of an anonymous cellphone user.
One path tracks someone from a home outside Newark to a nearby Planned Parenthood, remaining there for more than an hour. Another represents a person who travels with the mayor of New York during the day and returns to Long Island at night.
Yet another leaves a house in upstate New York at 7 a.m. and travels to a middle school 14 miles away, staying until late afternoon each school day. Only one person makes that trip: Lisa Magrin, a 46-year-old math teacher. Her smartphone goes with her.
An app on the device gathered her location information, which was then sold without her knowledge. It recorded her whereabouts as often as every two seconds, according to a database of more than a million phones in the New York area that was reviewed by The New York Times. While Ms. Magrin’s identity was not disclosed in those records, The Times was able to easily connect her to that dot.
The app tracked her as she went to a Weight Watchers meeting and to her dermatologist’s office for a minor procedure. It followed her hiking with her dog and staying at her ex-boyfriend’s home, information she found disturbing.
“It’s the thought of people finding out those intimate details that you don’t want people to know,” said Ms. Magrin, who allowed The Times to review her location data.
Like many consumers, Ms. Magrin knew that apps could track people’s movements. But as smartphones have become ubiquitous and technology more accurate, an industry of snooping on people’s daily habits has spread and grown more intrusive.
At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.
These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.
Businesses say their interest is in the patterns, not the identities, that the data reveals about consumers. They note that the information apps collect is tied not to someone’s name or phone number but to a unique ID. But those with access to the raw data — including employees or clients — could still identify a person without consent. They could follow someone they knew, by pinpointing a phone that regularly spent time at that person’s home address. Or, working in reverse, they could attach a name to an anonymous dot, by seeing where the device spent nights and using public records to figure out who lived there.
“Location information can reveal some of the most intimate details of a person’s life — whether you’ve visited a psychiatrist, whether you went to an A.A. meeting, who you might date,” said Senator Ron Wyden, Democrat of Oregon, who has proposed bills to limit the collection and sale of such data, which are largely unregulated in the United States.
“It’s not right to have consumers kept in the dark about how their data is sold and shared and then leave them unable to do anything about it,” he added.
Mobile Surveillance Devices
After Elise Lee, a nurse in Manhattan, saw that her device had been tracked to the main operating room at the hospital where she works, she expressed concern about her privacy and that of her patients.
“It’s very scary,” said Ms. Lee, who allowed The Times to examine her location history in the data set it reviewed. “It feels like someone is following me, personally.”
The mobile location industry began as a way to customize apps and target ads for nearby businesses, but it has morphed into a data collection and analysis machine.
Retailers look to tracking companies to tell them about their own customers and their competitors’. For a web seminar last year, Elina Greenstein, an executive at the location company GroundTruth, mapped out the path of a hypothetical consumer from home to work to show potential clients how tracking could reveal a person’s preferences. For example, someone may search online for healthy recipes, but GroundTruth can see that the person often eats at fast-food restaurants.
“We look to understand who a person is, based on where they’ve been and where they’re going, in order to influence what they’re going to do next,” Ms. Greenstein said.
Financial firms can use the information to make investment decisions before a company reports earnings — seeing, for example, if more people are working on a factory floor, or going to a retailer’s stores.
Health care facilities are among the more enticing but troubling areas for tracking, as Ms. Lee’s reaction demonstrated. Tell All Digital, a Long Island advertising firm that is a client of a location company, says it runs ad campaigns for personal injury lawyers targeting people anonymously in emergency rooms.
“The book ‘1984,’ we’re kind of living it in a lot of ways,” said Bill Kakis, a managing partner at Tell All.
Jails, schools, a military base and a nuclear power plant — even crime scenes — appeared in the data set The Times reviewed. One person, perhaps a detective, arrived at the site of a late-night homicide in Manhattan, then spent time at a nearby hospital, returning repeatedly to the local police station.
Two location firms, Fysical and SafeGraph, mapped people attending the 2017 presidential inauguration. On Fysical’s map, a bright red box near the Capitol steps indicated the general location of President Trump and those around him, cellphones pinging away. Fysical’s chief executive said in an email that the data it used was anonymous. SafeGraph did not respond to requests for comment.
More than 1,000 popular apps contain location-sharing code from such companies, according to 2018 data from MightySignal, a mobile analysis firm. Google’s Android system was found to have about 1,200 apps with such code, compared with about 200 on Apple’s iOS.
The most prolific company was Reveal Mobile, based in North Carolina, which had location-gathering code in more than 500 apps, including many that provide local news. A Reveal spokesman said that the popularity of its code showed that it helped app developers make ad money and consumers get free services.
To evaluate location-sharing practices, The Times tested 20 apps, most of which had been flagged by researchers and industry insiders as potentially sharing the data. Together, 17 of the apps sent exact latitude and longitude to about 70 businesses. Precise location data from one app, WeatherBug on iOS, was received by 40 companies. When contacted by The Times, some of the companies that received that data described it as “unsolicited” or “inappropriate.”
WeatherBug, owned by GroundTruth, asks users’ permission to collect their location and tells them the information will be used to personalize ads. GroundTruth said that it typically sent the data to ad companies it worked with, but that if they didn’t want the information they could ask to stop receiving it.
The Times also identified more than 25 other companies that have said in marketing materials or interviews that they sell location data or services, including targeted advertising.
The spread of this information raises questions about how securely it is handled and whether it is vulnerable to hacking, said Serge Egelman, a computer security and privacy researcher affiliated with the University of California, Berkeley.
“There are really no consequences” for companies that don’t protect the data, he said, “other than bad press that gets forgotten about.”
A Question of Awareness
Companies that use location data say that people agree to share their information in exchange for customized services, rewards and discounts. Ms. Magrin, the teacher, noted that she liked that tracking technology let her record her jogging routes.
Brian Wong, chief executive of Kiip, a mobile ad firm that has also sold anonymous data from some of the apps it works with, says users give apps permission to use and share their data. “You are receiving these services for free because advertisers are helping monetize and pay for it,” he said, adding, “You would have to be pretty oblivious if you are not aware that this is going on.”
But Ms. Lee, the nurse, had a different view. “I guess that’s what they have to tell themselves,” she said of the companies. “But come on.”
Ms. Lee had given apps on her iPhone access to her location only for certain purposes — helping her find parking spaces, sending her weather alerts — and only if they did not indicate that the information would be used for anything else, she said. Ms. Magrin had allowed about a dozen apps on her Android phone access to her whereabouts for services like traffic notifications.
But it is easy to share information without realizing it. Of the 17 apps that The Times saw sending precise location data, just three on iOS and one on Android told users in a prompt during the permission process that the information could be used for advertising. Only one app, GasBuddy, which identifies nearby gas stations, indicated that data could also be shared to “analyze industry trends.”
More typical was theScore, a sports app: When prompting users to grant access to their location, it said the data would help “recommend local teams and players that are relevant to you.” The app passed precise coordinates to 16 advertising and location companies.
Even industry insiders acknowledge that many people either don’t read those policies or may not fully understand their opaque language. Policies for apps that funnel location information to help investment firms, for instance, have said the data is used for market analysis, or simply shared for business purposes.
“Most people don’t know what’s going on,” said Emmett Kilduff, the chief executive of Eagle Alpha, which sells data to financial firms and hedge funds. Mr. Kilduff said responsibility for complying with data-gathering regulations fell to the companies that collected it from people.
Many location companies say they voluntarily take steps to protect users’ privacy, but policies vary widely.
For example, Sense360, which focuses on the restaurant industry, says it scrambles data within a 1,000-foot square around the device’s approximate home location. Another company, Factual, says that it collects data from consumers at home, but that its database doesn’t contain their addresses.
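Sense360 hasn’t published its exact method, but grid-based scrambling of home-area coordinates can be sketched roughly like this. The snapping approach, the conversion constant, and the function name are assumptions for illustration; only the 1,000-foot figure comes from the company’s statement.

```python
# Hypothetical sketch of grid-based location scrambling, loosely modeled on
# Sense360's stated practice of blurring data near a user's home.
# The snapping approach and constants are assumptions for illustration.

GRID_FEET = 1000
FEET_PER_DEGREE = 364_000  # roughly 69 miles per degree of latitude

def scramble(lat: float, lon: float) -> tuple[float, float]:
    """Snap a coordinate to the center of a ~1,000-foot grid cell."""
    step = GRID_FEET / FEET_PER_DEGREE  # grid size in degrees (approximate)
    snapped_lat = (lat // step) * step + step / 2
    snapped_lon = (lon // step) * step + step / 2
    return round(snapped_lat, 6), round(snapped_lon, 6)

# Two pings from slightly different spots inside the same cell collapse
# to a single point, hiding the exact address.
a = scramble(40.712800, -74.006000)
b = scramble(40.712900, -74.005900)
print(a, b)
```

In this sketch, nearby pings land on the same grid-cell center, so an observer can no longer distinguish one house from its neighbors within the cell.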
Some companies say they delete the location data after using it to serve ads, some use it for ads and pass it along to data aggregation companies, and others keep the information for years.
Several people in the location business said that it would be relatively simple to figure out individual identities in this kind of data, but that they didn’t do it. Others suggested it would require so much effort that hackers wouldn’t bother.
It “would take an enormous amount of resources,” said Bill Daddi, a spokesman for Cuebiq, which analyzes anonymous location data to help retailers and others, and raised more than $27 million this year from investors including Goldman Sachs and Nasdaq Ventures. Nevertheless, Cuebiq encrypts its information, logs employee queries and sells aggregated analysis, he said.
There is no federal law limiting the collection or use of such data. Still, apps that ask for access to users’ locations, prompting them for permission while leaving out important details about how the data will be used, may run afoul of federal rules on deceptive business practices, said Maneesha Mithal, a privacy official at the Federal Trade Commission.
Following the Money
Apps form the backbone of this new location data economy.
The app developers can make money by directly selling their data, or by sharing it for location-based ads, which command a premium. Location data companies pay half a cent to two cents per user per month, according to offer letters to app makers reviewed by The Times.
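At those reported rates, the money adds up quickly for popular apps. A back-of-the-envelope calculation (the user count here is invented for illustration):

```python
# Back-of-the-envelope revenue from selling location data, using the
# half-cent-to-two-cent monthly rates reported by The Times.
# The one-million-user figure is invented for illustration.

users = 1_000_000
low, high = 0.005, 0.02  # dollars per user per month

print(f"${users * low:,.0f} to ${users * high:,.0f} per month")
```

A free app with a million location-sharing users could clear several thousand to tens of thousands of dollars a month from data sales alone, with essentially no extra work by the developer.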
Targeted advertising is by far the most common use of the information.
Google and Facebook, which dominate the mobile ad market, also lead in location-based advertising. Both companies collect the data from their own apps. They say they don’t sell it but keep it for themselves to personalize their services, sell targeted ads across the internet and track whether the ads lead to sales at brick-and-mortar stores. Google, which also receives precise location information from apps that use its ad services, said it modified that data to make it less exact.
Smaller companies compete for the rest of the market, including by selling data and analysis to financial institutions. This segment of the industry is small but growing, expected to reach about $250 million a year by 2020, according to the market research firm Opimas.
Apple and Google have a financial interest in keeping developers happy, but both have taken steps to limit location data collection. In the most recent version of Android, apps that are not in use can collect locations “a few times an hour,” instead of continuously.
Apple has been stricter, for example requiring apps to justify collecting location details in pop-up messages. But Apple’s instructions for writing these pop-ups do not mention advertising or data sale, only features like getting “estimated travel times.”
A spokesman said the company mandates that developers use the data only to provide a service directly relevant to the app, or to serve advertising that met Apple’s guidelines.
Apple recently shelved plans that industry insiders say would have significantly curtailed location collection. Last year, the company said an upcoming version of iOS would show a blue bar onscreen whenever an app not in use was gaining access to location data.
The discussion served as a “warning shot” to people in the location industry, David Shim, chief executive of the location company Placed, said at an industry event last year.
After examining maps showing the locations extracted by their apps, Ms. Lee, the nurse, and Ms. Magrin, the teacher, immediately limited what data those apps could get. Ms. Lee said she told the other operating-room nurses to do the same.
“I went through all their phones and just told them: ‘You have to turn this off. You have to delete this,’” Ms. Lee said. “Nobody knew.”
In the data set reviewed by The Times, phone locations are recorded in sensitive areas including the Indian Point nuclear plant near New York City. (Graphic by Michael H. Keller; satellite imagery by Mapbox and DigitalGlobe.)
It’s not just Facebook: the Android and iOS app stores have incentivized an app economy in which free apps make money by selling your personal data and location history to advertisers.
Monday morning, the New York Times published a horrifying investigation in which the publication reviewed a huge, “anonymized” dataset of smartphone location data from a third-party vendor, de-anonymized it, and tracked ordinary people through their day-to-day lives—including sensitive stops at places like Planned Parenthood, their homes, and their offices.
The article lays bare what the privacy-conscious have suspected for years: The apps on your smartphone are tracking you, and for all the talk about “anonymization” and claims that the data is collected only in aggregate, our habits are so specific—and often unique—that anonymized identifiers can often be reverse engineered and used to track individual people.
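The Times doesn’t publish its exact re-identification method, but the core idea — that a device’s most frequent nighttime and daytime locations usually point to a single person — can be sketched like this. All data, thresholds, and names here are invented for illustration.

```python
# Hypothetical sketch of re-identifying an "anonymized" device from its
# most-visited locations. All data here is invented for illustration; in
# practice the (home, work) pair alone is often unique to one person.

from collections import Counter

# (device_id, rounded_lat, rounded_lon, hour_of_day)
pings = [
    ("dev42", 40.7722, -74.0095, 2),   # night ping -> likely home
    ("dev42", 40.7722, -74.0095, 3),
    ("dev42", 40.7580, -73.9855, 11),  # daytime ping -> likely workplace
    ("dev42", 40.7580, -73.9855, 14),
]

def home_and_work(rows):
    """Guess home (most common night spot) and work (most common day spot)."""
    night = Counter((lat, lon) for _, lat, lon, h in rows if h < 6 or h > 22)
    day = Counter((lat, lon) for _, lat, lon, h in rows if 9 <= h <= 17)
    return night.most_common(1)[0][0], day.most_common(1)[0][0]

home, work = home_and_work(pings)
# Joining the (home, work) pair against public records -- property rolls,
# employer addresses, social profiles -- is what turns "anonymous" into a name.
print(home, work)
```

No identifier needs to leak for this to work; the movement pattern itself is the identifier.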
Along with the investigation, the New York Times published a guide to managing and restricting location data on specific apps. This is easier on iOS than it is on Android, and it’s something everyone should do periodically. But the main takeaway, I think, is not just that we need to be more scrupulous about our location data settings. It’s that we need to be much, much more restrictive about the apps that we install on our phones.
Everywhere we go, we are carrying a device that not only has a GPS chip designed to track our location, but an internet or LTE connection designed to transmit that information to third parties, many of whom have monetized that data. Rough location data can be gleaned by tracking the cell phone towers your phone connects to, and the best way to guarantee privacy would be to have a dumb phone, an iPod Touch, or no phone at all. But for most people, that’s not terribly practical, and so I think it’s worth taking a look at the types of apps that we have installed on our phone, and their value propositions—both to us, and to their developers.
The early design decisions of Apple, Google, and app developers continue to haunt us all more than a decade later. Broadly and historically speaking, we have been willing to spend hundreds of dollars on a smartphone, but balk at the idea of spending $.99 on an app. Our reluctance to pay any money up front for apps has come at an unknowable but massive cost to our privacy. Even a lowly flashlight or fart noise app is not free to make, and the overwhelming majority of “free” apps are not altruistic—they are designed to make money, which usually means by harvesting and reselling your data.
A good question to ask yourself when evaluating your apps is “why does this app exist?” If it exists because it costs money to buy, or because it’s the free app extension of a service that costs money, then it is more likely to be able to sustain itself without harvesting and selling your data. If it’s a free app that exists for the sole purpose of amassing a large amount of users, then chances are it has been monetized by selling data to advertisers.
The New York Times noted that much of the data used in its investigation came from free weather and sports scores apps that turned around and sold their users’ data; hundreds of free games, flashlight apps, and podcast apps ask for permissions they don’t actually need for the express purpose of monetizing your data.
Even apps that aren’t blatantly sketchy data grabs often function that way: Facebook and its suite of apps (Instagram, Messenger, etc.) collect loads of data about you, both from your behavior in the apps themselves and directly from your phone (Facebook went to great lengths to hide the fact that its Android app was collecting call log data). And Android itself is a smartphone ecosystem that also serves as yet another data collection apparatus for Google. Unless you feel particularly inclined to read privacy policies that are dozens of pages long for every app you download, who knows what information bespoke apps for news, podcasts, airlines, ticket buying, travel, and social media are collecting and selling.
This problem is getting worse, not better: Facebook made WhatsApp, an app that managed to be profitable with a $1 per year subscription fee, into a “free” service because it believed it could make more money with an advertising-based business model.
What this means is that the dominant business model on our smartphones is one that’s predicated on monetizing you, and only through paying obsessive attention to your app permissions and seeking paid alternatives can you hope to minimize these impacts on yourself. If this bothers you, your only options are to get rid of your smartphone altogether or to rethink what apps you want installed on your phone and act accordingly.
It might be time to get rid of all the free single-use apps that are essentially re-sized websites. Generally speaking, it is safer, privacy-wise, to access your data in a browser, even if it’s more inconvenient. On second thought, it may be time to delete all your apps and start over, using only apps that respect your privacy and have sustainable business models that don’t rely on monetizing your data. On iOS, this might mean using more of Apple’s first-party apps, even if they don’t work as well as free third-party versions.
It’s onto me, anyway. I am merely one anecdata point among billions, but I’m sure I’m not the only Facebook user who has found herself shying away from the very public, often performative, and even tiring habit of posting regular updates to Facebook and Instagram. Over the past year I’ve found myself thinking not about quitting social networks, but about redefining them. For me, that process has involved a lot more private messaging.
Facebook, it seems, has noticed. Last week, The New York Times reported that Facebook chief executive Mark Zuckerberg plans to unify Facebook Messenger, WhatsApp, and Instagram messaging on the backend of the services. This would make it possible for people relying on different flavors of Facebook apps to all gorge at the same messaging table. On the one hand, the move is truly Facebookian—just try to extricate yourself from Facebook, and it will try every which way to pull you back in. On the other hand, it makes sense for Facebook for a few reasons.
My personal relationship with Facebook is multi-faceted. I have a personal account and a journalist’s page. I also use Instagram and WhatsApp. But last year, I let my professional page languish. I stopped posting to my personal feed as frequently. Instead I turned to private messaging.
During a trip to Europe last fall, I shared everything I felt compelled to share with a small group of people on Apple Messages. The excursion to see one of the largest waves ever surfed by a human? I shared the photo in a private Slack message with coworkers, instead of posting on Facebook. Wedding photos no longer go up on Instagram. During the holidays, I happily embrace my role as halfway-decent photographer, but when I share the photos with friends and family, it’s only through Messages, WhatsApp, or private photo albums.
These tools have become my preferred method of communicating. It’s not some big revelation, or even anything that’s new; peer-to-peer messaging, or at least the guise of “private” messaging, is as old as the consumer internet itself. When our worlds expand in a way that feels unmanageable, our instinct is sometimes to shrink them until we’re comfortable again, for better or worse. Remember Path, the social network limited to just your closest circle? That didn’t work out, but the entire app was built upon the Dunbar theory that our species can’t handle more than 150 close friends. There just might have been something to that.
“I think a lot of people experience this,” says Margaret Morris, a clinical psychologist and author of Left to Our Own Devices. “When you post something in such a public way, the question is: What are the motivations? But when it’s in a private thread, it’s: Why am I sharing this? Oh, it’s because I think you’ll like this. I think we’ll connect over this. The altruistic motivations can be far more clear in private messaging.”
Of course, “altruism” in this case only applies to the friends exchanging messages and not the messaging service providers. Facebook’s efforts to unify its messaging platforms are at least partly rooted in a desire to monetize our activity, whether that’s by keeping us engaged in an outward-facing News Feed or within a little chat box. And there’s a major distinction between so-called private messages and what Morris calls “Privacy with a capital P.”
“There’s one kind of privacy, which is: what does my cousin know, or what does my co-worker know,” Morris says, “And then there’s the kind of privacy that’s about the data Facebook has.” Facebook’s plan is reportedly to offer end-to-end encryption on all of its messaging apps once the backend systems have been merged. As my WIRED colleague Lily Newman writes, cryptographers and privacy advocates already see obvious hurdles in making this work.
That’s why I often use Apple’s Messages and even iCloud photo sharing. There’s an explicit agreement that exists between the service provider and user: Buy our ridiculously expensive hardware, and we won’t sell your data. (While iCloud has been hacked before, Apple swears by the end-to-end encryption between iPhone users and says it doesn’t share Messages data with third-party apps). But just using Messages isn’t realistic, either. The platform is only functional between two iPhones. Not everyone can afford Apple products, and in other parts of the world, such as China or India, apps like WeChat and WhatsApp dominate private messaging. That means you’re going to end up using other apps if you plan to communicate outside of a bubble of iPhone lovers.
But beyond privacy with a capital P—which is, for many people, the most important consideration when it comes to social media—there’s the psychology of privacy when it comes to sharing updates about our personal lives and connecting with other humans. Social networks have made human connections infinitely more possible, and also turned the whole notion on its head.
Morris, for example, sees posting something publicly to a Facebook feed as a yearning for interconnectedness, while a private messaging thread is a quest for what she calls attunement, a way to strengthen a bond between two people. But, she notes, some people take a screenshot from a private message and then, having failed in their quest for attunement, publish an identity-stripped version of it to their feed. Guilty as charged. Social networking is no longer just a feed or an app or a chat box or SMS, but some amalgamation of it all.
Posting private messages publicly is not something I plan to make a habit of, but there is still the urge sometimes to share. I’m still on Twitter. I’ll likely still post to Facebook and Instagram from time to time. At some point I may be looking for a sense of community that exists beyond my own small private messaging groups, for a tantalizing blend of familiarity and anonymity in a Facebook group of like-minded hobbyists. For some people, larger social networking communities are lifelines as they struggle with health, with family, with job worries, with life.
But right now, “private” messages are the way to share my life with the people who matter most, an attempt to splinter off my social interactions into something more satisfying—especially when posting to Facebook has never seemed less appealing.
Apple on Wednesday warned investors that its revenue for the last three months of 2018 would not live up to previous estimates, or even come particularly close. The main culprit appears to be China, where the trade war and a broader economic slowdown contributed to plummeting iPhone sales. But CEO Tim Cook’s letter to investors pointed to a secondary trend as well, one that Apple customers, environmentalists, and even the company itself should view not as a liability but an asset: People are holding onto their iPhones longer.
That’s not just in China. Cook noted that iPhone upgrades were “not as strong as we thought they would be” in developed markets as well, citing “macroeconomic conditions,” a shift in how carriers price smartphones, a strong US dollar, and temporarily discounted battery replacements. He neglected to mention the simple fact that an iPhone can perform capably for years—and consumers are finally getting wise.
As recently as 2015, smartphone users on average upgraded their phone roughly every 24 months, says Cliff Maldonado, founder of BayStreet Research, which tracks the mobile industry. As of the fourth quarter of last year, that had jumped to at least 35 months. “You’re looking at people holding onto their devices an extra year,” Maldonado says. “It’s been considerable.”
A few factors contribute to the trend, chief among them the shift from buying phones on a two-year contract—heavily subsidized by the carriers—to installment plans in which the customer pays full freight. T-Mobile introduced the practice in the US in 2014, and by 2015 it had become the norm. The full effects, though, have only kicked in more recently. People still generally pay for their smartphone over two years; once they’re paid off, though, their monthly bill suddenly drops by, say, $25.
The shift has also caused a sharp drop-off in carrier incentives. They turn out not to be worth it. “They’re actually encouraging that dynamic of holding your smartphone longer. It’s in their best interest,” Maldonado says. “It actually costs them to get you into a new phone, to do those promotions, to run the transaction and put it on their books and finance it.”
Bottom line: If your service is reliable and your iPhone still works fine, why go through the hassle?
“There’s not as many subsidies as there used to be from a carrier point of view,” Cook told CNBC Wednesday. “And where that didn’t all happen yesterday, if you’ve been out of the market for two or three years and you come back, it looks like that to you.”
Meanwhile, older iPhones work better, for longer, thanks to Apple itself. When Apple vice president Craig Federighi introduced iOS 12 in June at Apple’s Worldwide Developers Conference, he emphasized how much it improved the performance of older devices. Among the numbers he cited: The 2014 iPhone 6 Plus opens apps 40 percent faster with iOS 12 than it had with iOS 11, and its keyboard appears up to 50 percent faster than before. And while Apple’s battery scandal of a year ago was a black mark for the company, it at least reminded Apple owners that they didn’t necessarily need a new iPhone. Eligible iPhone owners found that a $29 battery replacement—it normally costs $79—made their iPhone 6 feel something close to new.
“There definitely has been a major shift in customer perception, after all the controversy,” says Kyle Wiens, founder of online repair community iFixit. “What it really did more than anything else was remind you that the battery on your phone really can be replaced. Apple successfully brainwashing the public into thinking the battery was something they never needed to think about led people to prematurely buy these devices.”
Combine all of that with the fact that new model iPhones—and Android phones for that matter—have lacked a killer feature, much less one that would inspire someone to spend $1,000 or more if they didn’t absolutely have to. “Phones used to be toys, and shiny objects,” Maldonado says. “Now they’re utilities. You’ve got to have it, and the joy of getting a new one is pretty minor. Facebook and email looks the same; the camera’s still great.”
In the near term, these dynamics aren’t ideal for Apple; its stock dropped more than 7 percent in after-hours trading following Wednesday’s news. But it’s terrific news for consumers, who have apparently realized that a smartphone does not have a two-year expiration date. That saves money in the long run. And pulling the throttle back on iPhone sales may turn out to be equally welcome news for the planet.
According to Apple’s most recent sustainability report, the manufacture of each Apple device generates on average 90 pounds of carbon emissions. Wiens suggests that the creation of each iPhone requires hundreds of pounds of raw materials.
Manufacturing electronics is environmentally intense, Wiens says. “We can’t live in a world where we’re making 3 billion new smartphones a year. We don’t have the resources for it. We have to reduce how many overall devices we’re making. There are lots of ways to do it, but it gets down to demand, and how many we’re buying. That’s not what Apple wants, but it’s what the environment needs.”
Which raises a question: Why does Apple bother extending the lives of older iPhones? The altruistic answer comes from Lisa Jackson, who oversees the company’s environmental efforts.
“We also make sure to design and build durable products that last as long as possible,” Jackson said at Apple’s September hardware event. “Because they last longer, you can keep using them. And keeping using them is the best thing for the planet.”
Given a long enough horizon, Apple may see a financial benefit from less frequent upgrades as well. An iPhone that lasts longer keeps customers in the iOS ecosystem longer. That becomes even more important as the company places greater emphasis not on hardware but on services like Apple Music. It also offers an important point of differentiation from Android, whose fragmented ecosystem means even flagship devices rarely continue to be fully supported beyond two years.
“In reality, the big picture is still very good for Apple,” Maldonado says. Compared with Android, “Apple’s in a better spot, because the phones last longer.”
That’s cold comfort today and doesn’t help a whit with China. But news that people are holding onto their iPhones longer should be taken for what it really is: A sign of progress and a win for everyone. Even Apple.
1. The GDPR Deadline Arrives

Companies around the world are scrambling to bring their businesses and practices into compliance with the GDPR, Europe’s new data privacy regulation – a significant task for many of them. While technically the deadline to get everything in order passed on May 25, for many companies the process will continue well into June and possibly beyond. Some companies are even shutting down in Europe for good, or for as long as it takes them to get into compliance.
Even with the deadline behind us, the GDPR continues to be a top story for the tech world and may remain so for some time to come.
2. Amazon Provides Facial Recognition Tech to Law Enforcement
Civil rights groups have called for the company to stop allowing law enforcement access to the tech out of concerns that increased government surveillance can pose a threat to vulnerable communities in the country. In spite of the public criticism, Amazon hasn’t backed off on providing the tech to authorities, at least as of this time.
3. Apple Looks Into Self-Driving Employee Shuttles
Of the many problems facing our world, the frustrating work commute is one that many of the brightest minds in tech deal with just like the rest of us. Which makes it a problem the biggest tech companies have a strong incentive to try to solve.
Apple is one of many companies that’s invested in developing self-driving cars as a possible solution, but while that goal is still (probably) years away, they’ve narrowed their focus to teaming up with VW to create self-driving shuttles just for their employees. Even that project is moving slower than the company had hoped, but they’re aiming to have some shuttles ready by the end of the year.
4. Court Weighs in on President’s Tendency to Block Critics on Twitter
Three years ago no one would have imagined that Twitter would be a president’s go-to source for making announcements, but today it’s used to that effect more frequently than official press conferences or briefings.
In a court battle that may sound surreal to many of us, a judge just found that the president can no longer legally block other users on Twitter. The court asserted that blocking users on a public forum like Twitter amounts to a violation of their First Amendment rights. The judgment does still allow for the president and other public officials to mute users they don’t agree with, though.
5. YouTube Launches Music Streaming Service
YouTube joined the ranks of Spotify, Pandora, and Amazon this past month with their own streaming music service. Consumers can use a free version of the service that includes ads, or can pay $9.99 for the ad-free version.
With so many similar services already on the market, people weren’t exactly clamoring for another music streaming option. But since YouTube is likely to remain the reigning source for videos, it doesn’t necessarily need to unseat Spotify to still be okay. And with access to Google’s extensive user data, it may be able to provide more useful recommendations than its main competitors in the space, which is one way the service could differentiate itself.
6. Facebook Institutes Political Ad Rules
Facebook hasn’t yet left behind the controversies of the last election. The company is still working to proactively respond to criticism of its role in the spread of political propaganda many believe influenced election results. One of the solutions they’re trying is a new set of rules for any political ads run on the platform.
Any campaign that intends to run Facebook ads is now required to verify its identity using a verification code that Facebook mails to its address. While Facebook has been promoting these new rules for a few weeks to politicians active on the platform, some felt blindsided when they realized, right before their primaries no less, that they could no longer place ads without waiting 12 to 15 days for a verification code to arrive in the mail. Politicians in this position blame the company for making a change that could affect their chances in the upcoming election.
Even in their efforts to avoid swaying elections, Facebook has found themselves criticized for doing just that. They’re probably feeling at this point like they just can’t win.
7. Another Big Month for Tech IPOs
This year has seen one tech IPO after another and this month is no different. Chinese smartphone company Xiaomi has a particularly large IPO in the works. The company seeks to join the Hong Kong stock exchange on June 7 with an initial public offering that experts anticipate could reach $10 billion.
The online lending platform GreenSky started trading on the New York Stock Exchange on May 23 and sold 38 million shares on its first day, 4 million more than expected. This month continues 2018’s trend of tech companies going public, largely to great success.
8. StumbleUpon Shuts Down
In the internet’s ongoing evolution, there will always be tech companies that win and those that fall by the wayside. StumbleUpon, a content discovery platform that had its heyday in the early aughts, is officially shutting down on June 30.
Since its 2002 launch, the service has helped over 40 million users “stumble upon” 60 billion new websites and pieces of content. The company behind StumbleUpon plans to create a new platform, called Mix, that serves a similar purpose and may be more useful to former StumbleUpon users.
9. Uber and Lyft Invest in Driver Benefits
In spite of their ongoing success, the popular ridesharing platforms Uber and Lyft have faced their share of criticism since they came onto the scene. One of the common complaints critics have made is that the companies don’t provide proper benefits to their drivers. And in fact, the companies have fought to keep drivers classified legally as contractors so they’re off the hook for covering the cost of employee taxes and benefits.
Recently both companies have taken steps to make driving for them a little more attractive. Uber has begun offering Partner Protection to its drivers in Europe, which includes health insurance, sick pay, and parental leave – so far nothing similar in the U.S. though. For its part, Lyft is investing $100 million in building driver support centers where their drivers can stop to get discounted car maintenance, tax help, and customer support help in person from Lyft staff. It’s not the same as getting full employee benefits (in the U.S. at least), but it’s something.