Category archive: Web Development

DKIM, SPF, DMARC demystified

What all the stuff in email headers means—and how to sniff out spoofing

Parsing email headers needs care and knowledge—but it requires no special tools.

Come to think of it, maybe you shouldn't open this one at all. (Image: Aurich / Thinkstock)

I pretty frequently get requests for help from someone who has been impersonated—or whose child has been impersonated—via email. Even when you know how to "view headers" or "view source" in your email client, the spew of diagnostic wharrgarbl can be pretty overwhelming if you don't know what you're looking at. Today, we're going to step through a real-world set of (anonymized) email headers and describe the process of figuring out what's what.

Before we get started with the actual headers, though, we're going to take a quick detour through an overview of what the overall path of an email message looks like in the first place. (More experienced sysadmin types who already know what terms like "MTA" and "SPF" stand for can skip a bit ahead to the fun part!)

From MUA to MTA, and back to MUA again

The basic components involved in sending and receiving email are the Mail User Agent and Mail Transfer Agent. In the briefest possible terms, an MUA is the program you use to read and send mail from your own personal computer (like Thunderbird, or Mail.app, or even a webmail interface like Gmail or Outlook), and MTAs are programs that accept messages from senders and route them along to their final recipients.

Traditionally, mail was sent to a mail server using the Simple Mail Transfer Protocol (SMTP) and downloaded from the server using the Post Office Protocol (abbreviated as POP3, since version 3 is the most commonly used version of the protocol). A traditional Mail User Agent—such as Mozilla Thunderbird—would need to know both protocols; it would send all of its user’s messages to the user’s mail server using SMTP, and it would download messages intended for the user from the user’s mail server using POP3.
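To make this concrete, here's a minimal sketch of both legs of that journey using Python's standard smtplib and poplib modules. The hostnames, addresses, and credentials are placeholders; a real MUA would use your provider's actual settings.

```python
import poplib
import smtplib
from email.message import EmailMessage

# Compose a simple message.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.net"
msg["Subject"] = "Hello"
msg.set_content("Sent with SMTP, fetched with POP3.")

# Leg 1: the sender's MUA hands the message to their mail server via SMTP.
with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.starttls()  # upgrade the connection to TLS before logging in
    smtp.login("alice@example.com", "app-password")
    smtp.send_message(msg)

# Leg 2: the recipient's MUA later downloads mail from their server via POP3.
pop = poplib.POP3_SSL("mail.example.net")
pop.user("bob@example.net")
pop.pass_("app-password")
count, _ = pop.stat()  # (message count, mailbox size in bytes)
for line in pop.retr(count)[1]:  # fetch the newest message, line by line
    print(line.decode("utf-8", "replace"))
pop.quit()
```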

As time went by, things got a bit more complex. IMAP largely superseded POP3 since it allowed the user to leave the actual email on the server. This meant that you could read your mail from multiple machines (perhaps a desktop PC and a laptop) and have all of the same messages, organized the same way, everywhere you might check them.

Finally, as time went by, webmail became more and more popular. If you use a website as your MUA, you don’t need to know any pesky SMTP or IMAP server settings; you just need your email address and password, and you’re ready to read.

Ultimately, any message from one human user to another follows the path of MUA ⟶ MTA(s) ⟶ MUA. The analysis of email headers involves tracing that flow and looking for any funny business.
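If you'd rather pull headers programmatically than hunt through your client's menus, a short script will do it. Here's a sketch using Python's standard imaplib and email modules, assuming a hypothetical IMAP server and account:

```python
import email
import imaplib
from email import policy

# Connect and select the inbox without marking anything as read.
imap = imaplib.IMAP4_SSL("imap.example.net")
imap.login("bob@example.net", "app-password")
imap.select("INBOX", readonly=True)

# Grab the ID of the most recent message.
_, data = imap.search(None, "ALL")
latest = data[0].split()[-1].decode()

# BODY.PEEK[HEADER] fetches only the raw header block.
_, fetched = imap.fetch(latest, "(BODY.PEEK[HEADER])")
headers = email.message_from_bytes(fetched[0][1], policy=policy.default)

# The Received chain reads bottom-up: the lowest hop is closest to the sender.
for hop in reversed(headers.get_all("Received", [])):
    print(hop)

# Many receiving servers also record SPF/DKIM/DMARC verdicts here.
print(headers.get("Authentication-Results"))
imap.logout()
```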

TLS in-flight encryption

The original SMTP protocol had absolutely no thought toward security—any server was expected to accept any message, from any sender, and pass the message along to any other server it thought might know how to get to the recipient in the To: field. That was fine and dandy in email’s earliest days of trusted machines and people, but it rapidly turned into a nightmare as the Internet scaled exponentially and became more commercially valuable.

It’s still possible to send email with absolutely no thought toward encryption or authentication, but such messages will very likely get rejected along the way by anti-spam defenses. Modern email typically is encrypted in-flight, and signed and authenticated at-rest. In-flight encryption is accomplished by TLS, which helps keep the content of a message from being captured or altered in-flight from one server to another. That’s great, so far as it goes, but in-flight TLS is only applied when mail is being relayed from one MTA to another MTA along the delivery path.

If an email travels from the sender through three MTAs before reaching its recipient, any server along the way can alter the content of the message—TLS encrypts the transmission from point to point but does nothing to verify the authenticity of the content itself or the path through which it’s traveling.

SPF—the Sender Policy Framework

The owner of a domain can set a TXT record in its DNS that states which servers are allowed to send mail on behalf of that domain. For a very simple example, Ars Technica's SPF record says that email from arstechnica.com should only come from the servers specified in Google's SPF record. Any other source should be met with a SoftFail error; this effectively means "trust it less, but don't necessarily yeet it into the sun based on this alone."
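You can inspect any domain's published SPF policy yourself, since it's just a DNS TXT record. Here's a minimal sketch using the third-party dnspython package; the record shown in the comment illustrates the delegate-to-Google pattern described above, but check the live record for the real policy:

```python
import dns.resolver  # pip install dnspython

def spf_record(domain):
    """Return the domain's SPF policy string, or None if it publishes none."""
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

# A record like "v=spf1 include:_spf.google.com ~all" says: only servers in
# Google's SPF record may send for this domain; anything else SoftFails (~all).
print(spf_record("arstechnica.com"))
```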

SPF headers in an email can't be completely trusted after they're generated, because there is no encryption involved. SPF is really only useful to the servers themselves, in real time. If a server knows that it sits at the outer boundary of a network, it also knows that any message it receives should be coming from a server specified in the sender's domain's SPF record. This makes SPF a great tool for getting rid of spam quickly.

DKIM—DomainKeys Identified Mail

Similarly to SPF, DKIM is set in TXT records in a sending domain’s DNS. Unlike SPF, DKIM is an authentication technique that validates the content of the message itself.

The owner of the sending domain generates a public/private key pair and stores the public key in a TXT record on the domain’s DNS. Mail servers on the outer boundary of the domain’s infrastructure use the private DKIM key to generate a signature (properly an encrypted hash) of the entire message body, including all headers accumulated on the way out of the sender’s infrastructure. Recipients can decrypt the DKIM signature using the public DKIM key retrieved from DNS, then make sure the hash matches the entire message body, including headers, as they received it.
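In practice, nobody decrypts DKIM signatures by hand; libraries handle the canonicalization, DNS lookup, and hash comparison. Here's a sketch using the third-party dkimpy package, assuming you've saved a raw message as message.eml:

```python
import dkim  # pip install dkimpy

with open("message.eml", "rb") as fh:
    raw = fh.read()

# dkim.verify() re-computes the hash over the body and the headers listed in
# the DKIM-Signature's h= tag, then checks it against the signature using the
# public key fetched from the <selector>._domainkey.<domain> TXT record.
if dkim.verify(raw):
    print("DKIM verifies: the signed headers and body are unaltered")
else:
    print("DKIM fails: altered in transit, or not from the purported domain")
```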

If the decrypted DKIM signature is a matching hash for the entire body, the message is likely to be legitimate and unaltered—at least as verified by a private key belonging only to the domain owner (the end user does not have or need this key). If the DKIM signature is invalid, you know that the message either did not originate from the purported sender’s domain or has been altered (even if only by adding extra headers) by some other server in between. Or both!

This becomes extremely useful when trying to decide whether a set of headers is legitimate or spoofed—a matching DKIM signature means that the sender’s infrastructure vouches for all headers below the signature line. (And that’s all it means, too—DKIM is merely one tool in the mail server admin’s toolbox.)

DMARC—Domain-based Message Authentication, Reporting, and Conformance

DMARC extends SPF and DKIM. It’s not particularly exciting from the perspective of someone trying to trace a possibly fraudulent email; it boils down to a simple set of instructions for mail servers about how to handle SPF and DKIM records. DMARC can be used to request that a mail server pass, quarantine, or reject a message based on the outcome of SPF and DKIM verification, but it does not add any additional checks on its own.
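A domain's DMARC policy is likewise just a TXT record, published at the _dmarc subdomain, so it's easy to inspect. Another sketch with dnspython, using a hypothetical domain:

```python
import dns.resolver  # pip install dnspython

def dmarc_policy(domain):
    """Parse a domain's DMARC record into a tag/value dict."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    record = b"".join(answers[0].strings).decode()
    # e.g. "v=DMARC1; p=quarantine; rua=mailto:reports@example.com"
    return dict(tag.strip().split("=", 1) for tag in record.split(";") if "=" in tag)

policy = dmarc_policy("example.com")  # hypothetical domain
print(policy.get("p"))  # the requested disposition: none, quarantine, or reject
```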

Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

Dozens of companies use smartphone locations to help advertisers and even hedge funds. They say it’s anonymous, but the data shows how personal it is.

The millions of dots on the map trace highways, side streets and bike trails — each one following the path of an anonymous cellphone user.

One path tracks someone from a home outside Newark to a nearby Planned Parenthood, remaining there for more than an hour. Another represents a person who travels with the mayor of New York during the day and returns to Long Island at night.

Yet another leaves a house in upstate New York at 7 a.m. and travels to a middle school 14 miles away, staying until late afternoon each school day. Only one person makes that trip: Lisa Magrin, a 46-year-old math teacher. Her smartphone goes with her.

An app on the device gathered her location information, which was then sold without her knowledge. It recorded her whereabouts as often as every two seconds, according to a database of more than a million phones in the New York area that was reviewed by The New York Times. While Ms. Magrin’s identity was not disclosed in those records, The Times was able to easily connect her to that dot.

The app tracked her as she went to a Weight Watchers meeting and to her dermatologist’s office for a minor procedure. It followed her hiking with her dog and staying at her ex-boyfriend’s home, information she found disturbing.

“It’s the thought of people finding out those intimate details that you don’t want people to know,” said Ms. Magrin, who allowed The Times to review her location data.

Like many consumers, Ms. Magrin knew that apps could track people’s movements. But as smartphones have become ubiquitous and technology more accurate, an industry of snooping on people’s daily habits has spread and grown more intrusive.

 

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.


These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.

Businesses say their interest is in the patterns, not the identities, that the data reveals about consumers. They note that the information apps collect is tied not to someone’s name or phone number but to a unique ID. But those with access to the raw data — including employees or clients — could still identify a person without consent. They could follow someone they knew, by pinpointing a phone that regularly spent time at that person’s home address. Or, working in reverse, they could attach a name to an anonymous dot, by seeing where the device spent nights and using public records to figure out who lived there.

Many location companies say that when phone users enable location services, their data is fair game. But, The Times found, the explanations people see when prompted to give permission are often incomplete or misleading. An app may tell users that granting access to their location will help them get traffic information, but not mention that the data will be shared and sold. That disclosure is often buried in a vague privacy policy.

“Location information can reveal some of the most intimate details of a person’s life — whether you’ve visited a psychiatrist, whether you went to an A.A. meeting, who you might date,” said Senator Ron Wyden, Democrat of Oregon, who has proposed bills to limit the collection and sale of such data, which are largely unregulated in the United States.

“It’s not right to have consumers kept in the dark about how their data is sold and shared and then leave them unable to do anything about it,” he added.

Mobile Surveillance Devices

After Elise Lee, a nurse in Manhattan, saw that her device had been tracked to the main operating room at the hospital where she works, she expressed concern about her privacy and that of her patients.

“It’s very scary,” said Ms. Lee, who allowed The Times to examine her location history in the data set it reviewed. “It feels like someone is following me, personally.”

The mobile location industry began as a way to customize apps and target ads for nearby businesses, but it has morphed into a data collection and analysis machine.

Retailers look to tracking companies to tell them about their own customers and their competitors’. For a web seminar last year, Elina Greenstein, an executive at the location company GroundTruth, mapped out the path of a hypothetical consumer from home to work to show potential clients how tracking could reveal a person’s preferences. For example, someone may search online for healthy recipes, but GroundTruth can see that the person often eats at fast-food restaurants.

“We look to understand who a person is, based on where they’ve been and where they’re going, in order to influence what they’re going to do next,” Ms. Greenstein said.

Financial firms can use the information to make investment decisions before a company reports earnings — seeing, for example, if more people are working on a factory floor, or going to a retailer’s stores.

 

Health care facilities are among the more enticing but troubling areas for tracking, as Ms. Lee’s reaction demonstrated. Tell All Digital, a Long Island advertising firm that is a client of a location company, says it runs ad campaigns for personal injury lawyers targeting people anonymously in emergency rooms.

“The book ‘1984,’ we’re kind of living it in a lot of ways,” said Bill Kakis, a managing partner at Tell All.

Jails, schools, a military base and a nuclear power plant — even crime scenes — appeared in the data set The Times reviewed. One person, perhaps a detective, arrived at the site of a late-night homicide in Manhattan, then spent time at a nearby hospital, returning repeatedly to the local police station.

Two location firms, Fysical and SafeGraph, mapped people attending the 2017 presidential inauguration. On Fysical’s map, a bright red box near the Capitol steps indicated the general location of President Trump and those around him, cellphones pinging away. Fysical’s chief executive said in an email that the data it used was anonymous. SafeGraph did not respond to requests for comment.

 

More than 1,000 popular apps contain location-sharing code from such companies, according to 2018 data from MightySignal, a mobile analysis firm. Google’s Android system was found to have about 1,200 apps with such code, compared with about 200 on Apple’s iOS.

The most prolific company was Reveal Mobile, based in North Carolina, which had location-gathering code in more than 500 apps, including many that provide local news. A Reveal spokesman said that the popularity of its code showed that it helped app developers make ad money and consumers get free services.

To evaluate location-sharing practices, The Times tested 20 apps, most of which had been flagged by researchers and industry insiders as potentially sharing the data. Together, 17 of the apps sent exact latitude and longitude to about 70 businesses. Precise location data from one app, WeatherBug on iOS, was received by 40 companies. When contacted by The Times, some of the companies that received that data described it as “unsolicited” or “inappropriate.”

WeatherBug, owned by GroundTruth, asks users’ permission to collect their location and tells them the information will be used to personalize ads. GroundTruth said that it typically sent the data to ad companies it worked with, but that if they didn’t want the information they could ask to stop receiving it.

The Times also identified more than 25 other companies that have said in marketing materials or interviews that they sell location data or services, including targeted advertising.


The spread of this information raises questions about how securely it is handled and whether it is vulnerable to hacking, said Serge Egelman, a computer security and privacy researcher affiliated with the University of California, Berkeley.

“There are really no consequences” for companies that don’t protect the data, he said, “other than bad press that gets forgotten about.”

A Question of Awareness

Companies that use location data say that people agree to share their information in exchange for customized services, rewards and discounts. Ms. Magrin, the teacher, noted that she liked that tracking technology let her record her jogging routes.

Brian Wong, chief executive of Kiip, a mobile ad firm that has also sold anonymous data from some of the apps it works with, says users give apps permission to use and share their data. “You are receiving these services for free because advertisers are helping monetize and pay for it,” he said, adding, “You would have to be pretty oblivious if you are not aware that this is going on.”

But Ms. Lee, the nurse, had a different view. “I guess that’s what they have to tell themselves,” she said of the companies. “But come on.”

Ms. Lee had given apps on her iPhone access to her location only for certain purposes — helping her find parking spaces, sending her weather alerts — and only if they did not indicate that the information would be used for anything else, she said. Ms. Magrin had allowed about a dozen apps on her Android phone access to her whereabouts for services like traffic notifications.

But it is easy to share information without realizing it. Of the 17 apps that The Times saw sending precise location data, just three on iOS and one on Android told users in a prompt during the permission process that the information could be used for advertising. Only one app, GasBuddy, which identifies nearby gas stations, indicated that data could also be shared to “analyze industry trends.”

More typical was theScore, a sports app: When prompting users to grant access to their location, it said the data would help “recommend local teams and players that are relevant to you.” The app passed precise coordinates to 16 advertising and location companies.

A spokesman for theScore said that the language in the prompt was intended only as a “quick introduction to certain key product features” and that the full uses of the data were described in the app’s privacy policy.

The Weather Channel app, owned by an IBM subsidiary, told users that sharing their locations would let them get personalized local weather reports. IBM said the subsidiary, the Weather Company, discussed other uses in its privacy policy and in a separate “privacy settings” section of the app. Information on advertising was included there, but a part of the app called “location settings” made no mention of it.

The app did not explicitly disclose that the company had also analyzed the data for hedge funds — a pilot program that was promoted on the company’s website. An IBM spokesman said the pilot had ended. (IBM updated the app’s privacy policy on Dec. 5, after queries from The Times, to say that it might share aggregated location data for commercial purposes such as analyzing foot traffic.)

Even industry insiders acknowledge that many people either don’t read those policies or may not fully understand their opaque language. Policies for apps that funnel location information to help investment firms, for instance, have said the data is used for market analysis, or simply shared for business purposes.

“Most people don’t know what’s going on,” said Emmett Kilduff, the chief executive of Eagle Alpha, which sells data to financial firms and hedge funds. Mr. Kilduff said responsibility for complying with data-gathering regulations fell to the companies that collected it from people.

Many location companies say they voluntarily take steps to protect users’ privacy, but policies vary widely.

For example, Sense360, which focuses on the restaurant industry, says it scrambles data within a 1,000-foot square around the device’s approximate home location. Another company, Factual, says that it collects data from consumers at home, but that its database doesn’t contain their addresses.

Some companies say they delete the location data after using it to serve ads, some use it for ads and pass it along to data aggregation companies, and others keep the information for years.

Several people in the location business said that it would be relatively simple to figure out individual identities in this kind of data, but that they didn’t do it. Others suggested it would require so much effort that hackers wouldn’t bother.

It “would take an enormous amount of resources,” said Bill Daddi, a spokesman for Cuebiq, which analyzes anonymous location data to help retailers and others, and raised more than $27 million this year from investors including Goldman Sachs and Nasdaq Ventures. Nevertheless, Cuebiq encrypts its information, logs employee queries and sells aggregated analysis, he said.

There is no federal law limiting the collection or use of such data. Still, apps that ask for access to users’ locations, prompting them for permission while leaving out important details about how the data will be used, may run afoul of federal rules on deceptive business practices, said Maneesha Mithal, a privacy official at the Federal Trade Commission.

“You can’t cure a misleading just-in-time disclosure with information in a privacy policy,” Ms. Mithal said.

Following the Money

Apps form the backbone of this new location data economy.

The app developers can make money by directly selling their data, or by sharing it for location-based ads, which command a premium. Location data companies pay half a cent to two cents per user per month, according to offer letters to app makers reviewed by The Times.

Targeted advertising is by far the most common use of the information.

Google and Facebook, which dominate the mobile ad market, also lead in location-based advertising. Both companies collect the data from their own apps. They say they don’t sell it but keep it for themselves to personalize their services, sell targeted ads across the internet and track whether the ads lead to sales at brick-and-mortar stores. Google, which also receives precise location information from apps that use its ad services, said it modified that data to make it less exact.

Smaller companies compete for the rest of the market, including by selling data and analysis to financial institutions. This segment of the industry is small but growing, expected to reach about $250 million a year by 2020, according to the market research firm Opimas.

Apple and Google have a financial interest in keeping developers happy, but both have taken steps to limit location data collection. In the most recent version of Android, apps that are not in use can collect locations “a few times an hour,” instead of continuously.

Apple has been stricter, for example requiring apps to justify collecting location details in pop-up messages. But Apple’s instructions for writing these pop-ups do not mention advertising or data sale, only features like getting “estimated travel times.”

A spokesman said the company mandates that developers use the data only to provide a service directly relevant to the app, or to serve advertising that meets Apple's guidelines.

Apple recently shelved plans that industry insiders say would have significantly curtailed location collection. Last year, the company said an upcoming version of iOS would show a blue bar onscreen whenever an app not in use was gaining access to location data.

The discussion served as a “warning shot” to people in the location industry, David Shim, chief executive of the location company Placed, said at an industry event last year.

After examining maps showing the locations extracted by their apps, Ms. Lee, the nurse, and Ms. Magrin, the teacher, immediately limited what data those apps could get. Ms. Lee said she told the other operating-room nurses to do the same.

“I went through all their phones and just told them: ‘You have to turn this off. You have to delete this,’” Ms. Lee said. “Nobody knew.”

Source: https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html?action=click&module=Top%20Stories&pgtype=Homepage


Private Messages Are the New (Old) Social Network


Facebook is onto us.

It’s onto me, anyway. I am merely one anecdata point among billions, but I’m sure I’m not the only Facebook user who has found herself shying away from the very public, often performative, and even tiring habit of posting regular updates to Facebook and Instagram. Over the past year I’ve found myself thinking not about quitting social networks, but about redefining them. For me, that process has involved a lot more private messaging.

Facebook, it seems, has noticed. Last week, The New York Times reported that Facebook chief executive Mark Zuckerberg plans to unify Facebook Messenger, WhatsApp, and Instagram messaging on the backend of the services. This would make it possible for people relying on different flavors of Facebook apps to all gorge at the same messaging table. On the one hand, the move is truly Facebookian—just try to extricate yourself from Facebook, and it will try every which way to pull you back in. On the other hand, it makes sense for Facebook for a few reasons.

My personal relationship with Facebook is multi-faceted. I have a personal account and a journalist’s page. I also use Instagram and WhatsApp. But last year, I let my professional page languish. I stopped posting to my personal feed as frequently. Instead I turned to private messaging.

During a trip to Europe last fall, I shared everything I felt compelled to share with a small group of people on Apple Messages. The excursion to see one of the largest waves ever surfed by a human? I shared the photo in a private Slack message with coworkers, instead of posting on Facebook. Wedding photos no longer go up on Instagram. During the holidays, I happily embrace my role as halfway-decent photographer, but when I share the photos with friends and family, it’s only through Messages, WhatsApp, or private photo albums.

These tools have become my preferred method of communicating. It’s not some big revelation, or even anything that’s new; peer-to-peer messaging, or at least the guise of “private” messaging, is as old as the consumer internet itself. When our worlds expand in a way that feels unmanageable, our instinct is sometimes to shrink them until we’re comfortable again, for better or worse. Remember Path, the social network limited to just your closest circle? That didn’t work out, but the entire app was built upon the Dunbar theory that our species can’t handle more than 150 close friends. There just might have been something to that.

“I think a lot of people experience this,” says Margaret Morris, a clinical psychologist and author of Left to Our Own Devices. “When you post something in such a public way, the question is: What are the motivations? But when it’s in a private thread, it’s: Why am I sharing this? Oh, it’s because I think you’ll like this. I think we’ll connect over this. The altruistic motivations can be far more clear in private messaging.”

Of course, “altruism” in this case only applies to the friends exchanging messages and not the messaging service providers. Facebook’s efforts to unify its messaging platforms are at least partly rooted in a desire to monetize our activity, whether that’s by keeping us engaged in an outward-facing News Feed or within a little chat box. And there’s a major distinction between so-called private messages and what Morris calls “Privacy with a capital P.”

“There’s one kind of privacy, which is: what does my cousin know, or what does my co-worker know,” Morris says, “And then there’s the kind of privacy that’s about the data Facebook has.” Facebook’s plan is reportedly to offer end-to-end encryption on all of its messaging apps once the backend systems have been merged. As my WIRED colleague Lily Newman writes, cryptographers and privacy advocates already see obvious hurdles in making this work.

That’s why I often use Apple’s Messages and even iCloud photo sharing. There’s an explicit agreement that exists between the service provider and user: Buy our ridiculously expensive hardware, and we won’t sell your data. (While iCloud has been hacked before, Apple swears by the end-to-end encryption between iPhone users and says it doesn’t share Messages data with third-party apps). But just using Messages isn’t realistic, either. The platform is only functional between two iPhones. Not everyone can afford Apple products, and in other parts of the world, such as China or India, apps like WeChat and WhatsApp dominate private messaging. That means you’re going to end up using other apps if you plan to communicate outside of a bubble of iPhone lovers.

But beyond privacy with a capital P—which is, for many people, the most important consideration when it comes to social media—there's the psychology of privacy when it comes to sharing updates about our personal lives, and connecting with other humans. Social networks have made human connections infinitely more possible and, at the same time, turned the whole notion on its head.

Morris, for example, sees posting something publicly to a Facebook feed as a yearning for interconnectedness, while a private messaging thread is a quest for what she calls attunement, a way to strengthen a bond between two people. But, she notes, some people take a screenshot from a private message and then, having failed in their quest for attunement, publish an identity-stripped version of it to their feed. Guilty as charged. Social networking is no longer just a feed or an app or a chat box or SMS, but some amalgamation of it all.

Posting private messages publicly is not something I plan to make a habit of, but there is still the urge sometimes to share. I’m still on Twitter. I’ll likely still post to Facebook and Instagram from time to time. At some point I may be looking for a sense of community that exists beyond my own small private messaging groups, for a tantalizing blend of familiarity and anonymity in a Facebook group of like-minded hobbyists. For some people, larger social networking communities are lifelines as they struggle with health, with family, with job worries, with life.

But right now, “private” messages are the way to share my life with the people who matter most, an attempt to splinter off my social interactions into something more satisfying—especially when posting to Facebook has never seemed less appealing.

 

Source: https://www.wired.com/story/private-messages-new-social-networks/

June 2018 Tech News & Trends to Watch

1. Companies Worldwide Strive for GDPR Compliance

By now, everyone with an email address has seen a slew of emails announcing privacy policy updates. You have Europe’s GDPR legislation to thank for your overcrowded inbox. GDPR creates rules around how much data companies are allowed to collect, how they’re able to use that data, and how clear they have to be with consumers about it all.

Companies around the world are scrambling to get their business and its practices into compliance – a significant task for many of them. While the deadline to get everything in order technically passed on May 25, for many companies the process will continue well into June and possibly beyond. Some companies are even shutting down in Europe for good, or for as long as it takes them to get into compliance.

Even with the deadline behind us, the GDPR continues to be a top story for the tech world and may remain so for some time to come.

 

2. Amazon Provides Facial Recognition Tech to Law Enforcement

Amazon can’t seem to go a whole month without showing up in a tech news roundup. This month it’s for a controversial story: selling use of Rekognition, their facial recognition software, to law enforcement agencies on the cheap.

Civil rights groups have called for the company to stop allowing law enforcement access to the tech out of concerns that increased government surveillance can pose a threat to vulnerable communities in the country. In spite of the public criticism, Amazon hasn’t backed off on providing the tech to authorities, at least as of this time.

 

3. Apple Looks Into Self-Driving Employee Shuttles

Of the many problems facing our world, the frustrating work commute is one that many of the brightest minds in tech deal with just like the rest of us. Which makes it a problem the biggest tech companies have a strong incentive to try to solve.

Apple is one of many companies that’s invested in developing self-driving cars as a possible solution, but while that goal is still (probably) years away, they’ve narrowed their focus to teaming up with VW to create self-driving shuttles just for their employees.  Even that project is moving slower than the company had hoped, but they’re aiming to have some shuttles ready by the end of the year.

 

4. Court Weighs in on President’s Tendency to Block Critics on Twitter

Three years ago no one would have imagined that Twitter would be a president’s go-to source for making announcements, but today it’s used to that effect more frequently than official press conferences or briefings.

In a court battle that may sound surreal to many of us, a judge just found that the president can no longer legally block other users on Twitter.  The court asserted that blocking users on a public forum like Twitter amounts to a violation of their First Amendment rights. The judgment does still allow for the president and other public officials to mute users they don’t agree with, though.

 

5. YouTube Launches Music Streaming Service

YouTube joined the ranks of Spotify, Pandora, and Amazon this past month with their own streaming music service. Consumers can use a free version of the service that includes ads, or can pay $9.99 for the ad-free version.


With so many similar services already on the market, people weren’t exactly clamoring for another music streaming option. But since YouTube is likely to remain the reigning source for videos, it doesn’t necessarily need to unseat Spotify to still be okay. And with access to Google’s extensive user data, it may be able to provide more useful recommendations than its main competitors in the space, which is one way the service could differentiate itself.

 

6. Facebook Institutes Political Ad Rules

Facebook hasn’t yet left behind the controversies of the last election. The company is still working to proactively respond to criticism of its role in the spread of political propaganda many believe influenced election results. One of the solutions they’re trying is a new set of rules for any political ads run on the platform.

Any campaign that intends to run Facebook ads is now required to verify their identity with a card Facebook mails to their address that has a verification code. While Facebook has been promoting these new rules for a few weeks to politicians active on the platform, some felt blindsided when they realized, right before their primaries no less, that they could no longer place ads without waiting 12 to 15 days for a verification code to come in the mail. Politicians in this position blame the company for making a change that could affect their chances in the upcoming election.

Even in their efforts to avoid swaying elections, Facebook has found themselves criticized for doing just that. They’re probably feeling at this point like they just can’t win.

 

7. Another Big Month for Tech IPOs

This year has seen one tech IPO after another and this month is no different. Chinese smartphone company Xiaomi has a particularly large IPO in the works. The company seeks to join the Hong Kong stock exchange on June 7 with an initial public offering that experts anticipate could reach $10 billion.

The online lending platform Greensky started trading on the New York Stock Exchange on May 23 and sold 38 million shares in its first day, 4 million more than expected. This month continues 2018’s trend of tech companies going public, largely to great success.

 

8. StumbleUpon Shuts Down

In the internet’s ongoing evolution, there will always be tech companies that win and those that fall by the wayside. StumbleUpon, a content discovery platform that had its heyday in the early aughts, is officially shutting down on June 30.

Since its 2002 launch, the service has helped over 40 million users "stumble upon" 60 billion new websites and pieces of content. The company behind StumbleUpon plans to create a new platform, called Mix, that serves a similar purpose and may be more useful to former StumbleUpon users.

 

9. Uber and Lyft Invest in Driver Benefits

In spite of their ongoing success, the popular ridesharing platforms Uber and Lyft have faced their share of criticism since they came onto the scene. One of the common complaints critics have made is that the companies don’t provide proper benefits to their drivers. And in fact, the companies have fought to keep drivers classified legally as contractors so they’re off the hook for covering the cost of employee taxes and benefits.

Recently both companies have taken steps to make driving for them a little more attractive. Uber has begun offering Partner Protection to its drivers in Europe, which includes health insurance, sick pay, and parental leave – so far there's nothing similar in the U.S., though. For its part, Lyft is investing $100 million in building driver support centers where their drivers can stop to get discounted car maintenance, tax help, and customer support help in person from Lyft staff. It's not the same as getting full employee benefits (in the U.S. at least), but it's something.

Source: https://www.hostgator.com/blog/june-tech-trends-to-watch/

Forget Facebook


Cambridge Analytica may have used Facebook’s data to influence your political opinions. But why does least-liked tech company Facebook have all this data about its users in the first place?

Let’s put aside Instagram, WhatsApp and other Facebook products for a minute. Facebook has built the world’s biggest social network. But that’s not what they sell. You’ve probably heard the internet saying “if a product is free, it means that you are the product.”

And it's particularly true in this case because Facebook is the world's second biggest advertising company, behind Google. During the last quarter of 2017, Facebook reported $12.97 billion in revenue, including $12.78 billion from ads.

That’s 98.5 percent of Facebook’s revenue coming from ads.

Ads aren’t necessarily a bad thing. But Facebook has reached ad saturation in the newsfeed. So the company has two options — creating new products and ad formats, or optimizing those sponsored posts.


This isn’t a zero-sum game — Facebook has been doing both at the same time. That’s why you’re seeing more ads on Instagram and Messenger. And that’s also why ads on Facebook seem more relevant than ever.

If Facebook can show you relevant ads and you end up clicking more often on those ads, then advertisers will pay Facebook more money.

So Facebook has been collecting as much personal data about you as possible — it’s all about showing you the best ad. The company knows your interests, what you buy, where you go and who you’re sleeping with.

You can’t hide from Facebook

Facebook’s terms and conditions are a giant lie. They are purposely misleading, too long and too broad. So you can’t just read the company’s terms of service and understand what it knows about you.

That’s why some people have been downloading their Facebook data. You can do it too, it’s quite easy. Just head over to your Facebook settings and click the tiny link that says “Download a copy of your Facebook data.”

In that archive file, you’ll find your photos, your posts, your events, etc. But if you keep digging, you’ll also find your private messages on Messenger (by default, nothing is encrypted).

And if you keep digging a bit more, chances are you’ll also find your entire address book and even metadata about your SMS messages and phone calls.

All of this is by design and you agreed to it. Facebook has unified its terms of service and shares user data across all its apps and services (except WhatsApp data in Europe, for now). So if you follow a clothing brand on Instagram, you could see an ad from this brand on Facebook.com.

Messaging apps are privacy traps

But Facebook has also been using this trick quite a lot with Messenger. You might not remember, but the on-boarding experience on Messenger is really aggressive.

On iOS, the app shows you a fake permission popup to access your address book that says "Ok" or "Learn More". The company uses a fake popup because iOS will only display the real permission dialog once.

There’s a blinking arrow below the OK button.

If you click on “Learn More”, you get a giant blue button that says “Turn On”. Everything about this screen is misleading and Messenger tries to manipulate your emotions.

“Messenger only works when you have people to talk to,” it says. Nobody wants to be lonely, that’s why Facebook implies that turning on this option will give you friends.

Even worse, it says “if you skip this step, you’ll need to add each contact one-by-one to message them.” This is simply a lie as you can automatically talk to your Facebook friends using Messenger without adding them one-by-one.


If you tap on “Not Now”, Messenger will show you a fake notification every now and then to push you to enable contact syncing. If you tap on yes and disable it later, Facebook still keeps all your contacts on its servers.

On Android, you can let Messenger manage your SMS messages. Of course, you guessed it, Facebook uploads all your metadata. Facebook knows who you’re texting, when, how often.

Even if you disable it later, Facebook will keep this data for later reference.

But Facebook doesn’t stop there. The company knows a lot more about you than what you can find in your downloaded archive. The company asks you to share your location with your friends. The company tracks your web history on nearly every website on earth using embedded JavaScript.

But my favorite thing is probably peer-to-peer payments. In some countries, you can pay back your friends using Messenger. It’s free! You just have to add your card to the app.

It turns out that Facebook also buys data about your offline purchases. The next time you pay for a burrito with your credit card, Facebook will learn about this transaction and match this credit card number with the one you added in Messenger.

In other words, Messenger is a great Trojan horse designed to learn everything about you.

And the next time an app asks you to share your address book, there’s a 99-percent chance that this app is going to mine your address book to get new users, spam your friends, improve ad targeting and sell email addresses to marketing companies.

I could say the same thing about all the other permission popups on your phone. Be careful when you install an app from the Play Store or open an app for the first time on iOS. It's tempting to enable something when a feature doesn't work without it, but that convenience is how you end up with Facebook knowing everything about you.

GDPR to the rescue

There's one last hope. And that hope is GDPR. I encourage you to read the excellent explanation of GDPR by TechCrunch's Natasha Lomas to understand what the European regulation is all about.

Many of the misleading things that are currently happening at Facebook will have to change. You can’t force people to opt in like in Messenger. Data collection should be minimized to essential features. And Facebook will have to explain why it needs all this data to its users.

If Facebook doesn’t comply, the company will have to pay up to 4 percent of its global annual turnover. But that doesn’t stop you from actively reclaiming your online privacy right now.

You can’t be invisible on the internet, but you have to be conscious about what’s happening behind your back. Every time a company asks you to tap OK, think about what’s behind this popup. You can’t say that nobody told you.

Source: Techcrunch.com

What is GDPR – General Data Protection Regulation

Source: Techcrunch.com

European Union lawmakers proposed a comprehensive update to the bloc’s data protection and privacy rules in 2012.

Their aim: To take account of seismic shifts in the handling of information wrought by the rise of the digital economy in the years since the prior regime was penned — all the way back in 1995 when Yahoo was the cutting edge of online cool and cookies were still just tasty biscuits.

Here’s the EU’s executive body, the Commission, summing up the goal:

The objective of this new set of rules is to give citizens back control over their personal data, and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market which the Commission has prioritised. The reform will allow European citizens and businesses to fully benefit from the digital economy.

In even shorter terms: the EC's theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn't just something governments do.

The General Data Protection Regulation (aka GDPR) was agreed after more than three years of negotiations between the EU’s various institutions.

It's set to apply across the 28-Member-State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK's new Data Protection Bill). Yes, despite the fact that the country is currently in the process of (br)exiting the EU, the UK government has nonetheless committed to implementing the regulation because it needs to keep EU-UK data flowing freely in the post-Brexit future. Which gives an early indication of the pulling power of GDPR.

Meanwhile businesses operating in the EU are being bombarded with ads from a freshly energized cottage industry of ‘privacy consultants’ offering to help them get ready for the new regs — in exchange for a service fee. It’s definitely a good time to be a law firm specializing in data protection.

GDPR is a significant piece of legislation whose full impact will clearly take some time to shake out. In the meanwhile, here’s our guide to the major changes incoming and some potential impacts.

Data protection + teeth

A major point of note right off the bat is that GDPR does not merely apply to EU businesses; any entities processing the personal data of EU citizens need to comply. Facebook, for example — a US company that handles massive amounts of Europeans’ personal data — is going to have to rework multiple business processes to comply with the new rules. Indeed, it’s been working on this for a long time already.

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support GDPR compliance — specifying this included “senior executives from all product teams, designers and user experience/testing executives, policy executives, legal executives and executives from each of the Facebook family of companies”.

“Dozens of people at Facebook Ireland are working full time on this effort,” it said, noting too that the data protection team at its European HQ (in Dublin, Ireland) would be growing by 250% in 2017. It also said it was in the process of hiring a “top quality data protection officer” — a position the company appears to still be taking applications for.

The new EU rules require organizations to appoint a data protection officer if they process sensitive data on a large scale (which Facebook very clearly does), or if they collect info on many consumers — such as by performing online behavioral tracking. But, really, which online businesses aren't doing that these days?

The extra-territorial scope of GDPR casts the European Union as a global pioneer in data protection — and some legal experts suggest the regulation will force privacy standards to rise outside the EU too.

Sure, some US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes, treating personal data obtained from different geographies differently rather than streamlining everything under a GDPR-compliant process. But doing so means managing multiple data regimes, and at the very least it runs the risk of bad PR if you're outed as deliberately offering a lower privacy standard to your home users vs. customers abroad.

Ultimately, it may be easier (and less risky) for businesses to treat GDPR as the new ‘gold standard’ for how they handle all personal data, regardless of where it comes from.

And while not every company harvests Facebook levels of personal data, almost every company harvests some personal data. So for those with customers in the EU, GDPR cannot be ignored. At the very least, businesses will need to carry out a data audit to understand their risks and liabilities.

Privacy experts suggest that the really big change here is around enforcement. Because while the EU has had long established data protection standards and rules — and treats privacy as a fundamental right — its regulators have lacked the teeth to command compliance.

But now, under GDPR, financial penalties for data protection violations step up massively.

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).
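The "whichever is greater" structure is simple to express as a formula. Here's a toy sketch of the two tiers, using only the figures quoted above (turnover in euros):

```python
def max_gdpr_fine(global_annual_turnover, serious_infringement=True):
    """Upper bound on a GDPR fine under the two-tier system described above."""
    if serious_infringement:
        return max(0.04 * global_annual_turnover, 20_000_000)  # 4% or EUR 20M
    return max(0.02 * global_annual_turnover, 10_000_000)      # 2% or EUR 10M

# For a company with EUR 90BN turnover, the upper tier tops out at EUR 3.6BN.
print(f"EUR {max_gdpr_fine(90_000_000_000):,.0f}")
```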

This really is a massive change. Because while data protection agencies (DPAs) in different EU Member States can already impose financial penalties for breaches of existing data laws, those fines are relatively small — especially set against the revenues of the private sector entities that are getting sanctioned.

In the UK, for example, the Information Commissioner’s Office (ICO) can currently impose a maximum fine of just £500,000. Compare that to the annual revenue of tech giant Google (~$90BN) and you can see why a much larger stick is needed to police data processors.

It’s not necessarily the case that individual EU Member States are getting stronger privacy laws as a consequence of GDPR (in some instances countries have arguably had higher standards in their domestic law). But the beefing up of enforcement that’s baked into the new regime means there’s a better opportunity for DPAs to start to bark and bite like proper watchdogs.

By inflating the financial risks around handling personal data, GDPR should naturally drive up standards — because privacy laws are suddenly a whole lot more costly to ignore.

More types of personal data that are hot to handle

So what is personal data under GDPR? It’s any information relating to an identified or identifiable person (in regulatorspeak people are known as ‘data subjects’).

'Processing', meanwhile, can mean any operation performed on personal data — from storing it to structuring it to feeding it to your AI models. (GDPR also includes some provisions specifically related to decisions generated as a result of automated data processing, but more on that below.)

A new provision concerns children’s personal data — with the regulation setting a 16-year-old age limit on kids’ ability to consent to their data being processed. However individual Member States can choose (and some have) to derogate from this by writing a lower age limit into their laws.

GDPR sets a hard floor at 13 years old (Member States that derogate cannot go any lower) — making that the de facto minimum age for children to be able to sign up to digital services. So the impact on teens' social media habits seems likely to be relatively limited.

The new rules generally expand the definition of personal data — so it can include information such as location data, online identifiers (such as IP addresses) and other metadata. So again, this means businesses really need to conduct an audit to identify all the types of personal data they hold. Ignorance is not compliance.

GDPR also encourages the use of pseudonymization — such as, for example, encrypting personal data and storing the encryption key separately and securely — as a pro-privacy, pro-security technique that can help minimize the risks of processing personal data. Although pseudonymized data is likely to still be considered personal data; certainly where a risk of reidentification remains. So it does not get a general pass from requirements under the regulation.
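As a rough illustration of the pseudonymization idea (not anything mandated by the regulation), identifiers can be replaced with keyed hashes while the key lives somewhere else entirely. Because the controller can still re-link records while it holds the key, output like this generally remains personal data under GDPR:

```python
import hashlib
import hmac

# Placeholder key: in practice stored separately and securely (vault, HSM).
SECRET_KEY = b"stored-separately-from-the-dataset"

def pseudonymize(identifier):
    """Replace a direct identifier with a stable keyed hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("alice@example.com"), "purchase": "burrito"}
print(record)  # the dataset alone no longer exposes the email address
```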

Data has to be rendered truly anonymous to be outside the scope of the regulation. (And given how often ‘anonymized’ data-sets have been shown to be re-identifiable, relying on any anonymizing process to be robust enough to have zero risk of re-identification seems, well, risky.)

To be clear, given GDPR's running emphasis on data protection via data security, it implicitly encourages the use of encryption above and beyond being a risk reduction technique — i.e. as a way for data controllers to fulfill the regulation's wider requirement to use "appropriate technical and organisational measures" against the risks of the personal data they are processing.

The incoming data protection rules apply to both data controllers (i.e. entities that determine the purpose and means of processing personal data) and data processors (entities that are responsible for processing data on behalf of a data controller — aka subcontractors).

Indeed, data processors have some direct compliance obligations under GDPR, and can also be held equally responsible for data violations, with individuals able to bring compensation claims directly against them, and DPAs able to hand them fines or other sanctions.

So the intent of the regulation is that there be no diminishing of responsibility down the chain of data handling subcontractors. GDPR aims to have every link in the processing chain be a robust one.

For companies that rely on a lot of subcontractors to handle data operations on their behalf there’s clearly a lot of risk assessment work to be done.

As noted above, there is a degree of leeway for EU Member States in how they implement some parts of the regulation (such as with the age of data consent for kids).

Consumer protection groups are calling for the UK government to add an optional GDPR provision on collective data redress to its DP bill, for example — a call the government has so far rebuffed.

But the wider aim is for the regulation to harmonize data protection rules as much as possible across all Member States, to reduce the regulatory burden on digital businesses trading around the bloc.

On data redress, European privacy campaigner Max Schrems — most famous for his legal challenge to US government mass surveillance practices that resulted in a 15-year-old data transfer arrangement between the EU and US being struck down in 2015 — is currently running a crowdfunding campaign to set up a not-for-profit privacy enforcement organization to take advantage of the new rules and pursue strategic litigation on commercial privacy issues.

Schrems argues it’s simply not viable for individuals to take big tech giants to court to try to enforce their privacy rights, so he thinks there’s a gap in the regulatory landscape for an expert organization to work on EU citizens’ behalf: not just pursuing strategic litigation in the public interest but also promoting industry best practice.

The proposed data redress body — called noyb, short for ‘none of your business’ — is being made possible because GDPR allows for collective enforcement of individuals’ data rights. And that provision could be crucial in spinning up a centre of enforcement gravity around the law. Because despite DPAs’ position and role being strengthened by GDPR, these bodies will still inevitably have limited resources relative to the scope of the oversight task at hand.

Some may also lack the appetite to take on a fully fanged watchdog role. So campaigning consumer and privacy groups could certainly help pick up any slack.

Privacy by design and privacy by default

Another major change incoming via GDPR is that ‘privacy by design’ is no longer just a nice idea; privacy by design and privacy by default become firm legal requirements.

This means there’s a requirement on data controllers to minimize processing of personal data — limiting activity to only what’s necessary for a specific purpose, carrying out privacy impact assessments and maintaining up-to-date records to prove out their compliance.

Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable. (And we’ve sure seen a whole lot of those hellish things in tech.) The core idea is that consent should be an ongoing, actively managed process; not a one-off rights grab.

As the UK’s ICO tells it, consent under GDPR for processing personal data means offering individuals “genuine choice and control” (for sensitive personal data the law requires a higher standard still — of explicit consent).

There are other legal bases for processing personal data under GDPR — such as contractual necessity; or compliance with a legal obligation under EU or Member State law; or for tasks carried out in the public interest — so consent is not always necessary in order to process someone’s personal data. But there must always be an appropriate legal basis for each processing operation.
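
As a trivial illustration of that 'always have a basis' rule, a processing activity can be checked against the regulation's six legal bases (the Article 6 grounds, paraphrased below). The check itself is our own sketch, not anything GDPR mandates:

```python
# Every processing activity must declare one of the six Article 6 legal
# bases (paraphrased names); consent is only one of them.
LEGAL_BASES = {
    "consent",
    "contractual necessity",
    "legal obligation",
    "vital interests",
    "public interest task",
    "legitimate interests",
}

def has_valid_basis(processing: dict) -> bool:
    """True if the activity declares a recognized legal basis."""
    return processing.get("legal_basis") in LEGAL_BASES

assert has_valid_basis({"activity": "invoicing", "legal_basis": "contractual necessity"})
assert not has_valid_basis({"activity": "ad profiling"})   # no basis declared
```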

Transparency is another major obligation under GDPR, which expands the notion that personal data must be lawfully and fairly processed to include a third principle: that it be processed transparently. Hence the emphasis on data controllers needing to clearly communicate with data subjects — such as by informing them of the specific purpose of the data processing.

The obligation on data handlers to maintain scrupulous records of what information they hold, what they are doing with it, and how they are legally processing it, is also about being able to demonstrate compliance with GDPR’s data processing principles.

But — on the plus side for data controllers — GDPR removes the requirement to submit notifications to local DPAs about data processing activities. Instead, organizations must maintain detailed internal records — which a supervisory authority can always ask to see.
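
For a sense of what such internal records involve, here is a minimal sketch of a record of processing activities (Article 30 is the relevant provision). The field names and structure are illustrative; the regulation dictates the content of the records, not their format:

```python
# Illustrative internal register of processing activities (Article 30-style).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingRecord:
    activity: str                 # e.g. "newsletter delivery"
    purpose: str                  # the specific purpose of the processing
    legal_basis: str              # consent, contract, legal obligation, ...
    data_categories: List[str]    # e.g. ["email address", "IP address"]
    recipients: List[str]         # processors / third parties receiving data
    retention_period: str         # how long the data is kept
    security_measures: List[str] = field(default_factory=list)

register = [
    ProcessingRecord(
        activity="newsletter delivery",
        purpose="sending a weekly product newsletter",
        legal_basis="consent",
        data_categories=["email address"],
        recipients=["ExampleMail Ltd (processor)"],
        retention_period="until consent is withdrawn",
        security_measures=["TLS in transit", "encryption at rest"],
    ),
]
```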

It’s also worth noting that companies processing data across borders in the EU may face scrutiny from DPAs in different Member States if they have users there (and are processing their personal data).

Although the GDPR sets out a so-called ‘one-stop-shop’ principle — that there should be a “lead” DPA to co-ordinate supervision between any “concerned” DPAs — this does not mean that, once it applies, a cross-EU-border operator like Facebook is only going to be answerable to the concerns of the Irish DPA.

Indeed, Facebook’s tactic of only claiming to be under the jurisdiction of a single EU DPA looks to be on borrowed time. And the one-stop-shop provision in the GDPR seems more about creating a co-operation mechanism to allow multiple DPAs to work together in instances where they have joint concerns, rather than offering a way for multinationals to go ‘forum shopping’ — which the regulation does not permit (per WP29 guidance).

Another change: Privacy policies that contain vague phrases like ‘We may use your personal data to develop new services’ or ‘We may use your personal data for research purposes’ will not pass muster under the new regime. So a wholesale rewriting of vague and/or confusingly worded T&Cs is something Europeans can look forward to this year.

Add to that, any changes to privacy policies must be clearly communicated to the user on an ongoing basis. Which means no more stale references in the privacy statement telling users to ‘regularly check for changes or updates’ — that just won’t be workable.

The onus is firmly on the data controller to keep the data subject fully informed of what is being done with their information. (Which almost implies that good data protection practice could end up tasting a bit like spam, from a user PoV.)

The overall intent behind GDPR is to inculcate an industry-wide shift in perspective regarding who ‘owns’ user data — disabusing companies of the notion that other people’s personal information belongs to them just because it happens to be sitting on their servers.

“Organizations should acknowledge they don’t exist to process personal data but they process personal data to do business,” is how Gartner research director Bart Willemsen sums this up. “Where there is a reason to process the data, there is no problem. Where the reason ends, the processing should, too.”

The data protection officer (DPO) role that GDPR brings in as a requirement for many data handlers is intended to help them ensure compliance.

This officer, who must report to the highest level of management, is intended to operate independently within the organization, with warnings to avoid an internal appointment that could generate a conflict of interests.

Which types of organizations face the greatest liability risks under GDPR? “Those who deliberately seem to think privacy protection rights is inferior to business interest,” says Willemsen, adding: “A recent example would be Uber, regulated by the FTC and sanctioned to undergo 20 years of auditing. That may hurt perhaps similar, or even more, than a one-time financial sanction.”

“Eventually, the GDPR is like a speed limit: There not to make money off of those who speed, but to prevent people from speeding excessively as that prevents (privacy) accidents from happening,” he adds.

Another right to be forgotten

Under GDPR, people whose personal data is being processed also have a suite of associated rights, including:

- the right to access data held about them (a copy of the data must be provided to them free of charge, typically within a month of a request);
- the right to request rectification of incomplete or inaccurate personal data;
- the right to have their data deleted (another so-called ‘right to be forgotten’ — with some exemptions, such as for exercising freedom of expression and freedom of information);
- the right to restrict processing;
- the right to data portability (where relevant, a data subject’s personal data must be provided free of charge and in a structured, commonly used and machine-readable form).

All these rights make it essential for organizations that process personal data to have systems in place which enable them to identify, access, edit and delete individual user data — and to be able to perform these operations quickly, with a general 30-day time limit for responding to individual rights requests.
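
What might that plumbing look like? A hypothetical sketch of the basic operations follows. It assumes, generously, that all of a user's data is reachable from a single keyed store; in real systems the hard part is the data scattered across backups, logs and processors:

```python
# Hypothetical data subject rights pipeline: locate, export, rectify, delete.
import json
from datetime import datetime, timedelta

user_store = {}   # user_id -> dict of personal data (stand-in for a real DB)

def export_user_data(user_id: str) -> str:
    """Right of access / portability: a structured, machine-readable copy."""
    return json.dumps(user_store.get(user_id, {}), indent=2)

def rectify_user_data(user_id: str, field_name: str, value) -> None:
    """Right to rectification of incomplete or inaccurate data."""
    user_store.setdefault(user_id, {})[field_name] = value

def erase_user_data(user_id: str) -> None:
    """Right to erasure (subject to the regulation's exemptions)."""
    user_store.pop(user_id, None)

def response_deadline(received: datetime) -> datetime:
    """Respond within a month; 30 days is the usual working rule."""
    return received + timedelta(days=30)
```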

GDPR also gives people who have consented to their data being processed the right to withdraw consent at any time. Let that one sink in.

Data controllers are also required to inform users about this right — and offer easy ways for them to withdraw consent. So no, you can’t bury a ‘revoke consent’ option in tiny lettering, five sub-menus deep. Nor can WhatsApp offer any more time-limited opt-outs for sharing user data with its parent multinational, Facebook. Users will have the right to change their mind whenever they like.
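
One way to honor that is to model consent as a dated, per-purpose, revocable record rather than a boolean set once at signup. A minimal illustrative sketch, with a schema of our own invention:

```python
# Consent as a revocable, per-purpose record, not a one-off flag.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                          # one record per specific purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawing must be as easy as granting.
        self.withdrawn_at = datetime.utcnow()

consent = ConsentRecord("u123", "share data with parent company", datetime.utcnow())
consent.withdraw()
assert not consent.is_active()
```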

The EU lawmakers’ hope is that this suite of rights for consenting consumers will encourage respectful use of their data — given that, well, if you annoy consumers they can just tell you to sling yer hook and ask for a copy of their data to plug into a rival service to boot. So we’re back to that fostering-trust idea.

Add in the ability for third party organizations to use GDPR’s provision for collective enforcement of individual data rights and there’s potential for bad actors and bad practice to become the target for some creative PR stunts that harness the power of collective action — like, say, a sudden flood of requests for a company to delete user data.

Data rights and privacy issues are certainly going to be in the news a whole lot more.

Getting serious about data breaches

But wait, there’s more! Another major change under GDPR relates to security incidents — aka data breaches (something else we’ve seen an awful, awful lot of in recent years) — with the regulation doing what the US still hasn’t been able to: Bringing in a universal standard for data breach disclosures.

GDPR requires that data controllers report any security incidents where personal data has been lost, stolen or otherwise accessed by unauthorized third parties to their DPA within 72 hours of becoming aware of it. Yes, 72 hours. Not the best part of a year, like, er, Uber.

If a data breach is likely to result in a “high risk of adversely affecting individuals’ rights and freedoms”, the regulation also requires that the affected individuals themselves be informed — “without undue delay”.

Only in instances where a data controller assesses that a breach is unlikely to result in a risk to the rights and freedoms of “natural persons” are they exempt from the breach disclosure requirement (though they still need to document the incident internally, and record their reason for not informing a DPA in a document that DPAs can always ask to see).
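
In code terms, the workflow amounts to: document every incident, compute the 72-hour clock from the moment of awareness, and record the rationale if you decide not to notify. A simple illustrative sketch, with field names of our own invention:

```python
# Illustrative breach log: every incident is documented, the DPA deadline
# runs from awareness, and a non-notification decision needs a recorded reason.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class BreachIncident:
    description: str
    became_aware_at: datetime
    risk_to_individuals: bool                    # outcome of the risk assessment
    no_notify_rationale: Optional[str] = None    # required if not reporting

    @property
    def dpa_deadline(self) -> datetime:
        """72 hours from awareness, where notification is required."""
        return self.became_aware_at + timedelta(hours=72)

incident = BreachIncident(
    description="laptop with unencrypted customer CSV lost",
    became_aware_at=datetime(2018, 5, 25, 9, 0),
    risk_to_individuals=True,
)
if incident.risk_to_individuals:
    print("Notify DPA by", incident.dpa_deadline)    # 2018-05-28 09:00
else:
    incident.no_notify_rationale = "data fully encrypted; key not exposed"
```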

“You should ensure you have robust breach detection, investigation and internal reporting procedures in place,” is the ICO’s guidance on this. “This will facilitate decision-making about whether or not you need to notify the relevant supervisory authority and the affected individuals.”

The new rules generally put strong emphasis on data security, and on the need for data controllers to process personal data only in ways that keep it safeguarded.

Here again, GDPR’s requirements are backed up by the risk of supersized fines. Suddenly, sloppy security could cost your business big — not only in reputational terms, as now, but on the bottom line too. It really must be a C-suite concern going forward.

Nor is subcontracting a way to shirk your data security obligations. Quite the opposite. Having a written contract in place between a data controller and a data processor was already a requirement before GDPR, but the contract requirements are wider now, and there are some specific terms that must, at a minimum, be included.

Breach reporting requirements must also be set out in the contract between processor and controller. If a data controller is using a data processor and it’s the processor that suffers a breach, the processor is required to inform the controller as soon as it becomes aware. The controller then has the usual disclosure obligations.

Essentially, data controllers remain liable for their own compliance with GDPR. And the ICO warns they must only appoint processors who can provide “sufficient guarantees” that the regulatory requirements will be met and the rights of data subjects protected.

tl;dr: be careful who you subcontract to, and how.

Right to human review for some AI decisions

Article 22 of GDPR places certain restrictions on entirely automated decisions based on profiling individuals — but only in instances where these human-less acts have a legal or similarly significant effect on the people involved.

There are also some exemptions to the restrictions — where automated processing is necessary for entering into (or performance of) a contract between an organization and the individual; or where it’s authorized by law (e.g. for the purposes of detecting fraud or tax evasion); or where an individual has explicitly consented to the processing.

In its guidance, the ICO specifies that the restriction only applies where the decision has a “serious negative impact on an individual”.

Suggested examples of the types of AI-only decisions that will face restrictions are the automatic refusal of an online credit application, or e-recruiting practices without human intervention.

The provision on automated decisions is not a new right; it has been brought over from the 1995 data protection directive. But it has attracted fresh attention — given the rampant rise of machine learning technology — as a potential route for GDPR to place a check on the power of AI blackboxes to determine the trajectory of humankind.

The real-world impact will probably be rather more prosaic, though. And experts suggest it does not seem likely that the regulation, as drafted, equates to a right for people to be given detailed explanations of how algorithms work.

Though as AI proliferates and touches more and more decisions, and as its impacts on people and society become ever more evident, pressure may well grow for proper regulatory oversight of algorithmic blackboxes.

In the meantime, what GDPR does in instances where restrictions apply to automated decisions is require data controllers to provide individuals with some information about the logic of an automated decision.

They are also obliged to take steps to prevent errors, bias and discrimination. So there’s a whiff of algorithmic accountability. Though it may well take court and regulatory judgements to determine how stiff those steps need to be in practice.

Individuals do also have a right to challenge and request a (human) review of an automated decision in the restricted class.

Here again the intention is to help people understand how their data is being used. And to offer a degree of protection (in the form of a manual review) if a person feels unfairly and harmfully judged by an AI process.
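
As a rough illustration of what surfacing the 'logic' of a decision, plus a human-review hook, might look like in practice, here is a sketch. The structure is entirely our own and not something the regulation prescribes:

```python
# An automated decision that carries the information GDPR expects: the
# outcome, a plain-language summary of the logic, and a review route.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                      # e.g. "credit application refused"
    logic_summary: str                # meaningful information about the logic
    main_factors: List[str] = field(default_factory=list)
    human_review_requested: bool = False

    def request_human_review(self) -> None:
        # Article 22-style safeguard: route the case to a human reviewer.
        self.human_review_requested = True

decision = AutomatedDecision(
    subject_id="u123",
    outcome="credit application refused",
    logic_summary="score below threshold, based on repayment history and income",
    main_factors=["two missed repayments in 12 months", "income below threshold"],
)
decision.request_human_review()
assert decision.human_review_requested
```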

The regulation also places some restrictions on the practice of using data to profile individuals if the data itself is sensitive data — e.g. health data, political beliefs, religious affiliation etc — requiring explicit consent for doing so; or else that the processing is necessary for reasons of substantial public interest, on the basis of EU or Member State law.

While profiling based on other types of personal data does not require obtaining consent from the individuals concerned, it still needs a legal basis and there is still a transparency requirement — which means service providers will need to inform users they are being profiled, and explain what it means for them.

And people also always have the right to object to profiling activity based on their personal data.


Source: https://techcrunch.com/2018/01/20/wtf-is-gdpr/

Google introduces an ad blocker to Chrome – Filtering – Censorship?


Google will introduce an ad blocker to Chrome early next year and is telling publishers to get ready.

The warning is meant to let websites assess their ads and strip any particularly disruptive ones from their pages. That’s because Chrome’s ad blocker won’t block all ads from the web. Instead, it’ll only block ads on pages that are determined to have too many annoying or intrusive advertisements, like videos that autoplay with sound or interstitials that take up the entire screen.

Sridhar Ramaswamy, the executive in charge of Google’s ads, writes in a blog post that even ads “owned or served by Google” will be blocked on pages that don’t meet Chrome’s guidelines.

Instead of an ad “blocker,” Google is referring to the feature as an ad “filter,” according to The Wall Street Journal, since it will still allow ads to be displayed on pages that meet the right requirements. The blocker will work on both desktop and mobile.

Google is providing a tool that publishers can run to find out if their sites’ ads are in violation and will be blocked in Chrome. Unacceptable ads are being determined by a group called the Coalition for Better Ads, which includes Google, Facebook, News Corp, and The Washington Post as members.

[Image: Google shows publishers which of their ads are considered disruptive.]

The feature is certain to be controversial. On one hand, there are huge benefits for both consumers and publishers. But on the other, it gives Google immense power over what the web looks like, partly in the name of protecting its own revenue.

First, the benefits: bad ads slow down the web, make the web hard and annoying to browse, and have ultimately driven consumers to install ad blockers that remove all advertisements no matter what. A world where that continues and most users block all ads looks almost apocalyptic for publishers, since nearly all of your favorite websites rely on ads to stay afloat. (The Verge, as you have likely noticed, included.)

By implementing a limited blocking tool, Google can limit the spread of wholesale ad blocking, which ultimately benefits everyone. Users get a better web experience. And publishers get to continue using the ad model that’s served the web well for decades — though they may lose some valuable ad units in the process.

There’s also a good argument to be made that stripping out irritating ads is no different than blocking pop-ups, which web browsers have done for years as a way to improve the experience for consumers.

But there are drawbacks to building an ad blocker into Chrome: most notably, the amount of power it gives Google. Ultimately, it means Google gets to decide what qualifies as an acceptable ad (though it’s basing this on standards set collectively by the Coalition for Better Ads). That’s a good thing if you trust Google to remain benign and act in everyone’s interests. But keep in mind that Google is, at its core, an ad company. Nearly 89 percent of its revenue comes from displaying ads.

The Chrome ad blocker doesn’t just help publishers, it also helps Google maintain its dominance. And it advantages Google’s own ad units, which, it’s safe to say, will not be in violation of the bad ad rules.

This leaves publishers with fewer options to monetize their sites. And given that Chrome represents more than half of all web browsing on desktop and mobile, publishers will be hard pressed not to comply.

Google will also include an option for visitors to pay websites that they’re blocking ads on, through a program it’s calling Funding Choices. Publishers will have to enable support for this feature individually. But Google already tested a similar feature for more than two years, and it never really caught on. So it’s hard to imagine publishers seeing what’s essentially a voluntary tipping model as a viable alternative to ads.

Ramaswamy says that the goal of Chrome’s ad blocker is to make online ads better. “We believe these changes will ensure all content creators, big and small, can continue to have a sustainable way to fund their work with online advertising,” he writes.

And what Ramaswamy says is probably true: Chrome’s ad blocker likely will clean up the web and result in a better browsing experience. It just does that by giving a single advertising juggernaut a whole lot of say over what’s good and bad.

Source: https://www.theverge.com/2017/6/1/15726778/chrome-ad-blocker-early-2018-announced-google