
Two-factor authentication explained: How to choose the right level of security for every account

If you aren’t already protecting your most personal accounts with two-factor or two-step authentication, you should be. An extra line of defense that’s tougher than the strongest password, 2FA is extremely important for blocking hacks and attacks on your personal data. If you don’t quite understand what it is, we’ve broken it all down for you.

Two-factor authentication: What it is

Two-factor authentication is basically a combination of two of the following factors:

  1. Something you know
  2. Something you have
  3. Something you are

Something you know is your password, so 2FA always starts there. Rather than let you into your account once your password is entered, however, two-factor authentication requires a second set of credentials, like when the DMV wants your license and a utility bill. So that’s where factors 2 and 3 come into play. Something you have is your phone or another device, while something you are is your face, irises, or fingerprint. If you can’t provide authentication beyond the password alone, you won’t be allowed into the service you’re trying to log into.

There are several options for the second factor: SMS, authenticator apps, Bluetooth-, USB-, and NFC-based security keys, and biometrics. Let’s take a look at each so you can decide which is best for you.

Two-factor authentication: SMS

When you choose SMS-based 2FA, all you need is a mobile phone number. (Michael Simon/IDG)

What it is: The most common “something you have” second authentication method is SMS. A service will send a text to your phone with a numerical code, which then needs to be typed into the field provided. If the codes match, your identification is verified and access is granted.

How to set it up: Nearly every two-factor authentication system uses SMS by default, so there isn’t much to do beyond flipping the toggle to turn on 2FA for the chosen account. Depending on the app or service, you’ll find it somewhere in settings, usually under a Security tab if one exists. Once activated, you’ll need to enter your password and a mobile phone number.

How it works: When you turn on SMS-based authentication, you’ll receive a code via text that you’ll need to enter after you type your password. That protects you against someone randomly logging into your account from somewhere else, since your password alone is useless without the code. While some apps and services rely solely on SMS-based 2FA, many of them offer numerous options, even if SMS is selected by default.
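On the service’s side, the mechanics are straightforward: generate a short random code, text it to the number on file, and accept it only briefly. Here’s a minimal, hypothetical Python sketch of that flow (the six-digit format, five-minute window, and helper names are illustrative conventions, not any particular provider’s implementation):

```python
# Hypothetical server-side sketch of SMS 2FA: issue a short-lived
# random code, then verify the user's submission in constant time.
import hmac
import secrets
import time

def issue_code() -> tuple[str, float]:
    code = f"{secrets.randbelow(10**6):06d}"  # e.g. "042917"
    expires_at = time.time() + 300            # valid for five minutes
    # ...here the service would hand `code` to an SMS gateway...
    return code, expires_at

def verify(expected: str, expires_at: float, submitted: str) -> bool:
    # compare_digest avoids leaking information through timing
    return time.time() < expires_at and hmac.compare_digest(expected, submitted)

code, expires_at = issue_code()
print(verify(code, expires_at, code))      # True: correct code, in time
print(verify(code, expires_at, "000000"))  # False: wrong code
```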

With SMS-based authentication, you’ll get a code via text that will allow access to your account. (IDG)

How secure it is: Of the methods covered here, SMS authentication is the least secure. Your phone can be cloned or just plain stolen, SMS messages can be intercepted, and by nature most default messaging apps aren’t encrypted. So the code that’s sent to you could fall into hands other than yours. It’s unlikely to be an issue unless you’re a valuable target, however.

How convenient it is: Very. You’re likely to always have your phone within reach, so the second authentication is super convenient, especially if the account you’re signing into is on your phone.

Should you use it? Any two-factor authentication is better than none, but if you’re serious about security, SMS won’t cut it.

Two-factor authentication: Authenticator apps

Authenticator apps generate random codes that aren’t delivered over SMS.

What it is: Like SMS-based two-factor authentication, authenticator apps generate codes that need to be entered when prompted. However, rather than being sent over unencrypted SMS, the codes are generated within an app, and you don’t even need an Internet connection to get one.

How to set it up: To get started with an authenticator app, you’ll need to download one from the Play Store or the App Store. Google Authenticator works great for your Google account and anything you use it to log into, but there are other great ones as well, including Authy, LastPass Authenticator, and Microsoft Authenticator, plus apps from individual companies such as Blizzard, Sophos, and Salesforce. If an app or service supports authenticator apps, it’ll supply a QR code that you can scan with your phone (or a setup code you can enter manually).

How it works: When you open your chosen authenticator app and scan the code, a six-digit code will appear, just like with SMS 2FA. Enter that code where the service prompts you and you’re good to go. After the initial setup, you’ll be able to open the app to get a fresh code whenever you need one, no QR scan required.
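The reason this works offline is that the codes aren’t delivered at all: the QR code contains a shared secret, and both your app and the service derive the same six digits from that secret plus the current 30-second time window, per the TOTP standard (RFC 6238). Here’s a minimal Python sketch, using a made-up example secret:

```python
# Minimal TOTP sketch (RFC 6238): derive the current six-digit code
# from a base32 shared secret and the clock, with no network needed.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Example secret for illustration only; real secrets come from the QR code.
print(totp("JBSWY3DPEHPK3PXP"))  # prints a fresh code every 30 seconds
```

Because the service computes the same function on its copy of the secret, it can check your code without anything being transmitted at login time.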

Authenticator apps generate fresh codes every 30 seconds and can be used offline. (IDG)

How secure it is: Unless someone has access to your phone or whatever device is running your authenticator app, it’s very secure. Since codes are generated on the device and aren’t delivered over SMS, there’s no way for prying eyes to intercept them in transit. For extra security, Authy lets you add PIN and password protection, too, something Google doesn’t offer on its authenticator app.

How convenient it is: While opening an app is slightly less convenient than receiving a text message, authenticator apps don’t take more than a few seconds to use. They’re far more secure than SMS, and you can use them offline if you ever run into an issue where you need a code but have no connection.

Should you use it? An authenticator app strikes the sweet spot between security and convenience. While you might find some services that don’t support authenticator apps, the vast majority do.

Two-factor authentication: Universal second factor (security key)

As their name implies, security keys are the most secure way to lock down your account. (Michael Simon/IDG)

What it is: Unlike SMS- and authenticator-based 2FA, universal second factor is truly a “something you have” method of protecting your accounts. Instead of a digital code, the second factor is a hardware-based security key. You’ll need to order a physical key to use it, which will connect to your phone or PC via USB, NFC, or Bluetooth.

You can buy a Titan Security Key bundle from Google for $50, which includes a USB-A security key and a Bluetooth security key along with a USB-A-to-USB-C adapter, or buy one from Yubico. An NFC-enabled key is recommended if you’re going to be using it with a phone.

How to set it up: Setting up a security key is basically the same as the other methods, except you’ll need a computer. You’ll need to turn on two-factor authentication and then select the “security key” option, if it’s available. Most popular services, such as Twitter, Facebook, and Google, support security keys, so your most vulnerable accounts should be all set. However, while Chrome, Firefox, and Microsoft’s Edge browser all support security keys, Apple’s Safari browser does not, so you’ll be prompted to switch during setup.

Once you reach the security settings page for the service you’re enabling 2FA with, select security key, and follow the prompts. You’ll be asked to insert your key (so make sure you have a USB-C adapter on hand if you have a MacBook) and press the button on it. That will initiate the connection with your computer, pair your key, and in a few seconds your account will be ready to go.

How it works: When an account requests 2FA verification, you’ll need to plug your security key into your phone or PC’s USB-C port or (if supported) tap it to the back of your NFC-enabled phone. Then it’s only a matter of pressing the button on the key to establish the connection and you’re in.
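Under the hood, that button press completes a public-key challenge/response: the key signs a fresh challenge from the service, so no code or shared secret ever crosses the network. The toy Python sketch below (using the third-party cryptography package) illustrates the principle only; real keys speak the FIDO/WebAuthn protocols, which add counters, origin binding, and attestation:

```python
# Toy illustration of the challenge/response idea behind security keys.
# Not the real FIDO/WebAuthn wire protocol, just the core principle.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the key generates a key pair; the service stores only
# the public half, so there is no secret for a server breach to leak.
device_private = ec.generate_private_key(ec.SECP256R1())
service_public = device_private.public_key()

# Login: the service sends a fresh random challenge...
challenge = os.urandom(32)

# ...the key signs it (this is the moment you press the button)...
signature = device_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature against the stored public key.
try:
    service_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login approved")
except InvalidSignature:
    print("login rejected")
```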

Setting up your security key with your Google account is a multi-step process. (IDG)

How secure it is: Extremely. Since all of the login authentication is stored on a physical key that is either on your person or stored somewhere safe, the odds of someone accessing your account are extremely low. They would need to steal both your password and the key itself, which is very unlikely.

How convenient it is: Not very. When you log into one of your accounts on a new device, you’ll need to type your password and then authenticate it via the hardware key, either by inserting it into your PC’s USB port or pressing it against the back of an NFC-enabled phone. Neither method takes more than a few seconds, though, provided you have your security key within reach.

Two-factor authentication: Google Advanced Protection Program

What it is: If you want to completely lock down your most important data, Google offers the Advanced Protection Program for your Google account, which disables everything except security key-based 2FA. It also limits access to your emails and Drive files to Google apps and select third-party apps, and shuts down web access from browsers other than Chrome and Firefox.

How to set it up: You’ll need to make a serious commitment. To enroll in Google Advanced Protection, you’ll need to purchase two security keys: one as your main key and one as your backup key. Google sells its own Titan Security Key bundle, but you can also buy a set from Yubico or Feitian.

Once you get your keys, you’ll need to register them with your Google account and then agree to turn off all other forms of authentication. But here’s the rub: To ensure that every one of your devices is properly protected, Google will log you out of every account on every device you own so you can log in again using Advanced Protection.

How it works: Advanced Protection works just like standard security-key 2FA, except you won’t be able to choose a different method if you forget or lose your security key.

How secure it is: Google Advanced Protection is basically impenetrable. By relying solely on security keys, it ensures that no one can access your account without both your password and your physical key, a combination an attacker is extremely unlikely to obtain.

How convenient it is: By nature, Google Advanced Protection is supposed to make it difficult for hackers to access your Google account and anything associated with it, so naturally it’s not so easy for the user either. Since there’s no fallback authentication method, you’ll need to remember your key whenever you leave the house. And when you run into a roadblock—like the Safari browser on a Mac—you’re pretty much out of luck. But if you want your account to have the best possible protection, accept no substitute.

Two-factor authentication: Biometrics

Nearly every smartphone made today has some form of secure biometrics built into it. (Christopher Hebert/IDG)

What it is: A password-free world where all apps and services are authenticated by a fingerprint or facial scan.

How to set it up: You can see biometrics at work when you opt to use the fingerprint scanner on your phone or Face ID on the iPhone XS, but at the moment, biometric security is little more than a replacement for your password after you log in and verify via another 2FA method.

How it works: Like the way you use your fingerprint or face to unlock your smartphone, biometric 2FA uses your body’s unique characteristics as your password. So your Google account would know it was you based on your scan when you set up your account, and it would automatically allow access when it recognized you.

How secure it is: Since it’s extremely difficult to clone your fingerprint or face, biometric authentication is the closest thing to a digital vault.

How convenient it is: You can’t go anywhere without your fingerprint or your face, so it doesn’t get more convenient than that.

Two-factor authentication: iCloud

Apple sends a code to one of your trusted devices when it needs authentication to access an account. (Michael Simon/IDG)

What it is: Apple has its own method of two-factor authentication for your iCloud and iTunes accounts that involves setting up trusted Apple devices (iPhone, iPad, or Mac—Apple Watch isn’t supported) that can receive verification codes. You can also set up trusted numbers to receive SMS codes or get verification codes via an authenticator built into the Settings app.

How to set it up: As long as you’re logged into your iCloud account, you can turn on two-factor authentication from pretty much anywhere. Just go into Settings on your iOS device or System Preferences on your Mac, then Security, and Turn On Two-Factor Authentication. From there, you can follow the prompts to set up your trusted phone number and devices.

How it works: When you need to access an account protected by 2FA, Apple will send a code to one of your trusted devices. If you don’t have a second Apple device, Apple will send you a code via SMS, or you can get one from the Settings app on your iPhone or System Preferences on your Mac.

When Apple needs a code to log into an account, it sends it to one of your trusted devices. (IDG)

How secure it is: It depends on how many Apple devices you own. If you own more than one Apple device, it’s very secure. Apple will send a code to one of your other devices whenever you or someone else tries to log into your account or one of Apple’s services on a new device. It even tells you the location of the request, so if you don’t recognize it you can instantly reject it, before the code even appears.

If you only have one device, you’ll have to use SMS or Apple’s built-in authenticator, neither of which is all that secure, especially since both steps are likely to happen on the same device. Also, Apple has a weird snafu that sends the 2FA access code to the same device you’re using when you manage your account in a browser, which also defeats the purpose of 2FA.

How convenient it is: If you’re using an iPhone and have an iPad or Mac nearby, the process takes seconds, but if you don’t have an Apple device within reach or are away from your keyboard, it can be tedious.

Source: https://www.pcworld.com/article/3387420/two-factor-authentication-faq-sms-authenticator-security-key-icloud.html


SS7 contains back doors and is as full of holes as Swiss cheese

The outages hit in the summer of 1991. Over several days, phone lines in major metropolises went dead without warning, disrupting emergency services and even air traffic control, often for hours. Phones went down one day in Los Angeles, then on another day in Washington, DC and Baltimore, and then in Pittsburgh. Even after service was restored to an area, there was no guarantee the lines would not fail again—and sometimes they did. The outages left millions of Americans disconnected.

The culprit? A computer glitch. A coding mistake in software used to route calls for a piece of telecom infrastructure known as Signaling System No. 7 (SS7) caused network-crippling overloads. It was an early sign of the fragility of the digital architecture that binds together the nation’s phone systems.

Leaders on Capitol Hill called on the one agency with the authority to help: the Federal Communications Commission (FCC). The FCC made changes, including new outage reporting requirements for phone carriers. To help the agency respond to digital network stability concerns, the FCC also launched an outside advisory group—then known as the Network Reliability Council but now called the Communications Security, Reliability, and Interoperability Council (CSRIC, pronounced “scissor-ick”).

Yet decades later, SS7 and other components of the nation’s digital backbone remain flawed, leaving calls and texts vulnerable to interception and disruption. Instead of facing the challenges of our hyper-connected age, the FCC is stumbling, according to documents obtained by the Project On Government Oversight (POGO) and through extensive interviews with current and former agency employees. The agency is hampered by a lack of leadership on cybersecurity issues and a dearth of in-house technical expertise that all too often leaves it relying on security advice from the very companies it is supposed to oversee.

Captured

CSRIC is a prime example of this so-called “agency capture”—the group was set up to help supplement FCC expertise and craft meaningful rules for emerging technologies. But instead, the FCC’s reliance on security advice from industry representatives creates an inherent conflict of interest. The result is weakened regulation and enforcement that ultimately puts all Americans at risk, according to former agency staff.

While the agency took steps to improve its oversight of digital security issues under the Obama administration, many of these reforms have been walked back under current Chairman Ajit Pai. Pai, a former Verizon lawyer, has consistently signaled that he doesn’t want his agency to play a significant role in the digital security of Americans’ communications—despite security being a core agency responsibility since the FCC’s inception in 1934.

The FCC’s founding statute charges it with crafting regulations that promote the “safety of life and property through the use of wire and radio communications,” giving it broad authority to secure communications. Former FCC Chairman Tom Wheeler and many legal experts argue that this includes cyber threats.

As a regulator, the FCC carries a stick: it can hit communications companies with fines if they don’t comply with its rules. That responsibility is even more important now that “smart” devices are networking almost every aspect of our lives.

But not everyone thinks the agency’s mandate is quite so clear, especially in the telecom industry. Telecom companies fight back hard against regulation; over the last decade, they spent nearly a billion dollars lobbying Congress and federal agencies, according to data from OpenSecrets. The industry argues that the FCC’s broad mandate to secure communications doesn’t extend to cybersecurity, and it has pushed for oversight of cybersecurity to come instead from other parts of government, typically the Department of Homeland Security (DHS) or the Federal Trade Commission (FTC)—neither of which is vested with the same level of rule-making powers as the FCC.

To Wheeler, himself the former head of industry trade group CTIA, the push toward DHS seemed like an obvious ploy. “The people and companies the FCC was charged with regulating wanted to see if they could get their jurisdiction moved to someone with less regulatory authority,” he told POGO.

But Chairman Pai seems to agree with industry. In a November 2018 letter to Senator Ron Wyden (D-Ore.) about the agency’s handling of SS7 problems, provided to POGO by the senator’s office, Pai wrote that the FCC “plays a supporting role, as a partner with DHS, in identifying vulnerabilities and working with stakeholders to increase security and resiliency in communications network infrastructure.”

The FCC declined to comment for this story.

The current FCC declined comment, but POGO spoke with former chairman Tom Wheeler, seen here speaking with Ars (https://arstechnica.com/information-technology/2016/03/how-a-former-lobbyist-became-the-broadband-industrys-worst-nightmare/) back in 2016. (Photo: Jon Brodkin)

Failing to protect the “crown jewels” of telecom

How the telecom industry leveraged lawmakers’ calls for FCC reform in the wake of the SS7 outages is a case study in how corporate influence can overcome even the best of the government’s intentions.

From the beginning, industry representatives dominated membership of the advisory group now known as CSRIC—though, initially, the group only provided input on a small subset of digital communications issues. Over time, as innovations in communications raced forward with the expansion of cellular networks and the Internet, the FCC’s internal technical capabilities didn’t keep up: throughout the 1990s and early 2000s, the agency’s technical expertise was largely limited to telephone networks while the world shifted to data networks, former staffers told POGO. The few agency staffers with expertise on new technologies were siloed in different offices, making it hard to coordinate a comprehensive response to the paradigm shift in communication systems. That gap left the agency increasingly dependent on advice from CSRIC.

During the early 1990s, the SS7-based software system was just coming into wide use. Today, though, it is considered outdated and insecure. Despite that, carriers still use the technology as a backup in their networks. This leaves the people who rely on those networks vulnerable to the technology’s problems, as Jonathan Mayer, a Princeton computer science and public affairs professor and former FCC Enforcement Bureau chief technologist, explained during a Congressional hearing in June 2018.

Unlike in the 1990s, the risks now go much deeper than just service disruption. Researchers have long warned that flaws in the system allow cybercriminals or hackers—sometimes working on behalf of foreign adversaries—to turn cell phones into sophisticated geo-tracking devices or to intercept calls and text messages. Security problems with SS7 are so severe that some government agencies and some major companies like Google are moving away from using codes sent via text to help secure important accounts, such as those for email or online banking.

A panel advising President Bill Clinton raised the alarm back in 1997, saying that SS7 was among America’s networking “crown jewels” and warning that if those crown jewels were “attacked or exploited, [it] could result in a situation that threatened the security and reliability of the telecommunications infrastructure.” By 2001, security researchers argued that risks associated with SS7 were multiplying thanks to “deregulation” and “the Internet and wireless networks.” They were proved right in 2008 when other researchers demonstrated ways that hackers could use flaws in SS7 to pinpoint the location of unsuspecting cell phone users.

By 2014, it was clear that foreign governments had caught on to the disruptive promise of the problem. That year, Russian intelligence used SS7 vulnerabilities to attack a Ukrainian telecommunications company, according to a report published by NATO’s Cooperative Cyber Defence Centre of Excellence, and more research about SS7 call interception made headlines in The Washington Post and elsewhere.

Despite the increasingly dire stakes, the FCC didn’t pay much attention to the issue until the summer of 2016, after Rep. Ted Lieu (D-Calif.) allowed 60 Minutes to demonstrate how researchers could use security flaws in the SS7 protocol to spy on his phone. The FCC—then led by Wheeler—responded by essentially passing the buck to CSRIC. It created a working group to study and make security recommendations about SS7 and other so-called “legacy systems.” The result was a March 2017 report with non-binding guidance about best practices for securing against SS7 vulnerabilities, a non-public report, and the eventual creation of yet another CSRIC working group to study similar security issues.

A POGO analysis of CSRIC membership in recent years shows that the group, whose members are appointed solely by the FCC chairman, leans heavily toward industry. And the authorship of the March 2017 report was even more lopsided than CSRIC overall. Of the twenty working-group members listed in the final report, only five were from the government, including four from the Department of Homeland Security. The remaining fifteen represented private-sector interests. None were academics or consumer advocates.

The working group’s leadership was drawn entirely from industry. The group’s co-chairs came from networking infrastructure company Verisign and iconectiv, a subsidiary of Swedish telecom company Ericsson. The lead editor of the group’s final report was CTIA Vice President for Technology and Cyber Security John Marinho.

Emails from 2016 between working group members, obtained by POGO via a Freedom of Information Act request, show that the group dragged its feet on resolving SS7 security vulnerabilities despite urging from FCC officials to move quickly. The group also repeatedly ignored input from DHS technical experts.

The problem wasn’t figuring out a fix, however, according to David Simpson, a retired rear-admiral who led the FCC’s Public Safety and Homeland Security Bureau at the time. The group was quickly able to discern some best practices—primarily through using different filtering systems—that some major carriers had already deployed and that others could use to mitigate the risks associated with SS7.

“We knew the answer within the first couple months from the technical experts in the working groups,” said Simpson, who consulted with the Working Group. But ultimately, the “consensus orientation of the CSRIC unfortunately allowed” the final report to be pushed from the lame-duck session into the Trump administration—which is not generally inclined toward introducing new federal regulations.

Overall, POGO’s analysis of emails from the group and interviews with former FCC staff found that industry dominance of CSRIC appears to have contributed to a number of issues with the process and the final report, including:

  • Industry members of the working group successfully pushed for the final recommendations to rely on voluntary compliance, according to former FCC staffers. Security experts say that strategy ultimately leaves the entire cellular network at risk because there are thousands of smaller providers, often in rural areas, that are unlikely to prioritize rolling out the needed protections without a firm rule.
  • An August 2016 email shows that, early on in the process, DHS experts objected to describing the working group’s focus as being on “legacy” systems because it “conveys a message that these protocols and associated threats are going away soon and that’s not necessarily the case.” The group did not revise the legacy language, and it remained in the final report.
  • In an email from September 2016, an FCC official emailed Marinho, noting that edits from DHS were not being incorporated into the working draft. Marinho responded that he received them too late and planned to incorporate them in a later version. However, in a May 2018 letter to the FCC, Senator Wyden said DHS officials told his office that “the vast majority of edits to the final report” suggested by DHS experts “were rejected.”

In the emails obtained by POGO, Marinho also refers to warnings about security issues with SS7 that came from panelists at an event organized by George Washington University’s Cybersecurity Strategy and Information Management Program as “hyperbolic.”

Marinho did not respond to a series of specific questions about the Working Group’s activities. In a statement to POGO, CTIA said, “[t]he wireless industry is committed to safeguarding consumer security and privacy and collaborates closely with DHS, the FCC, and other stakeholders to combat evolving threats that could impact communications networks.”

The working group’s report acknowledged that problems remained with SS7, but it recommended voluntary measures and put the onus on telecom users to take extra steps like using apps that encrypt their phone calls and texts.

Criminals, terrorists, and spies

Just a month after the CSRIC working group released its SS7 report, DHS took a much more ominous tone, releasing a report that warned that SS7 “vulnerabilities can be exploited by criminals, terrorists, and nation-state actors/foreign intelligence organizations” and said that “many organizations appear to be sharing or selling expertise and services that could be used to spy on Americans.”

DHS wanted action.

“New laws and authorities may be needed to enable the Government to independently assess the national security and other risks associated with SS7” and other communications protocols, the agency wrote.

But DHS also admitted it wasn’t necessarily the agency that would take the lead. A footnote in that section reads: “Federal agencies such as the FCC and FTC may have authorities over some of these issues.”

CTIA pushed back with a confidential May 2017 white paper that downplayed the risks associated with SS7 and argued against stronger security rules. The paper, which was sent to DHS and to members of Congress, was later obtained and published by Motherboard.

“Congress and the Administration should reject the [DHS] Report’s call for greater regulation,” the trade group wrote.

When CSRIC created yet another working group in late 2017 to continue studying network reliability issues during Pai’s tenure, the DHS experts who objected to the previous working group’s report were “not invited back to participate,” according to Wyden’s May 2018 letter. The final report from that working group lists just one representative from DHS, compared to four in the previous group.

When reached for comment, DHS did not directly address questions about the agency’s experience with CSRIC.

Aside from DHS and individual members of Congress, other parts of the US government have signaled concerns about SS7. For example, as the initial CSRIC working group was starting to review the issue in the summer of 2016, the National Institute of Standards and Technology (NIST), an agency that sets standards for government best practices, released draft guidance echoing that of Google and other tech companies. It warned people away from relying on text messaging to validate identity for various online accounts and services because of the security issues.

But the draft drew pushback from the telecom industry, including CTIA.

“There is insufficient evidence at this time to support removing [text message] authentication in future versions of the Digital Identity Guidelines,” CTIA argued in comments on the NIST draft. After the pushback, NIST caved to industry pressure and removed its warning about relying on texts from the final version of its guidance.

While the government was deliberating, criminals were finding ways to exploit SS7 flaws.

In the summer of 2017, German researchers found that hackers used vulnerabilities in SS7 to drain victims’ bank accounts—exploiting essentially the same type of problems that NIST tried to flag in the scrapped draft guidance.

By 2018, attacks started happening in the domestic digital world, Senator Wyden wrote in his May 2018 letter to the FCC.

“This threat is not merely hypothetical—malicious attackers are already exploiting SS7 vulnerabilities,” Wyden wrote. “One of the major wireless carriers informed my office that it reported an SS7 breach, in which customer data was accessed, to law enforcement” using a portal managed by the FCC, he wrote.

The details of that incident remain unclear, presumably due to an ongoing investigation. However, the senator’s alarm highlights the fact that SS7 continues to put America’s economic and national security at risk.

Indeed, a report by The New York Times in October 2018 suggests that even the president’s own communications are vulnerable due to security problems in cellular networks, potentially including SS7. Chinese and Russian intelligence have gained valuable information about the president’s policy deliberations by intercepting calls made on his personal iPhone “as they travel through the cell towers, cables, and switches that make up national and international cellphone networks,” the Times reported.

The president disputed this account of his phone-use habits in a tweet, apparently sent from “Twitter for iPhone.” The next month, President Trump signed a law creating a Cybersecurity and Infrastructure Security Agency (or CISA) within the Department of Homeland Security, the agency that industry often suggests should oversee communications infrastructure cybersecurity instead of the FCC.

“CISA works regularly with the FCC and the communications sector to address security vulnerabilities and enhance the resilience of the nation’s communications infrastructure,” the agency said in a statement in response to questions for this story. “Our role as the nation’s risk advisor includes working with companies to exchange threat information, mitigate vulnerabilities, and provide incident response upon request.”

Other than efforts to reduce the reliance of US networks on technology from Chinese manufacturers, driven by fears about “supply chain security,” the FCC has largely abandoned its responsibility for protecting America’s networks from looming digital threats.

As the FCC’s engagement on cybersecurity has waned, so has CSRIC’s activity. CSRIC VI, whose members were chosen by current Chairman Pai, chartered fewer than a third as many working groups as its predecessor.

CSRIC VI’s final meeting was in March 2019. It’s unclear who will be part of the group’s seventh iteration—or if they will represent the public over the telecom industry’s interests.

Source: https://arstechnica.com/features/2019/04/fully-compromised-comms-how-industry-influence-at-the-fcc-risks-our-digital-security/2/

Alexa, do you work for the NSA? ;-)

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.

The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.

Amazon has offices in this Bucharest building. (Photographer: Irina Vilcu/Bloomberg)

Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.

“We take the security and privacy of our customers’ personal information seriously,” an Amazon spokesman said in an emailed statement. “We only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.

“We have strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system. Employees do not have direct access to information that can identify the person or account as part of this workflow. All information is treated with high confidentiality and we use multi-factor authentication to restrict access, service encryption and audits of our control environment to protect it.”

Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions.

In Alexa’s privacy settings, Amazon gives users the option of disabling the use of their voice recordings for the development of new features. The company says people who opt out of that program might still have their recordings analyzed by hand over the regular course of the review process. A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa reviewers don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.

The Intercept reported earlier this year that employees of Amazon-owned Ring manually identify vehicles and people in videos captured by the company’s doorbell cameras, an effort to better train the software to do that work itself.

“You don’t necessarily think of another human listening to what you’re telling your smart speaker in the intimacy of your home,” said Florian Schaub, a professor at the University of Michigan who has researched privacy issues related to smart speakers. “I think we’ve been conditioned to the [assumption] that these machines are just doing magic machine learning. But the fact is there is still manual processing involved.”

“Whether that’s a privacy concern or not depends on how cautious Amazon and other companies are in what type of information they have manually annotated, and how they present that information to someone,” he added.

When the Echo debuted in 2014, Amazon’s cylindrical smart speaker quickly popularized the use of voice software in the home. Before long, Alphabet Inc. launched its own version, called Google Home, followed by Apple Inc.’s HomePod. Various companies also sell their own devices in China. Globally, consumers bought 78 million smart speakers last year, according to researcher Canalys. Millions more use voice software to interact with digital assistants on their smartphones.

Alexa software is designed to continuously record snatches of audio, listening for a wake word. That’s “Alexa” by default, but people can change it to “Echo” or “computer.” When the wake word is detected, the light ring at the top of the Echo turns blue, indicating the device is recording and beaming a command to Amazon servers.

An Echo smart speaker inside an Amazon 4-star store in Berkeley, California. (Photographer: Cayce Clifford/Bloomberg)

Most modern speech-recognition systems rely on neural networks patterned on the human brain. The software learns as it goes, by spotting patterns amid vast amounts of data. The algorithms powering the Echo and other smart speakers use models of probability to make educated guesses. If someone asks Alexa if there’s a Greek place nearby, the algorithms know the user is probably looking for a restaurant, not a church or community center.

But sometimes Alexa gets it wrong—especially when grappling with new slang, regional colloquialisms or languages other than English. In French, avec sa, “with his” or “with her,” can confuse the software into thinking someone is using the Alexa wake word. Hecho, Spanish for a fact or deed, is sometimes misinterpreted as Echo. And so on. That’s why Amazon recruited human helpers to fill in the gaps missed by the algorithms.

Apple’s Siri also has human helpers, who work to gauge whether the digital assistant’s interpretation of requests lines up with what the person said. The recordings they review lack personally identifiable information and are stored for six months tied to a random identifier, according to an Apple security white paper. After that, the data is stripped of its random identification information but may be stored for longer periods to improve Siri’s voice recognition.

At Google, some reviewers can access some audio snippets from its Assistant to help train and improve the product, but it’s not associated with any personally identifiable information and the audio is distorted, the company says.

A recent Amazon job posting, seeking a quality assurance manager for Alexa Data Services in Bucharest, describes the role humans play: “Every day she [Alexa] listens to thousands of people talking to her about different topics and different languages, and she needs our help to make sense of it all.” The want ad continues: “This is big data handling like you’ve never seen it. We’re creating, labeling, curating and analyzing vast quantities of speech on a daily basis.”

Amazon’s review process for speech data begins when Alexa pulls a random, small sampling of customer voice recordings and sends the audio files to the far-flung employees and contractors, according to a person familiar with the program’s design.

The Echo Spot. (Photographer: Daniel Berman/Bloomberg)

Some Alexa reviewers are tasked with transcribing users’ commands, comparing the recordings to Alexa’s automated transcript, say, or annotating the interaction between user and machine. What did the person ask? Did Alexa provide an effective response?

Others note everything the speaker picks up, including background conversations—even when children are speaking. Sometimes listeners hear users discussing private details such as names or bank details; in such cases, they’re supposed to tick a dialog box denoting “critical data.” They then move on to the next audio file.

According to Amazon’s website, no audio is stored unless Echo detects the wake word or is activated by pressing a button. But sometimes Alexa appears to begin recording without any prompt at all, and the audio files start with a blaring television or unintelligible noise. Whether or not the activation is mistaken, the reviewers are required to transcribe it. One of the people said the auditors each transcribe as many as 100 recordings a day when Alexa receives no wake command or is triggered by accident.

In homes around the world, Echo owners frequently speculate about who might be listening, according to two of the reviewers. “Do you work for the NSA?” they ask. “Alexa, is someone else listening to us?”

— With assistance by Gerrit De Vynck, Mark Gurman, and Irina Vilcu

Source: https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio

Is GDPR the new hacker scare tactic?


No one questions the good intent behind the EU’s General Data Protection Regulation (GDPR) legislation, or the need for companies to be more careful with the proprietary information they have about clients, patients, and other individuals they interact with regularly. While the provisions within the GDPR do help, they have also created new opportunities for hackers and identity thieves to exploit that data.

There’s no doubt that seeking to be fully GDPR compliant is more than just a good idea. Along the way, just make sure your organization doesn’t fall victim to one of the various scams that are surfacing. Let’s quickly review GDPR and then dive into the dirty tricks hackers have been playing.

Understanding the Basics of GDPR

In 2018, the GDPR established a set of guidelines for managing the collection and storage of consumer and proprietary data. Much of it pertains to personal information provided by individuals to an entity.

That entity may be a banking institution, insurance company, investing service, or even a health care facility. The primary goal is to ensure adequate protections are in place so that an ill-intentioned third party can’t exploit the personal information of those organizations’ employees, clients, and patients.

The GDPR addresses key areas of data security:

  • Explicit consent to collect and maintain personal data
  • Notification in the event of a data breach
  • Dedicated data security personnel within the organization
  • Data encryption that protects personal information in the event of a breach
  • Access to personal information for review of accuracy (integrity), and to set limitations on the intended use

While there has been pushback about some of the provisions within the GDPR (especially the need for additional data security personnel outside of the usual IT team), many organizations have been eager to adopt the measures. After all, being GDPR compliant can decrease the risk of a breach and would prove helpful if lawsuits resulted after a breach.

GDPR and Appropriate Security

There is an ongoing discussion about what represents adequate and appropriate security in terms of GDPR compliance. To some degree, the exact approach to security will vary, based on the type of organization involved and the nature of the data that is collected and maintained.

Even so, there is some overlap that would apply in every case. Compliance involves identifying and reinforcing every point in the network where some type of intrusion could possibly take place. Using Artificial Intelligence technology to reinforce points of vulnerability while also monitoring them for possible cyberattacks is another element. Even having an escalation plan in place to handle a major data breach within a short period of time is something any organization could enact.

One point that is sometimes lost in the entire discussion about GDPR security is that the guidelines set minimum standards. Entities are free to go above and beyond in terms of protecting proprietary data like customer lists. Viewing compliance as the starting point and continuing to refine network security will serve a company well in the long run.

So What Have Hackers Been Doing Since the Launch of GDPR?

There’s no doubt that hackers and others with less than honorable intentions have been doing their best to work around the GDPR guidelines even as they use them to their advantage. Some news reports claim that GDPR has made it easier for hackers to gain access to data. So what exactly have these ethically challenged individuals concocted?

Here are some examples:

Introducing Reverse Ransomware

As far as we know, it’s not really called reverse ransomware, but that seems to be a pretty good way to describe this evil little scheme. As a refresher, a ransomware attack is when a hacker gets into your system and encrypts data so you can’t see or use it. Only with the payment of a ransom, typically in hard-to-trace Bitcoin or other cryptocurrencies, will the hacker make your data usable again.

The sad ending to the ransomware saga is that, more often than not, the data is never released even if the ransom is paid.

But GDPR has provided the inspiration for the bad guys to put a sneaky spin on the data drama. In this case, they penetrate the network by whatever means available to collect the customer lists, etc., which the EU has worked so hard to protect with the new regulations.

The threat with this variation, however, is that the data will be released publicly, which would put the organization in immediate violation of GDPR and make it liable for what could be a hefty fine — one that is substantially larger than the ransom the criminals are demanding.

Of course, the hacker promises not to release the data if the hostage company pays a ransom and might even further promise to destroy the data afterward. If you believe they’ll actually do that, I’d like to introduce you to the Easter Bunny and Tooth Fairy.

The attacker has already demonstrated a strong amoral streak. What’s to stop them from demanding another payment a month down the road? If you guessed nothing, you’re right. But wait, there’s more.

Doing a Lot of Phishing

Many organizations have seen a continual flow of unsolicited emails offering to help them become GDPR compliant. These range from offering free consultations that can be conducted remotely to conducting online training sessions to explain GDPR and suggest ways to increase security.

Typically, this type of phishing scheme offers a way to remit payments for services in advance, with the understanding that the client pays a portion now and the rest later.

Unsurprisingly, anyone who clicks on the link may lose more than whatever payment is rendered. Wherever the individual lands, the site is likely to be infected with spyware or worse. And if the email is forwarded throughout an organization or outside of it? The infection spreads.

I believe we need to be savvier with emails. That means training employees to never click on links in unsolicited emails, and to report suspicious emails to the security team at once.

What Can You Do?

As you can see, GDPR has provided a variety of crime opportunities for an enterprising hacker. These are just two examples of how they use GDPR for profit at the expense of hardworking business owners. The best first step when confronted with any of these types of threats is to not act on it. Instead, forward it to an agency that can properly evaluate the communication.

At the risk of sounding like Captain Obvious, have you done everything possible to fortify your network against advanced threats? Here are the basic preventive steps:

  1. Web security software: The first line of defense is a firewall (updated regularly of course) that prowls the perimeter, looking to prevent any outside threat’s attempt to penetrate. In addition, be sure to implement network security software that detects malicious network activity resulting from a threat that manages to bypass your perimeter controls. It used to be that you could survive with a haphazard philosophy towards security, but those days are long gone. Get good security software and put it to work.
  2. Encrypt that data: While the firewall and security software protect a network from outside penetration attempts, your data doesn’t always stay at home safe and sound. Any time a remote worker connects back to your network or an employee on premises ventures out to the open Internet, data is at risk. That’s why a virtual private network (VPN) should be a mandatory preventive security measure.

It’s a simple but strong idea. Using military-grade protocols, a properly configured VPN service encrypts the flow of data between a network device and the Internet, or between a remote device and the company network. The big idea here is that even if a hacker manages to siphon off data, they will be greeted with an indecipherable mess that would take the world’s strongest computers working in unison a few billion years to crack. They’ll probably move on to an easier target.
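To make that concrete, here’s a toy Python sketch of the principle using authenticated symmetric encryption (the third-party cryptography package’s Fernet recipe). A real VPN such as WireGuard or IPsec negotiates keys and encrypts at the packet level, but the effect on an eavesdropper is the same:

```python
# Toy illustration of why intercepted VPN traffic is useless without
# the key. Fernet provides authenticated symmetric encryption; a real
# VPN works at the packet level, but the principle is identical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # known only to the two tunnel endpoints
tunnel = Fernet(key)

payload = b"POST /payroll HTTP/1.1 ..."       # sensitive traffic
ciphertext = tunnel.encrypt(payload)          # what an eavesdropper sees

print(ciphertext[:40])                        # indecipherable bytes
assert tunnel.decrypt(ciphertext) == payload  # endpoint recovers the data
```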

And while a VPN should be a frontline tool to combat hackers, there’s something else that might even be more important.

  3. Education and Training: Through ignorance or inattention, employees can be the biggest threat to cybersecurity. It’s not enough to simply sit them down when you hire them and warn of dire consequences if they let malware in the building. Owners need a thorough, ongoing education program related to online security that emphasizes its importance as being only slightly below breathing.

The Bottom Line

The GDPR does not have to be a stumbling block for you or an opportunity for a hacker. Stay proactive with your security measures and keep your antenna tuned for signs of trouble.

Source: https://betanews.com/2019/03/29/is-gdpr-the-new-hacker-scare-tactic/

Ad IDs Behaving Badly

The Ad ID

Persistent identifiers are the bread and butter of the online tracking industry. They allow companies to learn the websites that you visit and the apps that you use, including what you do within those apps. A persistent identifier is just a unique number that is used to either identify you or your device. Your Social Security Number and phone number are examples of persistent identifiers used in real life; cookies use persistent identifiers to identify you across websites.

On your mobile device, there are many different types of persistent identifiers that are used by app developers and third parties contacted by those apps. For example, one app might send an advertising network your device’s serial number. When a different app on your same phone sends that same advertising network your device’s serial number, that advertising network now knows that you use both of these apps, and can use that information to profile you. This sort of profiling is what is meant by “behavioral advertising.” That is, they track your behaviors so that they can infer your interests from those behaviors, and then send you ads targeted to those inferred interests.

On the web, if you don’t want to be tracked in this manner, you can periodically clear your cookies or configure your browser to simply not accept cookies (though this breaks a lot of the web, given that there are many other uses for cookies beyond tracking). Clearing your cookies resets all of the persistent identifiers, which means that new persistent identifiers will be sent to third parties, making it more difficult for them to associate your future online activities with the previous profile they had constructed.

Regarding the persistent identifiers used by mobile apps, up until a few years ago there was no way of doing the equivalent of clearing your cookies: many of the persistent identifiers used to track your mobile app activities were based in hardware, such as the device’s serial number, IMEI, WiFi MAC address, SIM card serial number, etc. Many apps used (and still use) the Android ID for tracking purposes, which, while not based in hardware, can only be reset by performing a factory reset on the device and deleting all of its data. Thus, there wasn’t an easy way for users to do the equivalent of clearing their cookies.

However, this changed in 2013 with the creation of the “ad ID”: both Android and iOS unveiled a new persistent identifier based in software that provides the user with privacy controls to reset that identifier at will (similar to clearing cookies).

Of course, being able to reset the ad identifier is only a good privacy-preserving solution if it is the only identifier being collected from the device. Imagine the following situation:

  1. An app sends both the ad ID and the IMEI (a non-resettable hardware-based identifier) to a data broker.
  2. Concerned with her privacy, the user uses one of the above privacy settings panels to reset her phone’s ad ID.
  3. Later, when using a different app, the same data broker is sent the new ad ID alongside the IMEI.
  4. The data broker sees that while the ad IDs are different between these two transmissions, the IMEI is the same, and therefore they must have come from the same device. Knowing this, the data broker can then add the second transmission to the user’s existing profile, as the sketch after this list illustrates.
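Here is a minimal, hypothetical Python sketch of that join as a broker’s backend might perform it; the app events, ad IDs, and IMEI below are invented for illustration:

```python
# Hypothetical sketch of a data broker linking records on a stable
# hardware ID: resetting the ad ID does nothing, because the profile
# is keyed by the IMEI. All values below are invented.
profiles: dict[str, list[tuple[str, str]]] = {}  # IMEI -> (ad ID, event)

def ingest(ad_id: str, imei: str, event: str) -> None:
    profiles.setdefault(imei, []).append((ad_id, event))

ingest("ad-aaaa-1111", "359099001761481", "opened weather app")
# The user resets the ad ID, hoping to start a fresh, unlinked profile...
ingest("ad-bbbb-2222", "359099001761481", "opened banking app")

# ...but both events land in the same profile, joined on the IMEI.
print(profiles["359099001761481"])
# [('ad-aaaa-1111', 'opened weather app'), ('ad-bbbb-2222', 'opened banking app')]
```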

In this case, sending a non-resettable identifier alongside the ad ID completely undermines the privacy-preserving properties of the ad ID: resetting it does not prevent tracking. For this reason, both iOS and Android have policies that prohibit developers from transmitting other identifiers alongside the ad ID. For example, in 2017, it was major news that Uber’s app had violated iOS App Store privacy guidelines by collecting non-resettable persistent identifiers. Tim Cook personally threatened to have the Uber app removed from the store. Similarly, Google’s Play Store policy says that the ad ID cannot be transmitted alongside other identifiers without users’ explicit consent, and that for advertising purposes, the ad ID is the only identifier that can be used:

Association with personally-identifiable information or other identifiers. The advertising identifier must not be connected to personally-identifiable information or associated with any persistent device identifier (for example: SSAID, MAC address, IMEI, etc.) without explicit consent of the user.

Abiding by the terms of use. The advertising identifier may only be used in accordance with these terms, including by any party that you may share it with in the course of your business. All apps uploaded or published to Google Play must use the advertising ID (when available on a device) in lieu of any other device identifiers for any advertising purposes.

https://play.google.com/about/monetization-ads/ads/ad-id/

Violations of Ad ID Policies

I examined the AppCensus database to check compliance with this policy: are there apps violating it by transmitting the ad ID alongside other persistent identifiers to advertisers? When I performed this experiment last September, there were approximately 24k apps in our database that we had observed transmitting the ad ID. Of these, approximately 17k (i.e., ~70%) were transmitting the ad ID alongside other persistent identifiers. Judging by the recipients of the data sent by some of the most popular offenders, these identifiers are clearly being used for advertising purposes:

App Name | Installs | Data Types | Recipient
Clean Master – Antivirus, Cleaner & Booster | 1B | Ad ID + Android ID | t.appsflyer.com
Subway Surfers | 1B | Android ID | api.vungle.com
Flipboard: News For Our Time | 500M | Ad ID + Android ID | ad.flipboard.com
My Talking Tom | 500M | Ad ID + Android ID | m2m1.inner-active.mobi
Temple Run 2 | 500M | Ad ID + Android ID | live.chartboost.com
3D Bowling | 100M | Ad ID + Android ID + IMEI | ws.tapjoyads.com
8 Ball Pool | 100M | Ad ID + Android ID | ws.tapjoyads.com
Agar.io | 100M | Ad ID + Android ID | ws.tapjoyads.com
Angry Birds Classic | 100M | Android ID | ads.api.vungle.com
Audiobooks from Audible | 100M | Ad ID + Android ID | api.branch.io
Azar | 100M | Ad ID + Android ID | api.branch.io
B612 – Beauty & Filter Camera | 100M | Ad ID + Android ID | t.appsflyer.com
Banana Kong | 100M | Ad ID + Android ID | live.chartboost.com
Battery Doctor – Battery Life Saver & Battery Cooler | 100M | Ad ID + Android ID + IMEI | t.appsflyer.com
BeautyPlus – Easy Photo Editor & Selfie Camera | 100M | Ad ID + Android ID | t.appsflyer.com, live.chartboost.com
Bus Rush | 100M | Ad ID + Android ID | ads.api.vungle.com, ws.tapjoyads.com
CamScanner – Phone PDF Creator | 100M | Ad ID + Android ID + IMEI | t.appsflyer.com
Cheetah Keyboard – Emoji & Stickers Keyboard | 100M | Ad ID + Android ID | t.appsflyer.com
Cooking Fever | 100M | Ad ID + Android ID | ws.tapjoyads.com
Cut The Rope Full FREE | 100M | Ad ID + Android ID | ws.tapjoyads.com

These are just the 20 most popular apps violating this policy, sorted alphabetically. All of the domains receiving the data in the right-most column are either advertising networks or companies otherwise involved in tracking users’ interactions with ads (i.e., to use Google’s language, “any advertising purposes”). In fact, as of today, there are over 18k distinct apps transmitting the ad ID alongside other persistent identifiers.
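
The underlying check is simple to express. What follows is a rough sketch of the filtering logic over a hypothetical export of observed transmissions; it is not the actual AppCensus pipeline.

```kotlin
// Hypothetical record of one observed network flow from an app.
data class ObservedFlow(
    val appPackage: String,
    val identifiers: Set<String>, // e.g. "ad_id", "android_id", "imei"
    val recipient: String
)

// Flag flows that pair the ad ID with at least one other persistent identifier.
fun potentialViolations(flows: List<ObservedFlow>): List<ObservedFlow> =
    flows.filter { "ad_id" in it.identifiers && it.identifiers.size > 1 }

fun main() {
    val flows = listOf(
        ObservedFlow("com.example.game", setOf("ad_id", "android_id"), "ws.tapjoyads.com"),
        ObservedFlow("com.example.news", setOf("ad_id"), "ad.doubleclick.net")
    )
    // Only the first flow pairs the ad ID with another persistent identifier.
    potentialViolations(flows).forEach { println("${it.appPackage} -> ${it.recipient}") }
}
```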

In September, our research group reported just under 17k apps to Google that were transmitting the ad ID alongside other identifiers. The data we gave them included the data types being transmitted and a list of the recipient domains, which included some of the following companies involved in mobile advertising:

  • ad-mediation.tuanguwen.com
  • ad.adsrvr.org
  • ad.doubleclick.net
  • ad.lkqd.net
  • adc-ad-assets.adtilt.com
  • admarvel-d.openx.net
  • admediator.unityads.unity3d.com
  • adproxy.fyber.com
  • ads-roularta.adhese.com
  • ads-secure.videohub.tv
  • ads.adadapted.com
  • ads.adecosystems.net
  • ads.admarvel.com
  • ads.api.vungle.com
  • ads.flurry.com
  • ads.heyzap.com
  • ads.mopub.com
  • ads.nexage.com
  • ads.superawesome.tv
  • adtrack.king.com
  • adwatch.appodeal.com
  • amazon-adsystem.com
  • androidads23.adcolony.com
  • api.salmonads.com
  • app.adjust.com
  • init.supersonicads.com
  • live.chartboost.com
  • marketing-ssl.upsight-api.com
  • track.appsflyer.com
  • ws.tapjoyads.com

The majority of these have the word “ads” in the hostname. Inspecting the traffic shows that they are used either to place ads in apps or to track user engagement with ads.

It has been 5 months since we submitted that report, and we have not heard anything from Google about whether they plan to address this pervasive problem. In the interim, more apps now appear to be violating Google’s policy. The problem with all of this is that Google provides users with privacy controls, but those controls don’t actually accomplish anything: they only reset the ad ID, and we’ve shown that in the vast majority of cases apps collect other persistent identifiers in addition to the ad ID.

https://blog.appcensus.mobi/2019/02/14/ad-ids-behaving-badly/

Germany bans Facebook from combining user data without permission

Germany’s Federal Cartel Office, or Bundeskartellamt, on Thursday banned Facebook from combining user data from its various platforms such as WhatsApp and Instagram without explicit user permission.

The decision, which comes as the result of a nearly three-year antitrust investigation into Facebook’s data-gathering practices, also bans the social media company from gleaning user data from third-party sites unless users voluntarily consent.

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Bundeskartellamt President Andreas Mundt said in a release. “In [the] future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.”

Mundt noted that combining user data from various sources “substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power.”

Experts agreed with the decision. “It is high time to regulate the internet giants effectively!” said Marc Al-Hames, general manager of German data protection technologies developer Cliqz GmbH. “Unregulated data capitalism inevitably creates unfair conditions.”

Al-Hames noted that apps like WhatsApp have become “indispensable for many young people,” who feel compelled to join if they want to be part of the social scene. “Social media create social pressure,” he said. “And Facebook exploits this mercilessly: Give me your data or you’re an outsider.”

He called the practice an abuse of a dominant market position. “But that’s not all: Facebook monitors our activities regardless of whether we are a member of one of its networks or not. Even those who consciously renounce the social networks for the sake of privacy will still be spied on,” he said, adding that Cliqz and Ghostery statistics show that “every fourth website visit is monitored by Facebook’s data collection technologies, so-called trackers.”

The Bundeskartellamt’s decision will prevent Facebook from collecting and using data without restriction. “Voluntary consent means that the use of Facebook’s services must [now] be subject to the users’ consent to their data being collected and combined in this way,” said Mundt. “If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

The ban drew support and calls for it to be expanded to other companies.

“This latest move by Germany’s competition regulator is welcome,” said Morten Brøgger, CEO of secure collaboration platform Wire. “Compromising user privacy for profit is a risk no exec should be willing to take.”

Brøgger contends that Facebook has not fully understood digital privacy’s importance. “From emails suggesting cashing in on user data for money, to the infamous Cambridge Analytica scandal, the company is taking steps back in a world which is increasingly moving towards the protection of everyone’s data,” he said.

“The lesson here is that you cannot simply trust firms that rely on the exchange of data as their main offering,” Brøgger added, “and firms using Facebook-owned applications should have a rethink about the platforms they use to do business.”

Al-Hames said regulators shouldn’t stop with Facebook, which he called the number-two offender. “By far the most important data monopolist is Alphabet. With Google search, the Android operating system, the Play Store app sales platform and the Chrome browser, the internet giant collects data on virtually everyone in the Western world,” Al-Hames said. “And even those who want to get free by using alternative services stay trapped in Alphabet’s clutches: With a tracker reach of nearly 80 percent of all page loads, Alphabet probably knows more about them than their closest friends or relatives. When it comes to our data, the top priority of the market regulators shouldn’t be Facebook, it should be Alphabet!”

Source: https://www.scmagazine.com/home/network-security/germany-bans-facebook-from-combining-user-data-without-permission/

Apple tells iOS apps using Glassbox to remove screen recording code

Pedestrians pass in front of a billboard advertising Apple Inc. iPhone security during the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada, U.S., on Monday, Jan. 7, 2019. Apple made its presence felt at CES 2019 with a massive billboard highlighting the iPhone’s privacy features. Photo: David Paul Morris/Bloomberg via Getty Images

Apple is telling app developers to remove or properly disclose their use of analytics code that allows them to record how a user interacts with their iPhone apps — or face removal from the app store, TechCrunch can confirm.

In an email, an Apple spokesperson said: “Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity.”

“We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary,” the spokesperson added.

It follows an investigation by TechCrunch that revealed major companies, like Expedia, Hollister and Hotels.com, were using a third-party analytics tool to record every tap and swipe inside the app. We found that none of the apps we tested asked the user for permission, and none of the companies said in their privacy policies that they were recording a user’s app activity.

Even though sensitive data is supposed to be masked, some data — like passport numbers and credit card numbers — was leaking.
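
For context, the masking such session-replay tools are supposed to apply amounts to redacting sensitive fields before a recording leaves the device. The following is an illustrative sketch only, not Glassbox’s actual API, and as reported, exactly this kind of redaction was failing in practice.

```kotlin
// Hypothetical captured form field from a recorded session.
data class CapturedField(val label: String, val value: String)

private val SENSITIVE = listOf("password", "passport", "card", "cvv", "ssn")

// Replace the values of sensitive-looking fields with asterisks
// before the captured session is uploaded anywhere.
fun mask(fields: List<CapturedField>): List<CapturedField> =
    fields.map { field ->
        if (SENSITIVE.any { it in field.label.lowercase() })
            field.copy(value = "*".repeat(field.value.length))
        else field
    }

fun main() {
    val session = listOf(
        CapturedField("email", "user@example.com"),
        CapturedField("card number", "4111111111111111")
    )
    // The card number is redacted; the email is left intact.
    mask(session).forEach { println("${it.label}: ${it.value}") }
}
```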

Glassbox is a cross-platform analytics tool that specializes in session replay technology. It allows companies to integrate its screen recording technology into their apps to replay how a user interacts with the apps. Glassbox says it provides the technology, among many reasons, to help reduce app error rates. But the company “doesn’t enforce its customers” to mention that they use Glassbox’s screen recording tools in their privacy policies.

But Apple expressly forbids apps that covertly collect data without a user’s permission.

TechCrunch began hearing on Thursday that app developers had already been notified that their apps had fallen afoul of Apple’s rules. One app developer was told by Apple to remove code that recorded app activities, citing the company’s app store guidelines.

“Your app uses analytics software to collect and send user or device data to a third party without the user’s consent. Apps must request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity,” Apple said in the email.

Apple gave the developer less than a day to remove the code and resubmit their app or the app would be removed from the app store, the email said.

When asked if Glassbox was aware of the app store removals, a spokesperson for Glassbox said that “the communication with Apple is through our customers.”

Glassbox is also available to Android app developers. Google did not immediately say whether it would also ban the screen recording code. Google Play also expressly prohibits apps from secretly collecting device usage data. “Apps must not hide or cloak tracking behavior or attempt to mislead users about such functionality,” the developer rules state. We’ll update if and when we hear back.

It’s the latest privacy debacle that has forced Apple to wade in to protect its customers after apps were caught misbehaving.

Last week, TechCrunch reported that Apple banned Facebook’s “research” app, which paid teenagers to let the social media giant collect all of their data.

It followed another investigation by TechCrunch that revealed Facebook misused its Apple-issued enterprise developer certificate to build and provide apps for consumers outside Apple’s App Store. Apple temporarily revoked Facebook’s enterprise developer certificate, knocking all of the company’s internal iOS apps offline for close to a day.

Source: https://techcrunch.com/2019/02/07/apple-glassbox-apps/
