Author archive: innovation

Two-factor authentication explained: How to choose the right level of security for every account

If you aren’t already protecting your most personal accounts with two-factor or two-step authentication, you should be. An extra line of defense that’s tougher than the strongest password, 2FA is extremely important for blocking hacks and attacks on your personal data. If you don’t quite understand what it is, we’ve broken it all down for you.

Two-factor authentication: What it is

Two-factor authentication is basically a combination of two of the following factors:

  1. Something you know
  2. Something you have
  3. Something you are

Something you know is your password, so 2FA always starts there. Rather than let you into your account once your password is entered, however, two-factor authentication requires a second set of credentials, like when the DMV wants your license and a utility bill. So that’s where factors 2 and 3 come into play. Something you have is your phone or another device, while something you are is your face, irises, or fingerprint. If you can’t provide authentication beyond the password alone, you won’t be allowed into the service you’re trying to log into.

There are several options for the second factor: SMS, authenticator apps, Bluetooth-, USB-, and NFC-based security keys, and biometrics. Let’s take a look at each so you can decide which is best for you.

Two-factor authentication: SMS

When you choose SMS-based 2FA, all you need is a mobile phone number. (Michael Simon/IDG)

What it is: The most common “something you have” second authentication method is SMS. A service will send a text to your phone with a numerical code, which then needs to be typed into the field provided. If the codes match, your identity is verified and access is granted.

How to set it up: Nearly every two-factor authentication system uses SMS by default, so there isn’t much to do beyond flipping the toggle or switch to turn on 2FA on the chosen account. Depending on the app or service, you’ll find it somewhere in settings, under Security if the tab exists. Once activated you’ll need to enter your password and a mobile phone number.

How it works: When you turn on SMS-based authentication, you’ll receive a code via text that you’ll need to enter after you type your password. That protects you against someone randomly logging into your account from somewhere else, since your password alone is useless without the code. While some apps and services rely solely on SMS-based 2FA, many offer numerous options, even if SMS is selected by default.
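On the service’s side, SMS 2FA boils down to generating a short-lived one-time code, texting it to your number, and comparing it on the next request. Here is a minimal sketch of that flow in Python; the 6-digit format, five-minute window, and function names are illustrative assumptions, not any particular provider’s implementation:

    import secrets
    import time

    CODE_TTL_SECONDS = 300   # assumed 5-minute validity window
    pending_codes = {}       # phone number -> (code, issued_at); a real service uses a datastore

    def issue_sms_code(phone_number: str) -> str:
        """Generate a 6-digit one-time code and record when it was issued."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        pending_codes[phone_number] = (code, time.time())
        # This is where the code would be handed off to an SMS gateway.
        return code

    def verify_sms_code(phone_number: str, submitted: str) -> bool:
        """Accept the code only once, and only within the validity window."""
        entry = pending_codes.pop(phone_number, None)
        if entry is None:
            return False
        code, issued_at = entry
        return time.time() - issued_at <= CODE_TTL_SECONDS and secrets.compare_digest(code, submitted)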

With SMS-based authentication, you’ll get a code via text that will allow access to your account. (IDG)

How secure it is: By definition, SMS authentication is the least secure method of two-factor authentication. Your phone can be cloned or just plain stolen, SMS messages can be intercepted, and by nature most default messaging apps aren’t encrypted. So the code that’s sent to you could possibly fall into someone’s hands other than yours. It’s unlikely to be an issue unless you’re a valuable target, however.

How convenient it is: Very. You’re likely to always have your phone within reach, so the second authentication step is super convenient, especially if the account you’re signing into is on your phone.

Should you use it? Any two-factor authentication is better than none, but if you’re serious about security, SMS won’t cut it.

Two-factor authentication: Authenticator apps

Authenticator apps generate random codes that aren’t delivered over SMS.

What it is: Like SMS-based two-factor authentication, authenticator apps generate codes that need to be entered when prompted. However, rather than being sent over unencrypted SMS, they’re generated within an app, and you don’t even need an Internet connection to get one.

How to set it up: To get started with an authenticator app, you’ll need to download one from the Play Store or the App Store. Google Authenticator works great for your Google account and anything you use it to log into, but there are other great ones as well, including Authy, LastPass, and Microsoft’s, plus authenticators from individual companies such as Blizzard, Sophos, and Salesforce. If an app or service supports authenticator apps, it’ll supply a QR code that you can scan or enter on your phone.

How it works: When you open your chosen authenticator app and scan the QR code, a 6-digit code will appear, just like with SMS 2FA. Enter that code into the app or site that’s prompting you and you’re good to go. After the initial setup, you’ll be able to open the app to get a fresh code whenever you need one, without scanning a QR code again.
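Under the hood, most of these apps implement TOTP (RFC 6238): the QR code carries a shared secret, and both the app and the server derive the current 6-digit code from that secret plus the current 30-second time window, which is why no network connection is needed. A minimal sketch using only the Python standard library; the base32 secret below is a made-up example, and a real one comes from the QR code you scan:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
        """Derive the current TOTP code from a base32 shared secret (RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // time_step               # current 30-second window
        msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{value % (10 ** digits):0{digits}d}"

    print(totp("JBSWY3DPEHPK3PXP"))  # same code the server computes for this time window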

Authenticator apps generate random codes every 30 seconds and can be used offline. (IDG)

How secure it is: Unless someone has access to your phone or whatever device is running your authenticator app, it’s very secure. Since codes are randomized within the app and aren’t delivered over SMS, there’s no way for prying eyes to steal them. For extra security, Authy lets you set PIN and password protection, too, something Google doesn’t offer in its authenticator app.

How convenient it is: While opening an app is slightly less convenient than receiving a text message, authenticator apps don’t take more than a few seconds to use. They’re far more secure than SMS, and you can use them offline if you ever need a code but have no connection.

Should you use it? An authenticator app strikes the sweet spot between security and convenience. While you might find some services that don’t support authenticator apps, the vast majority do.

Two-factor authentication: Universal second factor (security key)

As their name implies, security keys are the most secure way to lock down your account. (Michael Simon/IDG)

What it is: Unlike SMS- and authenticator-based 2FA, universal second factor is truly a “something you have” method of protecting your accounts. Instead of a digital code, the second factor is a hardware-based security key. You’ll need to order a physical key to use it, which will connect to your phone or PC via USB, NFC, or Bluetooth.

You can buy a Titan Security Key bundle from Google for $50, which includes a USB-A security key and a Bluetooth security key along with a USB-A-to-USB-C adapter, or buy one from Yubico. An NFC-enabled key is recommended if you’re going to be using it with a phone.

How to set it up: Setting up a security key is basically the same as the other methods, except you’ll need a computer. You’ll need to turn on two-factor authentication and then select the “security key” option, if it’s available. Most popular services, such as Twitter, Facebook, and Google, support security keys, so your most vulnerable accounts should be all set. However, while Chrome, Firefox, and Microsoft’s Edge browser all support security keys, Apple’s Safari browser does not, so you’ll be prompted to switch during setup.

Once you reach the security settings page for the service you’re enabling 2FA with, select security key and follow the prompts. You’ll be asked to insert your key (so make sure you have a USB-C adapter on hand if you have a MacBook) and press the button on it. That will initiate the connection with your computer, pair your key, and in a few seconds your account will be ready to go.

How it works: When an account requests 2FA verification, you’ll need to plug your security key into your phone or PC’s USB port or (if supported) tap it against the back of your NFC-enabled phone. Then it’s only a matter of pressing the button on the key to establish the connection and you’re in.
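Conceptually, the key is doing public-key challenge-response: at registration the key keeps a private key and hands the service the matching public key; at login the service sends a random challenge, the key signs it when you press the button, and the service verifies the signature. The real FIDO/U2F protocol adds origin binding, attestation, and counters, but the core idea can be sketched with an ordinary signature scheme. This sketch uses the third-party Python cryptography package and is a conceptual illustration, not the actual U2F wire format:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the key generates a keypair; only the public half goes to the service.
    device_private_key = Ed25519PrivateKey.generate()
    service_public_key = device_private_key.public_key()

    # Login: the service issues a random challenge...
    challenge = os.urandom(32)

    # ...the key signs it when the user presses the button...
    signature = device_private_key.sign(challenge)

    # ...and the service checks the signature against the stored public key.
    try:
        service_public_key.verify(signature, challenge)
        print("second factor accepted")
    except InvalidSignature:
        print("second factor rejected")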

Setting up your security key with your Google account is a multi-step process. (IDG)

How secure it is: Extremely. Since all of the login authentication is stored on a physical key that is either on your person or stored somewhere safe, the odds of someone accessing your account are extremely low. To do so, they would need to steal both your password and the key itself, which is very unlikely.

How convenient it is: Not very. When you log into one of your accounts on a new device, you’ll need to type your password and then authenticate it via the hardware key, either by inserting it into your PC’s USB port or pressing it against the back of an NFC-enabled phone. Neither method takes more than a few seconds, though, provided you have your security key within reach.

Two-factor authentication: Google Advanced Protection Program

What it is: If you want to completely lock down your most important data, Google offers the Advanced Protection Program for your Google account, which disables everything except security key-based 2FA. It also limits access to your emails and Drive files to Google apps and select third-party apps, and shuts down web access from browsers other than Chrome and Firefox.

How to set it up: You’ll need to make a serious commitment. To enroll in Google Advanced Protection, you’ll need to purchase two security keys: one as your main key and one as your backup key. Google sells its own Titan Security Key bundle, but you can also buy a set from Yubico or Feitian.

Once you get your keys, you’ll need to register them with your Google account and then agree to turn off all other forms of authentication. But here’s the rub: To ensure that every one of your devices is properly protected, Google will log you out of every account on every device you own so you can log in again using Advanced Protection.

How it works: Advanced Protection works just like standard security-key 2FA, except you won’t be able to choose a different method if you forget or lose your security key.

How secure it is: Google Advanced Protection is basically impenetrable. By relying solely on security keys, it ensures that no one can access your account without both your password and your physical key, a combination an attacker is extremely unlikely to obtain.

How convenient it is: By nature, Google Advanced Protection is supposed to make it difficult for hackers to access your Google account and anything associated with it, so naturally it’s not so easy for the user either. Since there’s no fallback authentication method, you’ll need to remember your key whenever you leave the house. And when you run into a roadblock—like the Safari browser on a Mac—you’re pretty much out of luck. But if you want your account to have the best possible protection, accept no substitute.

Two-factor authentication: Biometrics

Nearly every smartphone made today has some form of secure biometrics built into it. (Christopher Hebert/IDG)

What it is: A password-free world where all apps and services are authenticated by a fingerprint or facial scan.

How to set it up: You can see biometrics at work when you opt to use the fingerprint scanner on your phone or Face ID on the iPhone XS, but at the moment, biometric security is little more than a replacement for your password after you log in and verify via another 2FA method.

How it works: Like the way you use your fingerprint or face to unlock your smartphone, biometric 2FA uses your body’s unique characteristics as your password. So your Google account would know it was you based on your scan when you set up your account, and it would automatically allow access when it recognized you.

How secure it is: Since it’s extremely difficult to clone your fingerprint or face, biometric authentication is the closest thing to a digital vault.

How convenient it is: You can’t go anywhere without your fingerprint or your face, so it doesn’t get more convenient than that.

Two-factor authentication: iCloud

Apple sends a code to one of your trusted devices when it needs authentication to access an account. (Michael Simon/IDG)

What it is: Apple has its own method of two-factor authentication for your iCloud and iTunes accounts that involves setting up trusted Apple devices (iPhone, iPad, or Mac—Apple Watch isn’t supported) that can receive verification codes. You can also set up trusted numbers to receive SMS codes or get verification codes via an authenticator built into the Settings app.

How to set it up: As long as you’re logged into your iCloud account, you can turn on two-factor authentication from pretty much anywhere. Just go into Settings on your iOS device or System Preferences on your Mac, then Security, and Turn On Two-Factor Authentication. From there, you can follow the prompts to set up your trusted phone number and devices.

How it works: When you need to access an account protected by 2FA, Apple will send a code to one of your trusted devices. If you don’t have a second Apple device, Apple will send you a code via SMS, or you can get one from the Settings app on your iPhone or System Preferences on your Mac.

When Apple needs a code to log into an account, it sends it to one of your trusted devices. (IDG)

How secure it is: It depends on how many Apple devices you own. If you own more than one Apple device, it’s very secure. Apple will send a code to one of your other devices whenever you or someone else tries to log into your account or one of Apple’s services on a new device. It even tells you the location of the request, so if you don’t recognize it you can instantly reject it, before the code even appears.

If you only have one device, you’ll have to use SMS or Apple’s built-in authenticator, neither of which is all that secure, especially since both steps are likely to happen on the same device. Also, Apple has a weird snafu that sends the 2FA access code to the same device you’re using when you manage your account in a browser, which also defeats the purpose of 2FA.

How convenient it is: If you’re using an iPhone and have an iPad or Mac nearby, the process takes seconds, but if you don’t have an Apple device within reach or are away from your keyboard, it can be tedious.

Source: https://www.pcworld.com/article/3387420/two-factor-authentication-faq-sms-authenticator-security-key-icloud.html

SS7 contains back doors, leaving it as full of holes as Swiss cheese

The outages hit in the summer of 1991. Over several days, phone lines in major metropolises went dead without warning, disrupting emergency services and even air traffic control, often for hours. Phones went down one day in Los Angeles, then on another day in Washington, DC and Baltimore, and then in Pittsburgh. Even after service was restored to an area, there was no guarantee the lines would not fail again—and sometimes they did. The outages left millions of Americans disconnected.

The culprit? A computer glitch. A coding mistake in software used to route calls for a piece of telecom infrastructure known as Signaling System No. 7 (SS7) caused network-crippling overloads. It was an early sign of the fragility of the digital architecture that binds together the nation’s phone systems.

Leaders on Capitol Hill called on the one agency with the authority to help: the Federal Communications Commission (FCC). The FCC made changes, including new outage reporting requirements for phone carriers. To help the agency respond to digital network stability concerns, the FCC also launched an outside advisory group—then known as the Network Reliability Council but now called the Communications Security, Reliability, and Interoperability Council (CSRIC, pronounced “scissor-ick”).

Yet decades later, SS7 and other components of the nation’s digital backbone remain flawed, leaving calls and texts vulnerable to interception and disruption. Instead of facing the challenges of our hyper-connected age, the FCC is stumbling, according to documents obtained by the Project On Government Oversight (POGO) and through extensive interviews with current and former agency employees. The agency is hampered by a lack of leadership on cybersecurity issues and a dearth of in-house technical expertise that all too often leaves it relying on security advice from the very companies it is supposed to oversee.

Captured

CSRIC is a prime example of this so-called “agency capture”—the group was set up to help supplement FCC expertise and craft meaningful rules for emerging technologies. But instead, the FCC’s reliance on security advice from industry representatives creates an inherent conflict of interest. The result is weakened regulation and enforcement that ultimately puts all Americans at risk, according to former agency staff.

While the agency took steps to improve its oversight of digital security issues under the Obama administration, many of these reforms have been walked back under current Chairman Ajit Pai. Pai, a former Verizon lawyer, has consistently signaled that he doesn’t want his agency to play a significant role in the digital security of Americans’ communications—despite security being a core agency responsibility since the FCC’s inception in 1934.

The FCC’s founding statute charges it with crafting regulations that promote the “safety of life and property through the use of wire and radio communications,” giving it broad authority to secure communications. Former FCC Chairman Tom Wheeler and many legal experts argue that this includes cyber threats.

As a regulator, the FCC carries a stick: it can hit communications companies with fines if they don’t comply with its rules. That responsibility is even more important now that “smart” devices are networking almost every aspect of our lives.

But not everyone thinks the agency’s mandate is quite so clear, especially in the telecom industry. Telecom companies fight back hard against regulation; over the last decade, they spent nearly a billion dollars lobbying Congress and federal agencies, according to data from OpenSecrets. The industry argues that the FCC’s broad mandate to secure communications doesn’t extend to cybersecurity, and it has pushed for oversight of cybersecurity to come instead from other parts of government, typically the Department of Homeland Security (DHS) or the Federal Trade Commission (FTC)—neither of which is vested with the same level of rule-making powers as the FCC.

To Wheeler, himself the former head of industry trade group CTIA, the push toward DHS seemed like an obvious ploy. “The people and companies the FCC was charged with regulating wanted to see if they could get their jurisdiction moved to someone with less regulatory authority,” he told POGO.

But Chairman Pai seems to agree with industry. In a November 2018 letter to Senator Ron Wyden (D-Ore.) about the agency’s handling of SS7 problems, provided to POGO by the senator’s office, Pai wrote that the FCC “plays a supporting role, as a partner with DHS, in identifying vulnerabilities and working with stakeholders to increase security and resiliency in communications network infrastructure.”

The FCC declined to comment for this story.

The current FCC declined comment, but POGO spoke with former chairman Tom Wheeler, seen here speaking with Ars (https://arstechnica.com/information-technology/2016/03/how-a-former-lobbyist-became-the-broadband-industrys-worst-nightmare/) back in 2016. (Jon Brodkin)

Failing to protect the “crown jewels” of telecom

How the telecom industry leveraged lawmakers’ calls for FCC reform in the wake of the SS7 outages is a case study in how corporate influence can overcome even the best of the government’s intentions.

From the beginning, industry representatives dominated membership of the advisory group now known as CSRIC—though, initially, the group only provided input on a small subset of digital communications issues. Over time, as innovations in communications raced forward with the expansion of cellular networks and the Internet, the FCC’s internal technical capabilities didn’t keep up: throughout the 1990s and early 2000s, the agency’s technical expertise was largely limited to telephone networks while the world shifted to data networks, former staffers told POGO. The few agency staffers with expertise on new technologies were siloed in different offices, making it hard to coordinate a comprehensive response to the paradigm shift in communication systems. That gap left the agency increasingly dependent on advice from CSRIC.

During the early 1990s, the SS7-based software system was just coming into wide use. Today, though, it is considered outdated and insecure. Despite that, carriers still use the technology as a backup in their networks. This leaves the people who rely on those networks vulnerable to the technology’s problems, as Jonathan Mayer, a Princeton computer science and public affairs professor and former FCC Enforcement Bureau chief technologist, explained during a Congressional hearing in June 2018.

Unlike in the 1990s, the risks now go much deeper than just service disruption. Researchers have long warned that flaws in the system allow cybercriminals or hackers—sometimes working on behalf of foreign adversaries—to turn cell phones into sophisticated geo-tracking devices or to intercept calls and text messages. Security problems with SS7 are so severe that some government agencies and some major companies like Google are moving away from using codes sent via text to help secure important accounts, such as those for email or online banking.

A panel advising President Bill Clinton raised the alarm back in 1997, saying that SS7 was among America’s networking “crown jewels” and warning that if those crown jewels were “attacked or exploited, [it] could result in a situation that threatened the security and reliability of the telecommunications infrastructure.” By 2001, security researchers argued that risks associated with SS7 were multiplying thanks to “deregulation” and “the Internet and wireless networks.” They were proved right in 2008 when other researchers demonstrated ways that hackers could use flaws in SS7 to pinpoint the location of unsuspecting cell phone users.

By 2014, it was clear that foreign governments had caught on to the disruptive promise of the problem. That year, Russian intelligence used SS7 vulnerabilities to attack a Ukrainian telecommunications company, according to a report published by NATO’s Cooperative Cyber Defence Centre of Excellence, and more research about SS7 call interception made headlines in The Washington Post and elsewhere.

Despite the increasingly dire stakes, the FCC didn’t pay much attention to the issue until the summer of 2016, after Rep. Ted Lieu (D-Calif.) allowed 60 Minutes to demonstrate how researchers could use security flaws in the SS7 protocol to spy on his phone. The FCC—then led by Wheeler—responded by essentially passing the buck to CSRIC. It created a working group to study and make security recommendations about SS7 and other so-called “legacy systems.” The result was a March 2017 report with non-binding guidance about best practices for securing against SS7 vulnerabilities, a non-public report, and the eventual creation of yet another CSRIC working group to study similar security issues.

A POGO analysis of CSRIC membership in recent years shows that its membership, which is solely appointed by the FCC chairman, leans heavily toward industry. And the authorship of the March 2017 report was even more lopsided than CSRIC, overall. Of the twenty working-group members listed in the final report, only five were from the government, including four from the Department of Homeland Security. The remaining fifteen represented private-sector interests. None were academics or consumer advocates.

The working group’s leadership was drawn entirely from industry. The group’s co-chairs came from networking infrastructure company Verisign and iconectiv, a subsidiary of Swedish telecom company Ericsson. The lead editor of the group’s final report was CTIA Vice President for Technology and Cyber Security John Marinho.

Emails from 2016 between working group members, obtained by POGO via a Freedom of Information Act request, show that the group dragged its feet on resolving SS7 security vulnerabilities despite urging from FCC officials to move quickly. The group also repeatedly ignored input from DHS technical experts.

The problem wasn’t figuring out a fix, however, according to David Simpson, a retired rear-admiral who led the FCC’s Public Safety and Homeland Security Bureau at the time. The group was quickly able to discern some best practices—primarily through using different filtering systems—that some major carriers had already deployed and that others could use to mitigate the risks associated with SS7.

“We knew the answer within the first couple months from the technical experts in the working groups,” said Simpson, who consulted with the Working Group. But ultimately, the “consensus orientation of the CSRIC unfortunately allowed” the final report to be pushed from the lame-duck session into the Trump administration—which is not generally inclined toward introducing new federal regulations.

Overall, POGO’s analysis of emails from the group and interviews with former FCC staff found that industry dominance of CSRIC appears to have contributed to a number of issues with the process and the final report, including:

  • Industry members of the working group successfully pushed for the final recommendations to rely on voluntary compliance, according to former FCC staffers. Security experts say that strategy ultimately leaves the entire cellular network at risk because there are thousands of smaller providers, often in rural areas, that are unlikely to prioritize rolling out the needed protections without a firm rule.
  • An August 2016 email shows that, early on in the process, DHS experts objected to describing the working group’s focus as being on “legacy” systems because it “conveys a message that these protocols and associated threats are going away soon and that’s not necessarily the case.” The group did not revise the legacy language, and it remained in the final report.
  • In an email from September 2016, an FCC official emailed Marinho, noting that edits from DHS were not being incorporated into the working draft. Marinho responded that he received them too late and planned to incorporate them in a later version. However, in a May 2018 letter to the FCC, Senator Wyden said DHS officials told his office that “the vast majority of edits to the final report” suggested by DHS experts “were rejected.”

In the emails obtained by POGO, Marinho also refers to warnings about security issues with SS7 that came from panelists at an event organized by George Washington University’s Cybersecurity Strategy and Information Management Program as “hyperbolic.”

Marinho did not respond to a series of specific questions about the Working Group’s activities. In a statement to POGO, CTIA said, “[t]he wireless industry is committed to safeguarding consumer security and privacy and collaborates closely with DHS, the FCC, and other stakeholders to combat evolving threats that could impact communications networks.”

The working group’s report acknowledged that problems remained with SS7, but it recommended voluntary measures and put the onus on telecom users to take extra steps like using apps that encrypt their phone calls and texts.

Criminals, terrorists, and spies

Just a month after the CSRIC working group released its SS7 report, DHS took a much more ominous tone, releasing a report that warned that SS7 “vulnerabilities can be exploited by criminals, terrorists, and nation-state actors/foreign intelligence organizations” and said that “many organizations appear to be sharing or selling expertise and services that could be used to spy on Americans.”

DHS wanted action.

“New laws and authorities may be needed to enable the Government to independently assess the national security and other risks associated with SS7” and other communications protocols, the agency wrote.

But DHS also admitted it wasn’t necessarily the agency that would take the lead. A footnote in that section reads: “Federal agencies such as the FCC and FTC may have authorities over some of these issues.”

CTIA pushed back with a confidential May 2017 white paper that downplayed the risks associated with SS7 and argued against stronger security rules. The paper, which was sent to DHS and to members of Congress, was later obtained and published by Motherboard.

“Congress and the Administration should reject the [DHS] Report’s call for greater regulation,” the trade group wrote.

When CSRIC created yet another working group in late 2017 to continue studying network reliability issues during Pai’s tenure, the DHS experts who objected to the previous working group’s report were “not invited back to participate,” according to Wyden’s May 2018 letter. The final report from that working group lists just one representative from DHS, compared to four in the previous group.

When reached for comment, DHS did not directly address questions about the agency’s experience with CSRIC.

Aside from DHS and individual members of Congress, other parts of the US government have signaled concerns about SS7. For example, as the initial CSRIC working group was starting to review the issue in the summer of 2016, the National Institute of Standards and Technology (NIST), an agency that sets standards for government best practices, released draft guidance echoing that of Google and other tech companies. It warned people away from relying on text messaging to validate identity for various online accounts and services because of the security issues.

But the draft drew pushback from the telecom industry, including CTIA.

“There is insufficient evidence at this time to support removing [text message] authentication in future versions of the Digital Identity Guidelines,” CTIA argued in comments on the NIST draft. After the pushback, NIST caved to industry pressure and removed its warning about relying on texts from the final version of its guidance.

While the government was deliberating, criminals were finding ways to exploit SS7 flaws.

In the summer of 2017, German researchers found that hackers used vulnerabilities in SS7 to drain victims’ bank accounts—exploiting essentially the same type of problems that NIST tried to flag in the scrapped draft guidance.

By 2018, attacks started happening in the domestic digital world, Senator Wyden wrote in his May 2018 letter to the FCC.

“This threat is not merely hypothetical—malicious attackers are already exploiting SS7 vulnerabilities,” Wyden wrote. “One of the major wireless carriers informed my office that it reported an SS7 breach, in which customer data was accessed, to law enforcement” using a portal managed by the FCC, he wrote.

The details of that incident remain unclear, presumably due to an ongoing investigation. However, the senator’s alarm highlights the fact that SS7 continues to put America’s economic and national security at risk.

Indeed, a report by The New York Times in October 2018 suggests that even the president’s own communications are vulnerable due to security problems in cellular networks, potentially including SS7. Chinese and Russian intelligence have gained valuable information about the president’s policy deliberations by intercepting calls made on his personal iPhone “as they travel through the cell towers, cables, and switches that make up national and international cellphone networks,” the Times reported.

The president disputed this account of his phone-use habits in a tweet, apparently sent from “Twitter for iPhone.” The next month, President Trump signed a law creating a Cybersecurity and Infrastructure Security Agency (or CISA) within the Department of Homeland Security, the agency that industry often suggests should oversee communications infrastructure cybersecurity instead of the FCC.

“CISA works regularly with the FCC and the communications sector to address security vulnerabilities and enhance the resilience of the nation’s communications infrastructure,” the agency said in a statement in response to questions for this story. “Our role as the nation’s risk advisor includes working with companies to exchange threat information, mitigate vulnerabilities, and provide incident response upon request.”

Other than efforts to reduce the reliance of US networks on technology from Chinese manufacturers, driven by fears about “supply chain security,” the FCC has largely abandoned its responsibility for protecting America’s networks from looming digital threats.

As the FCC’s engagement on cybersecurity has waned, so has CSRIC’s activity. CSRIC VI, whose members were chosen and nominated by current Chairman Pai, created fewer than a third as many working groups as its predecessor.

CSRIC VI’s final meeting was in March 2019. It’s unclear who will be part of the group’s seventh iteration—or if they will represent the public over the telecom industry’s interests.

Source: https://arstechnica.com/features/2019/04/fully-compromised-comms-how-industry-influence-at-the-fcc-risks-our-digital-security/2/

Alexa, do you work for the NSA? ;-)

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.

The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.

Amazon has offices in this Bucharest building. (Photographer: Irina Vilcu/Bloomberg)

Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.

“We take the security and privacy of our customers’ personal information seriously,” an Amazon spokesman said in an emailed statement. “We only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.

“We have strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system. Employees do not have direct access to information that can identify the person or account as part of this workflow. All information is treated with high confidentiality and we use multi-factor authentication to restrict access, service encryption and audits of our control environment to protect it.”

Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions.

In Alexa’s privacy settings, Amazon gives users the option of disabling the use of their voice recordings for the development of new features. The company says people who opt out of that program might still have their recordings analyzed by hand over the regular course of the review process. A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa reviewers don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.

The Intercept reported earlier this year that employees of Amazon-owned Ring manually identify vehicles and people in videos captured by the company’s doorbell cameras, an effort to better train the software to do that work itself.

“You don’t necessarily think of another human listening to what you’re telling your smart speaker in the intimacy of your home,” said Florian Schaub, a professor at the University of Michigan who has researched privacy issues related to smart speakers. “I think we’ve been conditioned to the [assumption] that these machines are just doing magic machine learning. But the fact is there is still manual processing involved.”

“Whether that’s a privacy concern or not depends on how cautious Amazon and other companies are in what type of information they have manually annotated, and how they present that information to someone,” he added.

When the Echo debuted in 2014, Amazon’s cylindrical smart speaker quickly popularized the use of voice software in the home. Before long, Alphabet Inc. launched its own version, called Google Home, followed by Apple Inc.’s HomePod. Various companies also sell their own devices in China. Globally, consumers bought 78 million smart speakers last year, according to researcher Canalys. Millions more use voice software to interact with digital assistants on their smartphones.

Alexa software is designed to continuously record snatches of audio, listening for a wake word. That’s “Alexa” by default, but people can change it to “Echo” or “computer.” When the wake word is detected, the light ring at the top of the Echo turns blue, indicating the device is recording and beaming a command to Amazon servers.
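In rough terms, the device keeps a short rolling buffer of audio and runs an on-device keyword spotter over it; only when the spotter fires does anything get sent upstream. The sketch below is purely conceptual; the buffer size and both placeholder functions are invented for illustration and are not Amazon’s implementation:

    from collections import deque

    BUFFER_FRAMES = 50   # assumed: roughly one second of short audio frames kept on-device
    audio_buffer = deque(maxlen=BUFFER_FRAMES)

    def detect_wake_word(frames) -> bool:
        # Placeholder for the on-device keyword-spotting model; always "no" in this sketch.
        return False

    def stream_to_cloud(frames) -> None:
        # Placeholder for uploading the buffered audio plus the follow-on command.
        print(f"uploading {len(frames)} frames")

    def process_frame(frame: bytes) -> None:
        audio_buffer.append(frame)                # older frames fall off the rolling buffer
        if detect_wake_word(list(audio_buffer)):  # nothing leaves the device until this fires
            stream_to_cloud(list(audio_buffer))   # light ring turns blue, command is beamed to servers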

An Echo smart speaker inside an Amazon 4-star store in Berkeley, California. (Photographer: Cayce Clifford/Bloomberg)

Most modern speech-recognition systems rely on neural networks patterned on the human brain. The software learns as it goes, by spotting patterns amid vast amounts of data. The algorithms powering the Echo and other smart speakers use models of probability to make educated guesses. If someone asks Alexa if there’s a Greek place nearby, the algorithms know the user is probably looking for a restaurant, not a church or community center.

But sometimes Alexa gets it wrong—especially when grappling with new slang, regional colloquialisms or languages other than English. In French, avec sa, “with his” or “with her,” can confuse the software into thinking someone is using the Alexa wake word. Hecho, Spanish for a fact or deed, is sometimes misinterpreted as Echo. And so on. That’s why Amazon recruited human helpers to fill in the gaps missed by the algorithms.

Apple’s Siri also has human helpers, who work to gauge whether the digital assistant’s interpretation of requests lines up with what the person said. The recordings they review lack personally identifiable information and are stored for six months tied to a random identifier, according to an Apple security white paper. After that, the data is stripped of its random identification information but may be stored for longer periods to improve Siri’s voice recognition.

At Google, some reviewers can access some audio snippets from its Assistant to help train and improve the product, but it’s not associated with any personally identifiable information and the audio is distorted, the company says.

A recent Amazon job posting, seeking a quality assurance manager for Alexa Data Services in Bucharest, describes the role humans play: “Every day she [Alexa] listens to thousands of people talking to her about different topics and different languages, and she needs our help to make sense of it all.” The want ad continues: “This is big data handling like you’ve never seen it. We’re creating, labeling, curating and analyzing vast quantities of speech on a daily basis.”

Amazon’s review process for speech data begins when Alexa pulls a random, small sampling of customer voice recordings and sends the audio files to the far-flung employees and contractors, according to a person familiar with the program’s design.

The Echo Spot. (Photographer: Daniel Berman/Bloomberg)

Some Alexa reviewers are tasked with transcribing users’ commands, comparing the recordings to Alexa’s automated transcript, say, or annotating the interaction between user and machine. What did the person ask? Did Alexa provide an effective response?

Others note everything the speaker picks up, including background conversations—even when children are speaking. Sometimes listeners hear users discussing private details such as names or bank details; in such cases, they’re supposed to tick a dialog box denoting “critical data.” They then move on to the next audio file.

According to Amazon’s website, no audio is stored unless Echo detects the wake word or is activated by pressing a button. But sometimes Alexa appears to begin recording without any prompt at all, and the audio files start with a blaring television or unintelligible noise. Whether or not the activation is mistaken, the reviewers are required to transcribe it. One of the people said the auditors each transcribe as many as 100 recordings a day when Alexa receives no wake command or is triggered by accident.

In homes around the world, Echo owners frequently speculate about who might be listening, according to two of the reviewers. “Do you work for the NSA?” they ask. “Alexa, is someone else listening to us?”

— With assistance by Gerrit De Vynck, Mark Gurman, and Irina Vilcu

Source: https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio

Is GDPR the new hacker scare tactic?

GDPR in Europe

No one questions the good intent behind the EU’s General Data Protection Regulation (GDPR) legislation, or the need for companies to be more careful with the proprietary information they have about clients, patients, and other individuals they interact with regularly. While the provisions within the GDPR do help, they have also created new opportunities for hackers and identity thieves to exploit that data.

There’s no doubt that seeking to be fully GDPR compliant is more than just a good idea. Along the way, just make sure your organization doesn’t fall victim to one of the various scams that are surfacing. Let’s take a quick review of GDPR and then dive into the dirty tricks hackers have been playing.

Understanding the Basics of GDPR

The GDPR, which took effect in 2018, established a set of guidelines for managing the collection and storage of consumer and proprietary data. Much of it pertains to personal information provided by individuals to an entity.

That entity may be a banking institution, insurance company, investing service, or even a health care facility. The primary goal is to ensure adequate protections are in place so that an ill-intentioned third party can’t exploit the personal information of those organizations’ employees, clients, and patients.

The GDPR addresses key areas of data security:

  • Explicit consent to collect and maintain personal data
  • Notification in the event of a data breach
  • Dedicated data security personnel within the organization
  • Data encryption that protects personal information in the event of a breach
  • Access to personal information for review of accuracy (integrity), and to set limitations on the intended use

While there has been pushback about some of the provisions within the GDPR (especially the need for additional data security personnel outside of the usual IT team), many organizations have been eager to adopt the measures. After all, being GDPR compliant can decrease the risk of a breach and would prove helpful if lawsuits resulted after a breach.

GDPR and Appropriate Security

There is an ongoing discussion about what represents adequate and appropriate security in terms of GDPR compliance. To some degree, the exact approach to security will vary, based on the type of organization involved and the nature of the data that is collected and maintained.

Even so, there is some overlap that would apply in every case. Compliance involves identifying and reinforcing every point in the network where some type of intrusion could possibly take place. Using Artificial Intelligence technology to reinforce points of vulnerability while also monitoring them for possible cyberattacks is another element. Even having an escalation plan in place to handle a major data breach within a short period of time is something any organization could enact.

One point that is sometimes lost in the entire discussion about GDPR security is that the guidelines set minimum standards. Entities are free to go above and beyond in terms of protecting proprietary data like customer lists. Viewing compliance as the starting point and continuing to refine network security will serve a company well in the long run.

So What Have Hackers Been Doing Since the Launch of GDPR?

There’s no doubt that hackers and others with less than honorable intentions have been doing their best to work around the GDPR guidelines even as they use them to their advantage. Some news reports claim that GDPR has made it easier for hackers to gain access to data. So what exactly have these ethically challenged individuals concocted?

Here are some examples:

Introducing Reverse Ransomware

As far as we know, it’s not really called reverse ransomware but that seems to be a pretty good way to describe this evil little scheme. As a review, a ransomware attack is when a hacker gets into your system and encrypts data so you can’t see or use it. Only with the payment of a ransom, typically in untraceable Bitcoin or other cryptocurrencies, will the hacker make your data usable again.

The sad ending to the ransomware saga is that more times than not, the data is never released even if the ransom is paid.

But GDPR has provided the inspiration for the bad guys to put a sneaky spin on the data drama. In this case, they penetrate the network by whatever means available to collect the customer lists, etc., which the EU has worked so hard to protect with the new regulations.

The threat with this variation, however, is that the data will be released publicly, which would put the organization in immediate violation of GDPR and make it liable for what could be a hefty fine — one that is substantially larger than the ransom the criminals are demanding.

Of course, the hacker promises not to release the data if the hostage company pays a ransom and might even further promise to destroy the data afterward. If you believe they’ll actually do that, I’d like to introduce you to the Easter Bunny and Tooth Fairy.

The attacker has already demonstrated a strong amoral streak. What’s to stop them from demanding another payment a month down the road? If you guessed nothing, you’re right. But wait, there’s more.

Doing a Lot of Phishing

Many organizations have seen a continual flow of unsolicited emails offering to help them become GDPR compliant. These range from offering free consultations that can be conducted remotely to conducting online training sessions to explain GDPR and suggest ways to increase security.

Typically, this type of phishing scheme offers a way to remit payments for services in advance, with the understanding that the client pays a portion now and the rest later.

Unsurprisingly, anyone who clicks on the link may lose more than whatever payment is rendered. Wherever the individual lands, the site is likely to be infected with spyware or worse. And if the email is forwarded throughout an organization or outside of it? The infection spreads.

I believe we need to be savvier with emails. That means training employees to never click on links in unsolicited emails, and to report suspicious emails to the security team at once.

What Can You Do?

As you can see, GDPR has provided a variety of crime opportunities for an enterprising hacker. These are just two examples of how they use GDPR for profit at the expense of hardworking business owners. The best first step when confronted with any of these types of threats is to not act on it. Instead, forward it to an agency that can properly evaluate the communication.

At the risk of sounding like Captain Obvious, have you done everything possible to fortify your network against advanced threats? Here are the basic preventive steps:

  1. Web security software: The first line of defense is a firewall (updated regularly of course) that prowls the perimeter, looking to prevent any outside threat’s attempt to penetrate. In addition, be sure to implement network security software that detects malicious network activity resulting from a threat that manages to bypass your perimeter controls. It used to be that you could survive with a haphazard philosophy towards security, but those days are long gone. Get good security software and put it to work.
  2. Encrypt that data: While the firewall and security software protect a network from outside penetration attempts, your data doesn’t always stay at home safe and sound. Any time a remote worker connects back to your network or an employee on premises ventures out to the open Internet, data is at risk. That’s why a virtual private network (VPN) should be a mandatory preventive security measure.

It’s a simple but strong idea. Using military-grade protocols, a properly configured VPN service encrypts the flow of data between a network device and the Internet or between a remote device and the company network. The big idea here is that even if a hacker manages to siphon off data, they will be greeted with an indecipherable mess that would take the world’s strongest computers working in unison a few billion years to crack. They’ll probably move on to an easier target.
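That “indecipherable mess” is just authenticated symmetric encryption at work. The small illustration below uses the third-party Python cryptography package (Fernet, AES plus an integrity check) to show what an eavesdropper would actually capture; it’s a generic sketch, not any particular VPN protocol:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in a VPN, a session key is negotiated during the handshake
    channel = Fernet(key)

    payload = b"quarterly customer list - do not leak"
    ciphertext = channel.encrypt(payload)

    print(ciphertext)                    # what a hacker siphoning off traffic would see
    print(channel.decrypt(ciphertext))   # only the key holder recovers the original bytes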

And while a VPN should be a frontline tool to combat hackers, there’s something else that might even be more important.

  3. Education and Training: Through ignorance or inattention, employees can be the biggest threat to cybersecurity. It’s not enough to simply sit them down when you hire them and warn of dire consequences if they let malware in the building. Owners need a thorough, ongoing education program related to online security that emphasizes its importance as being only slightly below breathing.

The Bottom Line

The GDPR does not have to be a stumbling block for you or an opportunity for a hacker. Stay proactive with your security measures and keep your antenna tuned for signs of trouble.

Source: https://betanews.com/2019/03/29/is-gdpr-the-new-hacker-scare-tactic/

Ad IDs Behaving Badly

The Ad ID

Persistent identifiers are the bread and butter of the online tracking industry. They allow companies to learn the websites that you visit and the apps that you use, including what you do within those apps. A persistent identifier is just a unique number that is used to either identify you or your device. Your Social Security Number and phone number are examples of persistent identifiers used in real life; cookies use persistent identifiers to identify you across websites.

On your mobile device, there are many different types of persistent identifiers that are used by app developers and third parties contacted by those apps. For example, one app might send an advertising network your device’s serial number. When a different app on your same phone sends that same advertising network your device’s serial number, that advertising network now knows that you use both of these apps, and can use that information to profile you. This sort of profiling is what is meant by “behavioral advertising.” That is, they track your behaviors so that they can infer your interests from those behaviors, and then send you ads targeted to those inferred interests.

On the web, if you don’t want to be tracked in this manner, you can periodically clear your cookies or configure your browser to simply not accept cookies (though this breaks a lot of the web, given that there are many other uses for cookies beyond tracking). Clearing your cookies resets all of the persistent identifiers, which means that new persistent identifiers will be sent to third parties, making it more difficult for them to associate your future online activities with the previous profile they had constructed.

Regarding the persistent identifiers used by mobile apps, up until a few years ago there was no way of doing the equivalent of clearing your cookies: many of the persistent identifiers used to track your mobile app activities were based in hardware, such as the device’s serial number, IMEI, WiFi MAC address, SIM card serial number, etc. Many apps used (and still use) the Android ID for tracking purposes, which, while not based in hardware, can only be reset by performing a factory reset on the device and deleting all of its data. Thus, there wasn’t an easy way for users to do the equivalent of clearing their cookies.

However, this changed in 2013 with the creation of the “ad ID”: both Android and iOS unveiled a new persistent identifier based in software that provides the user with privacy controls to reset that identifier at will (similar to clearing cookies).

Of course, being able to reset the ad identifier is only a good privacy-preserving solution if it is the only identifier being collected from the device. Imagine the following situation:

  1. An app sends both the ad ID and the IMEI (a non-resettable hardware-based identifier) to a data broker.
  2. Concerned with her privacy, the user uses one of the above privacy settings panels to reset her phone’s ad ID.
  3. Later, when using a different app, the same data broker is sent the new ad ID alongside the IMEI.
  4. The data broker sees that while the ad IDs are different between these two transmissions, the IMEI is the same, and therefore they must have come from the same device. Knowing this, the data broker can then add the second transmission to the user’s existing profile.
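The linking step in that scenario is trivial for the broker: as long as any stable identifier rides along with the ad ID, records from before and after a reset can be joined back together. A minimal sketch of that join in Python, with made-up transmissions and illustrative field names:

    # Two transmissions from the same phone, before and after the user resets her ad ID.
    transmissions = [
        {"ad_id": "aaaa-1111", "imei": "356938035643809", "app": "weather"},
        {"ad_id": "bbbb-2222", "imei": "356938035643809", "app": "flashlight"},
    ]

    profiles = {}  # keyed by the non-resettable identifier, so the reset changes nothing
    for t in transmissions:
        profiles.setdefault(t["imei"], []).append(t)

    for imei, events in profiles.items():
        ad_ids = sorted({e["ad_id"] for e in events})
        apps = [e["app"] for e in events]
        print(f"device {imei}: ad IDs {ad_ids} linked; apps used: {apps}")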

In this case, sending a non-resettable identifier alongside the ad ID completely undermines the privacy-preserving properties of the ad ID: resetting it does not prevent tracking. For this reason, both iOS and Android have policies that prohibit developers from transmitting other identifiers alongside the ad ID. For example, in 2017, it was major news that Uber’s app had violated iOS App Store privacy guidelines by collecting non-resettable persistent identifiers. Tim Cook personally threatened to have the Uber app removed from the store. Similarly, Google’s Play Store policy says that the ad ID cannot be transmitted alongside other identifiers without users’ explicit consent, and that for advertising purposes, the ad ID is the only identifier that can be used:

Association with personally-identifiable information or other identifiers. The advertising identifier must not be connected to personally-identifiable information or associated with any persistent device identifier (for example: SSAID, MAC address, IMEI, etc.) without explicit consent of the user.

Abiding by the terms of use. The advertising identifier may only be used in accordance with these terms, including by any party that you may share it with in the course of your business. All apps uploaded or published to Google Play must use the advertising ID (when available on a device) in lieu of any other device identifiers for any advertising purposes.

https://play.google.com/about/monetization-ads/ads/ad-id/

Violations of Ad ID Policies

I examined the AppCensus database to check compliance with this policy. That is, are there apps violating this policy by transmitting the ad ID alongside other persistent identifiers to advertisers? When I performed this experiment last September, there were approximately 24k apps in our database that we had observed transmitting the ad ID. Of these, approximately 17k (i.e., ~70%) were transmitting the ad ID alongside other persistent identifiers. Judging by the data recipients of some of the most popular offenders, these identifiers are clearly being used for advertising purposes:

App Name | Installs | Data Types | Recipient
Clean Master – Antivirus, Cleaner & Booster | 1B | Ad ID + Android ID | t.appsflyer.com
Subway Surfers | 1B | Android ID | api.vungle.com
Flipboard: News For Our Time | 500M | Ad ID + Android ID | ad.flipboard.com
My Talking Tom | 500M | Ad ID + Android ID | m2m1.inner-active.mobi
Temple Run 2 | 500M | Ad ID + Android ID | live.chartboost.com
3D Bowling | 100M | Ad ID + Android ID + IMEI | ws.tapjoyads.com
8 Ball Pool | 100M | Ad ID + Android ID | ws.tapjoyads.com
Agar.io | 100M | Ad ID + Android ID | ws.tapjoyads.com
Angry Birds Classic | 100M | Android ID | ads.api.vungle.com
Audiobooks from Audible | 100M | Ad ID + Android ID | api.branch.io
Azar | 100M | Ad ID + Android ID | api.branch.io
B612 – Beauty & Filter Camera | 100M | Ad ID + Android ID | t.appsflyer.com
Banana Kong | 100M | Ad ID + Android ID | live.chartboost.com
Battery Doctor – Battery Life Saver & Battery Cooler | 100M | Ad ID + Android ID + IMEI | t.appsflyer.com
BeautyPlus – Easy Photo Editor & Selfie Camera | 100M | Ad ID + Android ID | t.appsflyer.com, live.chartboost.com
Bus Rush | 100M | Ad ID + Android ID | ads.api.vungle.com, ws.tapjoyads.com
CamScanner – Phone PDF Creator | 100M | Ad ID + Android ID + IMEI | t.appsflyer.com
Cheetah Keyboard – Emoji & Stickers Keyboard | 100M | Ad ID + Android ID | t.appsflyer.com
Cooking Fever | 100M | Ad ID + Android ID | ws.tapjoyads.com
Cut The Rope Full FREE | 100M | Ad ID + Android ID | ws.tapjoyads.com

This is just the top 20 most popular apps violating this policy, sorted alphabetically. All of the domains receiving the data in the right-most column are either advertising networks or companies otherwise involved in tracking users’ interactions with ads (i.e., to use Google’s language, “any advertising purposes”). In fact, as of today, there are over 18k distinct apps transmitting the ad ID alongside other persistent identifiers.
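Conceptually, the check behind these numbers is simple; the sketch below shows the kind of filter it boils down to, with a hypothetical Observation record standing in for the actual AppCensus data model, which is not reproduced here.

```kotlin
// Hypothetical sketch: flag apps observed sending the ad ID together with at
// least one other persistent identifier (the pattern the policy prohibits).
data class Observation(
    val appPackage: String,
    val host: String,                 // e.g. "t.appsflyer.com"
    val identifiers: Set<String>      // e.g. setOf("ad_id", "android_id", "imei")
)

fun flagViolations(observations: List<Observation>): Set<String> =
    observations
        .filter { "ad_id" in it.identifiers && it.identifiers.size > 1 }
        .map { it.appPackage }
        .toSet()
```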

In September, our research group reported just under 17k apps to Google that were transmitting the ad ID alongside other identifiers. The data we gave them included the data types being transmitted and a list of the recipient domains, which included some of the following companies involved in mobile advertising:

  • ad-mediation.tuanguwen.com
  • ad.adsrvr.org
  • ad.doubleclick.net
  • ad.lkqd.net
  • adc-ad-assets.adtilt.com
  • admarvel-d.openx.net
  • admediator.unityads.unity3d.com
  • adproxy.fyber.com
  • ads-roularta.adhese.com
  • ads-secure.videohub.tv
  • ads.adadapted.com
  • ads.adecosystems.net
  • ads.admarvel.com
  • ads.api.vungle.com
  • ads.flurry.com
  • ads.heyzap.com
  • ads.mopub.com
  • ads.nexage.com
  • ads.superawesome.tv
  • adtrack.king.com
  • adwatch.appodeal.com
  • amazon-adsystem.com
  • androidads23.adcolony.com
  • api.salmonads.com
  • app.adjust.com
  • init.supersonicads.com
  • live.chartboost.com
  • marketing-ssl.upsight-api.com
  • track.appsflyer.com
  • ws.tapjoyads.com

The majority of these have the word “ads” in the hostname. Looking at the traffic shows that they are being used either to place ads in apps or to track user engagement with ads.

It has been 5 months since we submitted that report, and we have not received anything from Google about whether they plan to address this pervasive problem. In the interim, more apps now appear to be violating Google’s policy. The problem with all of this is that Google is providing users with privacy controls (see above image), but those privacy controls don’t actually do anything because they only control the ad ID, and we’ve shown that in the vast majority of cases, other persistent identifiers are being collected by apps in addition to the ad ID.

Source: https://blog.appcensus.mobi/2019/02/14/ad-ids-behaving-badly/

Germany bans Facebook from combining user data without permission

Germany’s Federal Cartel Office, or Bundeskartellamt, on Thursday banned Facebook from combining user data from its various platforms such as WhatsApp and Instagram without explicit user permission.

The decision, which comes as the result of a nearly three-year antitrust investigation into Facebook’s data gathering practices, also bans the social media company from gleaning user data from third-party sites unless users voluntarily consent.

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Bundeskartellamt President Andreas Mundt said in a release. “In [the] future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.”

Mundt noted that combining user data from various sources “substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power.”

Experts agreed with the decision. “It is high time to regulate the internet giants effectively!” said Marc Al-Hames, general manager of German data protection technologies developer Cliqz GmbH. “Unregulated data capitalism inevitably creates unfair conditions.”

Al-Hames noted that apps like WhatsApp have become “indispensable for many young people,” who feel compelled to join if they want to be part of the social scene. “Social media create social pressure,” he said. “And Facebook exploits this mercilessly: Give me your data or you’re an outsider.”

He called the practice an abuse of dominant market position. “But that’s not all: Facebook monitors our activities regardless of whether we are a member of one of its networks or not. Even those who consciously renounce the social networks for the sake of privacy will still be spied on,” he said, adding that Cliqz and Ghostery stats show that “every fourth of our website visits is monitored by Facebook’s data collection technologies, so-called trackers.”

The Bundeskartellamt’s decision will prevent Facebook from collecting and using data without restriction. “Voluntary consent means that the use of Facebook’s services must [now] be subject to the users’ consent to their data being collected and combined in this way,” said Mundt. “If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

The ban drew support and calls for it to be expanded to other companies.

“This latest move by Germany’s competition regulator is welcome,” said Morten Brøgger, CEO of secure collaboration platform Wire. “Compromising user privacy for profit is a risk no exec should be willing to take.”

Brøgger contends that Facebook has not fully understood digital privacy’s importance. “From emails suggesting cashing in on user data for money, to the infamous Cambridge Analytica scandal, the company is taking steps back in a world which is increasingly moving towards the protection of everyone’s data,” he said.

“The lesson here is that you cannot simply trust firms that rely on the exchange of data as their main offering,” Brøgger added, “and firms using Facebook-owned applications should have a rethink about the platforms they use to do business.”

Al-Hames said regulators shouldn’t stop with Facebook, which he called the number-two offender. “By far the most important data monopolist is Alphabet. With Google search, the Android operating system, the Play Store app sales platform and the Chrome browser, the internet giant collects data on virtually everyone in the Western world,” Al-Hames said. “And even those who want to break free by using alternative services stay trapped in Alphabet’s clutches: With a tracker reach of nearly 80 percent of all page loads, Alphabet probably knows more about them than their closest friends or relatives. When it comes to our data, the top priority of the market regulators shouldn’t be Facebook; it should be Alphabet!”

Source: https://www.scmagazine.com/home/network-security/germany-bans-facebook-from-combining-user-data-without-permission/

Apple tells iOS app developers to remove or properly disclose Glassbox screen recording code

Pedestrians pass in front of a billboard advertising Apple Inc. iPhone security during the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada, U.S., on Monday, Jan. 7, 2019. Apple made its presence felt at CES 2019 with a massive billboard highlighting the iPhone’s privacy features. (Photo: David Paul Morris/Bloomberg via Getty Images)

Apple is telling app developers to remove or properly disclose their use of analytics code that allows them to record how a user interacts with their iPhone apps — or face removal from the app store, TechCrunch can confirm.

In an email, an Apple spokesperson said: “Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity.”

“We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary,” the spokesperson added.

It follows an investigation by TechCrunch that revealed major companies, like Expedia, Hollister and Hotels.com, were using a third-party analytics tool to record every tap and swipe inside the app. We found that none of the apps we tested asked the user for permission, and none of the companies said in their privacy policies that they were recording a user’s app activity.

Even though sensitive data is supposed to be masked, some data — like passport numbers and credit card numbers — was leaking.

Glassbox is a cross-platform analytics tool that specializes in session replay technology. It allows companies to integrate its screen recording technology into their apps to replay how a user interacts with the apps. Glassbox says it provides the technology, among many reasons, to help reduce app error rates. But the company “doesn’t enforce its customers” to mention that they use Glassbox’s screen recording tools in their privacy policies.

But Apple expressly forbids apps that covertly collect data without a user’s permission.

TechCrunch began hearing on Thursday that app developers had already been notified that their apps had fallen afoul of Apple’s rules. One app developer was told by Apple to remove code that recorded app activities, citing the company’s app store guidelines.

“Your app uses analytics software to collect and send user or device data to a third party without the user’s consent. Apps must request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity,” Apple said in the email.

Apple gave the developer less than a day to remove the code and resubmit their app or the app would be removed from the app store, the email said.

When asked if Glassbox was aware of the app store removals, a spokesperson for Glassbox said that “the communication with Apple is through our customers.”

Glassbox is also available to Android app developers. Google did not immediately comment on whether it would also ban the screen recording code. Google Play also expressly prohibits apps from secretly collecting device usage data. “Apps must not hide or cloak tracking behavior or attempt to mislead users about such functionality,” the developer rules state. We’ll update if and when we hear back.

It’s the latest privacy debacle that has forced Apple to wade in to protect its customers after apps were caught misbehaving.

Last week, TechCrunch reported that Apple banned Facebook’s “research” app, which the social media giant had paid teenagers to install so that it could collect all of their data.

It followed another investigation by TechCrunch that revealed Facebook misused its Apple-issued enterprise developer certificate to build and provide apps for consumers outside Apple’s App Store. Apple temporarily revoked Facebook’s enterprise developer certificate, knocking all of the company’s internal iOS apps offline for close to a day.

Source: https://techcrunch.com/2019/02/07/apple-glassbox-apps/


Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

Dozens of companies use smartphone locations to help advertisers and even hedge funds. They say it’s anonymous, but the data shows how personal it is.

The millions of dots on the map trace highways, side streets and bike trails — each one following the path of an anonymous cellphone user.

One path tracks someone from a home outside Newark to a nearby Planned Parenthood, remaining there for more than an hour. Another represents a person who travels with the mayor of New York during the day and returns to Long Island at night.

Yet another leaves a house in upstate New York at 7 a.m. and travels to a middle school 14 miles away, staying until late afternoon each school day. Only one person makes that trip: Lisa Magrin, a 46-year-old math teacher. Her smartphone goes with her.

An app on the device gathered her location information, which was then sold without her knowledge. It recorded her whereabouts as often as every two seconds, according to a database of more than a million phones in the New York area that was reviewed by The New York Times. While Ms. Magrin’s identity was not disclosed in those records, The Times was able to easily connect her to that dot.

The app tracked her as she went to a Weight Watchers meeting and to her dermatologist’s office for a minor procedure. It followed her hiking with her dog and staying at her ex-boyfriend’s home, information she found disturbing.

“It’s the thought of people finding out those intimate details that you don’t want people to know,” said Ms. Magrin, who allowed The Times to review her location data.

Like many consumers, Ms. Magrin knew that apps could track people’s movements. But as smartphones have become ubiquitous and technology more accurate, an industry of snooping on people’s daily habits has spread and grown more intrusive.

 

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.

[Learn how to stop apps from tracking your location.]

These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.

Businesses say their interest is in the patterns, not the identities, that the data reveals about consumers. They note that the information apps collect is tied not to someone’s name or phone number but to a unique ID. But those with access to the raw data — including employees or clients — could still identify a person without consent. They could follow someone they knew, by pinpointing a phone that regularly spent time at that person’s home address. Or, working in reverse, they could attach a name to an anonymous dot, by seeing where the device spent nights and using public records to figure out who lived there.

Many location companies say that when phone users enable location services, their data is fair game. But, The Times found, the explanations people see when prompted to give permission are often incomplete or misleading. An app may tell users that granting access to their location will help them get traffic information, but not mention that the data will be shared and sold. That disclosure is often buried in a vague privacy policy.

“Location information can reveal some of the most intimate details of a person’s life — whether you’ve visited a psychiatrist, whether you went to an A.A. meeting, who you might date,” said Senator Ron Wyden, Democrat of Oregon, who has proposed bills to limit the collection and sale of such data, which are largely unregulated in the United States.

“It’s not right to have consumers kept in the dark about how their data is sold and shared and then leave them unable to do anything about it,” he added.

Mobile Surveillance Devices

After Elise Lee, a nurse in Manhattan, saw that her device had been tracked to the main operating room at the hospital where she works, she expressed concern about her privacy and that of her patients.

“It’s very scary,” said Ms. Lee, who allowed The Times to examine her location history in the data set it reviewed. “It feels like someone is following me, personally.”

The mobile location industry began as a way to customize apps and target ads for nearby businesses, but it has morphed into a data collection and analysis machine.

Retailers look to tracking companies to tell them about their own customers and their competitors’. For a web seminar last year, Elina Greenstein, an executive at the location company GroundTruth, mapped out the path of a hypothetical consumer from home to work to show potential clients how tracking could reveal a person’s preferences. For example, someone may search online for healthy recipes, but GroundTruth can see that the person often eats at fast-food restaurants.

“We look to understand who a person is, based on where they’ve been and where they’re going, in order to influence what they’re going to do next,” Ms. Greenstein said.

Financial firms can use the information to make investment decisions before a company reports earnings — seeing, for example, if more people are working on a factory floor, or going to a retailer’s stores.

 

Health care facilities are among the more enticing but troubling areas for tracking, as Ms. Lee’s reaction demonstrated. Tell All Digital, a Long Island advertising firm that is a client of a location company, says it runs ad campaigns for personal injury lawyers targeting people anonymously in emergency rooms.

“The book ‘1984,’ we’re kind of living it in a lot of ways,” said Bill Kakis, a managing partner at Tell All.

Jails, schools, a military base and a nuclear power plant — even crime scenes — appeared in the data set The Times reviewed. One person, perhaps a detective, arrived at the site of a late-night homicide in Manhattan, then spent time at a nearby hospital, returning repeatedly to the local police station.

Two location firms, Fysical and SafeGraph, mapped people attending the 2017 presidential inauguration. On Fysical’s map, a bright red box near the Capitol steps indicated the general location of President Trump and those around him, cellphones pinging away. Fysical’s chief executive said in an email that the data it used was anonymous. SafeGraph did not respond to requests for comment.

 

More than 1,000 popular apps contain location-sharing code from such companies, according to 2018 data from MightySignal, a mobile analysis firm. Google’s Android system was found to have about 1,200 apps with such code, compared with about 200 on Apple’s iOS.

The most prolific company was Reveal Mobile, based in North Carolina, which had location-gathering code in more than 500 apps, including many that provide local news. A Reveal spokesman said that the popularity of its code showed that it helped app developers make ad money and consumers get free services.

To evaluate location-sharing practices, The Times tested 20 apps, most of which had been flagged by researchers and industry insiders as potentially sharing the data. Together, 17 of the apps sent exact latitude and longitude to about 70 businesses. Precise location data from one app, WeatherBug on iOS, was received by 40 companies. When contacted by The Times, some of the companies that received that data described it as “unsolicited” or “inappropriate.”

WeatherBug, owned by GroundTruth, asks users’ permission to collect their location and tells them the information will be used to personalize ads. GroundTruth said that it typically sent the data to ad companies it worked with, but that if they didn’t want the information they could ask to stop receiving it.

The Times also identified more than 25 other companies that have said in marketing materials or interviews that they sell location data or services, including targeted advertising.

[Read more about how The Times analyzed location tracking companies.]

The spread of this information raises questions about how securely it is handled and whether it is vulnerable to hacking, said Serge Egelman, a computer security and privacy researcher affiliated with the University of California, Berkeley.

“There are really no consequences” for companies that don’t protect the data, he said, “other than bad press that gets forgotten about.”

A Question of Awareness

Companies that use location data say that people agree to share their information in exchange for customized services, rewards and discounts. Ms. Magrin, the teacher, noted that she liked that tracking technology let her record her jogging routes.

Brian Wong, chief executive of Kiip, a mobile ad firm that has also sold anonymous data from some of the apps it works with, says users give apps permission to use and share their data. “You are receiving these services for free because advertisers are helping monetize and pay for it,” he said, adding, “You would have to be pretty oblivious if you are not aware that this is going on.”

But Ms. Lee, the nurse, had a different view. “I guess that’s what they have to tell themselves,” she said of the companies. “But come on.”

Ms. Lee had given apps on her iPhone access to her location only for certain purposes — helping her find parking spaces, sending her weather alerts — and only if they did not indicate that the information would be used for anything else, she said. Ms. Magrin had allowed about a dozen apps on her Android phone access to her whereabouts for services like traffic notifications.

But it is easy to share information without realizing it. Of the 17 apps that The Times saw sending precise location data, just three on iOS and one on Android told users in a prompt during the permission process that the information could be used for advertising. Only one app, GasBuddy, which identifies nearby gas stations, indicated that data could also be shared to “analyze industry trends.”

More typical was theScore, a sports app: When prompting users to grant access to their location, it said the data would help “recommend local teams and players that are relevant to you.” The app passed precise coordinates to 16 advertising and location companies.

A spokesman for theScore said that the language in the prompt was intended only as a “quick introduction to certain key product features” and that the full uses of the data were described in the app’s privacy policy.

The Weather Channel app, owned by an IBM subsidiary, told users that sharing their locations would let them get personalized local weather reports. IBM said the subsidiary, the Weather Company, discussed other uses in its privacy policy and in a separate “privacy settings” section of the app. Information on advertising was included there, but a part of the app called “location settings” made no mention of it.

The app did not explicitly disclose that the company had also analyzed the data for hedge funds — a pilot program that was promoted on the company’s website. An IBM spokesman said the pilot had ended. (IBM updated the app’s privacy policy on Dec. 5, after queries from The Times, to say that it might share aggregated location data for commercial purposes such as analyzing foot traffic.)

Even industry insiders acknowledge that many people either don’t read those policies or may not fully understand their opaque language. Policies for apps that funnel location information to help investment firms, for instance, have said the data is used for market analysis, or simply shared for business purposes.

“Most people don’t know what’s going on,” said Emmett Kilduff, the chief executive of Eagle Alpha, which sells data to financial firms and hedge funds. Mr. Kilduff said responsibility for complying with data-gathering regulations fell to the companies that collected it from people.

Many location companies say they voluntarily take steps to protect users’ privacy, but policies vary widely.

For example, Sense360, which focuses on the restaurant industry, says it scrambles data within a 1,000-foot square around the device’s approximate home location. Another company, Factual, says that it collects data from consumers at home, but that its database doesn’t contain their addresses.

Some companies say they delete the location data after using it to serve ads, some use it for ads and pass it along to data aggregation companies, and others keep the information for years.

Several people in the location business said that it would be relatively simple to figure out individual identities in this kind of data, but that they didn’t do it. Others suggested it would require so much effort that hackers wouldn’t bother.

It “would take an enormous amount of resources,” said Bill Daddi, a spokesman for Cuebiq, which analyzes anonymous location data to help retailers and others, and raised more than $27 million this year from investors including Goldman Sachs and Nasdaq Ventures. Nevertheless, Cuebiq encrypts its information, logs employee queries and sells aggregated analysis, he said.

There is no federal law limiting the collection or use of such data. Still, apps that ask for access to users’ locations, prompting them for permission while leaving out important details about how the data will be used, may run afoul of federal rules on deceptive business practices, said Maneesha Mithal, a privacy official at the Federal Trade Commission.

“You can’t cure a misleading just-in-time disclosure with information in a privacy policy,” Ms. Mithal said.

Following the Money

Apps form the backbone of this new location data economy.

The app developers can make money by directly selling their data, or by sharing it for location-based ads, which command a premium. Location data companies pay half a cent to two cents per user per month, according to offer letters to app makers reviewed by The Times.

Targeted advertising is by far the most common use of the information.

Google and Facebook, which dominate the mobile ad market, also lead in location-based advertising. Both companies collect the data from their own apps. They say they don’t sell it but keep it for themselves to personalize their services, sell targeted ads across the internet and track whether the ads lead to sales at brick-and-mortar stores. Google, which also receives precise location information from apps that use its ad services, said it modified that data to make it less exact.

Smaller companies compete for the rest of the market, including by selling data and analysis to financial institutions. This segment of the industry is small but growing, expected to reach about $250 million a year by 2020, according to the market research firm Opimas.

Apple and Google have a financial interest in keeping developers happy, but both have taken steps to limit location data collection. In the most recent version of Android, apps that are not in use can collect locations “a few times an hour,” instead of continuously.

Apple has been stricter, for example requiring apps to justify collecting location details in pop-up messages. But Apple’s instructions for writing these pop-ups do not mention advertising or data sale, only features like getting “estimated travel times.”

A spokesman said the company mandates that developers use the data only to provide a service directly relevant to the app, or to serve advertising that meets Apple’s guidelines.

Apple recently shelved plans that industry insiders say would have significantly curtailed location collection. Last year, the company said an upcoming version of iOS would show a blue bar onscreen whenever an app not in use was gaining access to location data.

The discussion served as a “warning shot” to people in the location industry, David Shim, chief executive of the location company Placed, said at an industry event last year.

After examining maps showing the locations extracted by their apps, Ms. Lee, the nurse, and Ms. Magrin, the teacher, immediately limited what data those apps could get. Ms. Lee said she told the other operating-room nurses to do the same.

“I went through all their phones and just told them: ‘You have to turn this off. You have to delete this,’” Ms. Lee said. “Nobody knew.”

Source: https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html?action=click&module=Top%20Stories&pgtype=Homepage

Captions from the original article’s maps and images:

  • Map labels from the data set include a Planned Parenthood clinic, a nuclear plant and a megachurch.
  • Records show a device entering Gracie Mansion, the mayor’s residence, before traveling to a Y.M.C.A. in Brooklyn that the mayor frequents. It travels to an event on Staten Island that the mayor attended. Later, it returns to a home on Long Island.
  • An app on Lisa Magrin’s cellphone collected her location information, which was then shared with other companies. The data revealed her daily habits, including hikes with her dog, Lulu. (Nathaniel Brooks for The New York Times)
  • A notice that Android users saw when theScore, a sports app, asked for access to their location data.
  • The Weather Channel app showed iPhone users this message when it first asked for their location data.
  • In the data set reviewed by The Times, phone locations are recorded in sensitive areas including the Indian Point nuclear plant near New York City. (Michael H. Keller; satellite imagery by Mapbox and DigitalGlobe)


Delete All Your Apps

It’s not just Facebook: Android and iOS’s App Stores have incentivized an app economy where free apps make money by selling your personal data and location history to advertisers.


Monday morning, the New York Times published a horrifying investigation in which the publication reviewed a huge, “anonymized” dataset of smartphone location data from a third-party vendor, de-anonymized it, and tracked ordinary people through their day-to-day lives—including sensitive stops at places like Planned Parenthood, their homes, and their offices.

The article lays bare what the privacy-conscious have suspected for years: the apps on your smartphone are tracking you, and for all the talk about “anonymization” and claims that the data is collected only in aggregate, our habits are so specific—and often unique—that anonymized identifiers can often be reverse engineered and used to track individual people.

Along with the investigation, the New York Times published a guide to managing and restricting location data on specific apps. This is easier on iOS than it is on Android, and it is something everyone should be doing periodically. But the main takeaway, I think, is not just that we need to be more scrupulous about our location data settings. It’s that we need to be much, much more restrictive about the apps that we install on our phones.

Everywhere we go, we are carrying a device that not only has a GPS chip designed to track our location, but an internet or LTE connection designed to transmit that information to third parties, many of whom have monetized that data. Rough location data can be gleaned by tracking the cell phone towers your phone connects to, and the best way to guarantee privacy would be to have a dumb phone, an iPod Touch, or no phone at all. But for most people, that’s not terribly practical, and so I think it’s worth taking a look at the types of apps that we have installed on our phone, and their value propositions—both to us, and to their developers.


The early design decisions of Apple, Google, and app developers continue to haunt us all more than a decade later. Broadly and historically speaking, we have been willing to spend hundreds of dollars on a smartphone, but balk at the idea of spending $.99 on an app. Our reluctance to pay any money up front for apps has come at an unknowable but massive cost to our privacy. Even a lowly flashlight or fart noise app is not free to make, and the overwhelming majority of “free” apps are not altruistic—they are designed to make money, which usually means by harvesting and reselling your data.

A good question to ask yourself when evaluating your apps is “why does this app exist?” If it exists because it costs money to buy, or because it’s the free app extension of a service that costs money, then it is more likely to be able to sustain itself without harvesting and selling your data. If it’s a free app that exists for the sole purpose of amassing a large amount of users, then chances are it has been monetized by selling data to advertisers.

The New York Times noted that much of the data used in its investigation came from free weather and sports scores apps that turned around and sold their users’ data; hundreds of free games, flashlight apps, and podcast apps ask for permissions they don’t actually need for the express purpose of monetizing your data.

Even apps that aren’t blatantly sketchy data grabs often function that way: Facebook and its suite of apps (Instagram, Messenger, etc.) collect loads of data about you, both from your behavior in the apps themselves and directly from your phone (Facebook went to great lengths to hide the fact that its Android app was collecting call log data). And Android itself is a smartphone ecosystem that also serves as yet another data collection apparatus for Google. Unless you feel particularly inclined to read privacy policies that are dozens of pages long for every app you download, who knows what information bespoke apps for news, podcasts, airlines, ticket buying, travel, and social media are collecting and selling.

This problem is getting worse, not better: Facebook made WhatsApp, an app that managed to be profitable with a $1 per year subscription fee, into a “free” service because it believed it could make more money with an advertising-based business model.

What this means is that the dominant business model on our smartphones is one that’s predicated on monetizing you, and only through paying obsessive attention to your app permissions and seeking paid alternatives can you hope to minimize these impacts on yourself. If this bothers you, your only options are to get rid of your smartphone altogether or to rethink what apps you want installed on your phone and act accordingly.

It might be time to get rid of all the free single-use apps that are essentially re-sized websites. Generally speaking, it is safer, privacy-wise, to access your data in a browser, even if it’s more inconvenient. On second thought, it may be time to delete all your apps and start over, using only apps that respect your privacy and that have sustainable business models that don’t rely on monetizing your data. On iOS, this might mean using more of Apple’s first-party apps, even if they don’t work as well as free third-party versions.

Source: https://motherboard.vice.com/en_us/article/j5zap3/delete-all-your-apps

Smart firewall iPhone app promises to put your privacy before profits


For weeks, a small team of security researchers and developers have been putting the finishing touches on a new privacy app, which its founder says can nix some of the hidden threats that mobile users face — often without realizing.

Phones track your location, apps siphon off your data, and aggressive ads try to grab your attention. Your phone has long been a beacon of data, broadcasting to ad networks and data trackers, trying to build up profiles on you wherever you go to sell you things you’ll never want.

Will Strafach knows that all too well. A security researcher and former iPhone jailbreaker, Strafach now spends his time digging into apps for insecure, suspicious and unethical behavior. Last year, he found AccuWeather was secretly sending precise location data without a user’s permission. And just a few months ago, he revealed a list of dozens of apps that were sneakily siphoning off their users’ tracking data to data monetization firms without those users’ explicit consent.

Now his team — including co-founder Joshua Hill and chief operating officer Chirayu Patel — will soon bake those findings into its new “smart firewall” app, which he says will filter and block traffic that invades a user’s privacy.

“We’re in a ‘wild west’ of data collection,” he told me in a call last week, “where data is flying out from your phone under the radar — not because people don’t care, but because there’s no real visibility and people don’t know it’s happening.”

At its heart, the Guardian Mobile Firewall — currently in a closed beta — funnels all of an iPhone or iPad’s internet traffic through an encrypted virtual private network (VPN) tunnel to Guardian’s servers, outsourcing the filtering and enforcement to the cloud to reduce the performance and battery impact on the device. It means the Guardian app can near-instantly spot if another app is secretly sending a device’s tracking data to a tracking firm, warning the user or giving them the option to stop it in its tracks. The aim isn’t to prevent a potentially dodgy app from working properly, but to give users awareness of and choice over what data leaves their device.
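As a rough illustration of that architecture (not Guardian’s actual implementation, which has not been published), a cloud-side filter of this kind amounts to checking each outgoing connection’s destination host against a curated list of known tracker endpoints and deciding whether to allow it or warn the user:

```kotlin
// Generic illustration of VPN-side traffic filtering; the hostnames are taken
// from tracker domains mentioned elsewhere in this document, and the logic is
// a deliberately simplified stand-in, not Guardian's code.
sealed class Verdict {
    object Allow : Verdict()
    data class Warn(val host: String) : Verdict()
}

val knownTrackerHosts = setOf("t.appsflyer.com", "ws.tapjoyads.com", "ads.mopub.com")

fun inspect(destinationHost: String): Verdict =
    if (destinationHost in knownTrackerHosts) Verdict.Warn(destinationHost) else Verdict.Allow

fun main() {
    val verdict = inspect("ws.tapjoyads.com")
    if (verdict is Verdict.Warn) println("flagged tracker: ${verdict.host}")
}
```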

Strafach described the app as “like a junk email filter for your web traffic,” and you can see in the app’s dedicated tabs what data gets blocked and why. A future version plans to let users modify or block their precise geolocation from being sent to certain servers. Strafach said the app will later tell a user how many times an app accesses device data, like their contact lists.

But unlike other ad and tracker blockers, the app doesn’t use overkill third-party lists that prevent apps from working properly. Instead, it takes a tried-and-tested approach drawn from the team’s own research. The team periodically scans a range of apps in the App Store to identify problematic and privacy-invasive behavior, and those findings are fed into the app to improve it over time. If an app is known to have security issues, the Guardian app can alert a user to the threat. The team plans to continue building machine learning models that help to identify new threats — including so-called “aggressive ads” — that hijack your mobile browser and redirect you to dodgy pages or apps.

Screenshots of the Guardian app, set to be released in December (Image: supplied)

Strafach said that the app will “err on the side of usability” by warning users first — with the option of blocking it. A planned future option will allow users to go into a higher, more restrictive privacy level — “Lockdown mode” — which will deny bad traffic by default until the user intervenes.

What sets the Guardian app apart from its distant competitors is its refusal to collect data on its own users.

Whenever you use a VPN — to evade censorship, site blocks or surveillance — you have to put more trust in the VPN server to keep all of your internet traffic safe than in your internet provider or cell carrier. Strafach said that neither he nor the team wants to know who uses the app. The less data they have, the less they know, and the safer and more private its users are.

“We don’t want to collect data that we don’t need,” said Strafach. “We consider data a liability. Our rule is to collect as little as possible. We don’t even use Google Analytics or any kind of tracking in the app — or even on our site, out of principle.”

The app works by generating a random set of VPN credentials to connect to the cloud. The connection uses IPSec (IKEv2) with a strong cipher suite, he said. In other words, the Guardian app isn’t a creepy VPN app like Facebook’s Onavo, which Apple pulled from the App Store for collecting data it shouldn’t have. “On the server side, we’ll only see a random device identifier, because we don’t have accounts so you can’t be attributable to your traffic,” he said.

“We don’t even want to say ‘you can trust us not to do anything,’ because we don’t want to be in a position that we have to be trusted,” he said. “We really just want to run our business the old fashioned way. We want people to pay for our product and we provide them service, and we don’t want their data or send them marketing.”

“It’s a very hard line,” he said. “We would shut down before we even have to face that kind of decision. It would go against our core principles.”

I’ve been using the app for the past week. It’s surprisingly easy to use. For a semi-advanced user, it can feel unnatural to flip a virtual switch on the app’s main screen and let it run its course. Anyone who cares about their security and privacy is acutely aware of their “opsec” — one wrong move and it can blow your anonymity shield wide open. Overall, the app works well. It’s non-intrusive and doesn’t interfere, but with the “VPN” icon lit up at the top of the screen, there’s a constant reminder that the app is working in the background.

It’s impressive how much the team has kept privacy and anonymity so front of mind throughout the app’s design process — even down to allowing users to pay by Apple Pay and through in-app purchases so that no billing information is ever exchanged.

The app doesn’t appear to slow down the connection when browsing the web or scrolling through Twitter or Facebook, whether on LTE or a Wi-Fi network. Even streaming a medium-quality live video stream didn’t cause any issues. But it’s still early days, and even though the closed beta has a few hundred users — myself included — as with any bandwidth-intensive cloud service, the quality could fluctuate over time. Strafach said that the backend infrastructure is scalable and can plug-and-play with almost any cloud service in the case of outages.

In its pre-launch state, the company is financially healthy, having scored an initial round of seed funding to support building the team, launching the app, and maintaining its cloud infrastructure. Steve Russell, an experienced investor and board member, said he was “impressed” with the team’s vision and technology.

“Quality solutions for mobile security and privacy are desperately needed, and Guardian distinguishes itself both in its uniqueness and its effectiveness,” said Russell in an email.

He added that the team is “world class,” and has built a product that’s “sorely needed.”

Strafach said the team is running financially conservatively ahead of its public reveal, but that the startup is looking to raise a Series A to support its anticipated growth — but also the team’s research that feeds the app with new data. “There’s a lot we want to look into and we want to put out more reports on quite a few different topics,” he said.

The more threats the team finds, the better the app will become.

The app’s early adopter program is open, including its premium options. The app is expected to launch fully in December.

Source: https://techcrunch.com/2018/10/24/smart-firewall-guardian-iphone-app-privacy-before-profits/