Archiv der Kategorie: Security

Important cybersecurity terms even your non-tech employees need to know

Cyberattacks continue to grow in scale, ferocity, and audacity. No one is safe. Large corporations are a target because hackers see the potential payoff as huge. Small companies are vulnerable too because they don’t have the financial muscle needed to invest in sophisticated security systems. Now more than ever, businesses must do whatever it takes to keep their data and tech infrastructure safe. If non-techie employees understand key cybersecurity terms, they’ll have a much better chance of making the right security decisions. There are thousands of cybersecurity terms but no one (techie or otherwise) is under obligation to know all of them. Some terms are, however, more important than others and these are the ones all staff must be aware of.

Note that knowing these cybersecurity terms is more than just mastering the definitions. Rather, it’s being able to understand the patterns and behavior that define them.


1. Adware

Adware is a set of programs installed without explicit user authorization that seek to inundate the user with ads. The primary aim of adware is to redirect search requests and URL clicks to advertising websites and data collection portals.

While adware mainly aims to advertise a product and monitor user browsing activity, it also slows down browsing speed, page-load speed, device performance, eats into metered data, and may even download malicious applications in the background.

2. Botnet


Botnets are simply a collection of several (and they can number in the millions) Internet-enabled devices such as computers, smartphones, servers, routers, and IoT devices that are under a central command and control.

Botnets are infectious and can be propagated across multiple devices. Botnet is a portmanteau of “robot” and “network.” Some of the largest and most dramatic cyberattacks in recent times have involved botnets, including the destructive Mirai malware that infected IoT devices.

3. Cyber-espionage

When you hear the term espionage, what may first come to mind is the spycraft of a bygone era. But espionage is as alive today as it was a century ago. The difference is that thanks to the proliferation of information technology and the ubiquity of the Internet, espionage can now be executed electronically and remotely.

Cyber-espionage is the gathering of confidential information online via illegal and unauthorized means. As you’d expect, the primary target of cyber-espionage is governments as well as large corporations. China has been in the news in this regard though other world powers such as the United States and Russia have been accused of doing the same at some point.


4. Defense-in-depth

Defense-in-depth is a cybersecurity strategy that involves creating multiple layers of protection in order to protect the organization and its assets from attack. It’s born out of a realization that even with the best and most sophisticated technical controls, no security is ever 100 percent impenetrable.

With defense-in-depth, if one security control fails to prevent unauthorized access, the intruder will run into a new barrier. It’s unlikely that many hackers will have the knowledge and skills to surmount these multiple barriers.

5. End-to-end encryption

End-to-end encryption is a means of securing and protecting data that prevents unauthorized third parties from accessing it while at rest or in transit. For instance, when you shop online and pay with your credit card, your computer or smartphone has to relay the credit card number you provide to the merchant for authentication and payment processing.

If your card details fall into the wrong hands, someone could use them to make purchases without your permission. By encrypting the data in transit, you make it much harder for third parties to access your confidential information.
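The protective effect can be seen with a deliberately simplified sketch. The XOR cipher below is only a toy stand-in for real cryptography (production systems use vetted ciphers such as AES, typically delivered via TLS); it merely illustrates that intercepted bytes are useless without the shared key:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same key again decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                       # shared secret between the two endpoints
card_number = b"4111 1111 1111 1111"
ciphertext = xor_cipher(card_number, key)  # what an eavesdropper would intercept

# Without the key the ciphertext is gibberish; with it, decryption is exact:
print(xor_cipher(ciphertext, key))  # b'4111 1111 1111 1111'
```

The same principle, with far stronger mathematics, underlies the encryption that protects your card number on its way to the merchant.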

6. Firewalls

A firewall is a defense mechanism that is meant to keep the bad guys from penetrating your network. It’s a virtual wall that protects servers and workstations from internal and external attack. It keeps tabs on access requests, user activity, and network traffic patterns in order to determine who can and cannot be allowed to interact with the network.
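The "who can and cannot interact" decision boils down to rule matching. Below is a minimal, hypothetical packet-filter sketch (real firewalls such as iptables or pf are vastly more capable) showing the common first-match-wins evaluation with a default deny:

```python
import ipaddress

# Illustrative rules: (action, source network, destination port or None for any).
# Rules are checked top-down; the first match wins.
RULES = [
    ("allow", ipaddress.ip_network("10.0.0.0/8"), 443),   # internal HTTPS
    ("allow", ipaddress.ip_network("10.0.0.0/8"), 22),    # internal SSH
    ("deny",  ipaddress.ip_network("0.0.0.0/0"), None),   # everything else
]

def check(src_ip: str, dst_port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for action, network, port in RULES:
        if addr in network and (port is None or port == dst_port):
            return action
    return "deny"  # default-deny if no rule matched

print(check("10.1.2.3", 443))     # allow: internal host, permitted port
print(check("203.0.113.9", 443))  # deny: external source
```

The default-deny at the bottom embodies the firewall mindset: anything not explicitly permitted is blocked.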

7. Hashing

Hashing is an algorithm that transforms an input of any length (such as a plain-text password) into a fixed-length string of seemingly random characters. Unlike encryption, hashing is one-way: the original password cannot practically be recovered from its hash. That way, if an intruder somehow got through to the password file or table, all they would see is text that is useless to them.
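In practice, password hashing is done with a salted, deliberately slow function. The sketch below uses Python's standard-library PBKDF2 to illustrate the idea; the salt size and iteration count here are illustrative choices, not recommendations:

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # A random salt ensures identical passwords yield different hashes.
    salt = salt if salt is not None else os.urandom(16)
    # PBKDF2 iterates SHA-256 many times to slow down brute-force attempts.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # compare_digest prevents timing side channels during comparison.
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Only the salt and digest are stored; the password itself never needs to be written down anywhere.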

8. Identity theft

Identity theft is sometimes referred to as identity fraud. It’s the No. 1 reason why hackers seek to access confidential information and customer data, especially from organizations. An identity thief hopes to impersonate an individual by presenting the individual’s confidential records or authentication information as their own.

For example, an identity thief could steal credit card numbers, addresses, and email addresses then use that to fraudulently transact online, file for Social Security benefits, or submit an insurance claim.

9. Intrusion detection system (IDS)

It’s relatively uncommon for a cyberattack to be completely unprecedented or unknown in its form, pattern, and logic. From viruses to brute force attack, there are certain indicators that point to unusual activity. In addition, once your network is up and running, all network traffic and server activity will follow a relatively predictable pattern.

An IDS keeps tabs on network traffic, aiming to detect malicious, suspicious, or anomalous activity quickly, before too much damage is done, and to alert the network administrator. (A system that also actively blocks the malicious traffic is usually called an intrusion prevention system, or IPS.)
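A toy anomaly detector illustrates the principle: learn a baseline from normal traffic and raise an alert on large deviations. This sketch is purely illustrative and not how any particular IDS product works:

```python
from collections import deque
from statistics import mean, stdev

class SimpleIDS:
    def __init__(self, window=20, threshold=3.0):
        self.baseline = deque(maxlen=window)  # recent requests-per-second samples
        self.threshold = threshold            # alert if > threshold std-devs away

    def observe(self, requests_per_sec):
        alert = False
        if len(self.baseline) >= 2:
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            if sigma > 0 and abs(requests_per_sec - mu) > self.threshold * sigma:
                alert = True                  # anomalous spike: raise an alert
        if not alert:
            self.baseline.append(requests_per_sec)  # only learn from normal traffic
        return alert

ids = SimpleIDS()
normal = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99]
print([ids.observe(x) for x in normal])  # all False: ordinary traffic
print(ids.observe(5000))                 # True: a sudden flood looks like an attack
```

Real systems combine this kind of anomaly detection with signature matching against known attack patterns.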

10. IP spoofing

IP address forgery or spoofing is an address-hijacking mechanism in which a third party pretends to be a trusted IP address in order to mimic a legitimate user’s identity, hijack an Internet browser, or otherwise gain access to a restricted network. It isn’t illegal for one to spoof an IP address. Some people do so in order to conceal their online activity and maintain anonymity (using tools such as Tor).

But IP spoofing is more often associated with illegal or malicious activity. So organizations should exercise caution and take appropriate precautions whenever they detect that a third party wants to connect to their network using a spoofed address.

11. Keylogger

Keylogger is short for keystroke logger. It’s a program that maintains a record of the keystrokes on your keyboard. The keylogger saves the log in a file, then encrypts and distributes it. While a keylogging algorithm can be used for good (some text-to-voice apps, for example, use a keylogging mechanism to capture and translate user activity), keyloggers are often a form of malware.

A keylogger in the hands of nefarious persons is a destructive tool and is perhaps the most powerful weapon of infiltration a hacker can have. Remember, the keylogger will capture all key information such as user names, passwords, PINs, pattern locks, and financial information. With this data, the hacker can easily access your systems without breaking a sweat.

12. Malware

Malware is one of the cybersecurity terms you will hear most often. It’s a catch-all word that describes all malicious programs, including viruses, Trojans, spyware, adware, ransomware, and keyloggers. It’s any program that takes over some or all of the computing functions of a target computer for ill intent. Some malware is little more than a nuisance, but in many cases malware is part of a wider hacking and data extraction scheme.

13. Password sniffing


Password sniffing is the process of intercepting and reading through the transmission of a data packet that includes one or more passwords. Given the volume of network traffic relayed per second, password sniffing is most effectively done by an application referred to as a password sniffer. The sniffer captures and stores the password string for malicious and illegal purposes.

14. Pharming

Pharming is the malicious redirection of a user to a fraudulent site that has colors, design, and features that look very similar to the original legitimate website. A user will unsuspectingly key in their data into the fake website’s input forms only to realize days, weeks, or months later that the site they gave their information to was harvesting their data to commit fraud.

15. Phishing

Phishing is a form of social engineering and the most common type of cyberattack. Every day, billions of phishing emails are sent out globally. Phishing emails purport to originate from a credible, recognizable sender such as eBay, Amazon, or a financial institution. The email tricks the recipient into sharing their username and password on what they believe is a legitimate website but is in reality a website maintained by cyberattackers.

Knowing these cybersecurity terms is a first step in preventing cyberattacks

While technical controls are crucial, employees are the weakest link in your security architecture. Nothing makes employees better prepared for a cyberattack than security training and awareness. For most organizations, the IT department represents only a fraction of the entire workforce.

Tech staff can therefore not be everywhere to explain cybersecurity terms and help each employee make security-conscious decisions. Therefore, making sure your non-techie staff is familiar with these cybersecurity terms is fundamental.

http://techgenix.com/15-cybersecurity-terms/


Beapy uses the NSA’s DoublePulsar and EternalBlue, plus Mimikatz, to collect and use passwords and mine for cryptocurrency in Coinhive’s wake

Two years after highly classified exploits built by the National Security Agency were stolen and published, hackers are still using the tools for nefarious reasons.

Security researchers at Symantec say they’ve seen a recent spike in a new malware, dubbed Beapy, which uses the leaked hacking tools to spread like wildfire across corporate networks to enslave computers into running mining code to generate cryptocurrency.

Beapy was first spotted in January but rocketed to more than 12,000 unique infections across 732 organizations since March, said Alan Neville, Symantec’s lead researcher on Beapy, in an email to TechCrunch. The malware almost exclusively targets enterprises, host to large numbers of computers, which when infected with cryptocurrency mining malware can generate sizable sums of money.

The malware relies on someone in the company opening a malicious email. Once opened, the malware drops the NSA-developed DoublePulsar malware to create a persistent backdoor on the infected computer, and uses the NSA’s EternalBlue exploit to spread laterally throughout the network. These are the same exploits that helped spread the WannaCry ransomware in 2017. Once the computers on the network are backdoored, the Beapy malware is pulled from the hacker’s command and control server to infect each computer with the mining software.

Not only does Beapy use the NSA’s exploits to spread, it also uses Mimikatz, an open-source credential stealer, to collect and use passwords from infected computers to navigate its way across the network.

According to the researchers, more than 80 percent of Beapy’s infections are in China.

Hijacking computers to mine for cryptocurrency — known as cryptojacking — has been on the decline in recent months, partially following the shutdown of Coinhive, a popular mining tool. Hackers are finding the rewards fluctuate greatly depending on the value of the cryptocurrency. But cryptojacking remains a more stable source of revenue than the hit-and-miss results of ransomware.

In September, some 919,000 computers were vulnerable to EternalBlue attacks — many of which were exploited for mining cryptocurrency. Today, that figure has risen to more than a million.

Typically, cryptojackers exploit vulnerabilities in websites, which, when opened in a user’s browser, use the computer’s processing power to generate cryptocurrency. But file-based cryptojacking is far more efficient and faster, allowing the hackers to make more money.

In a single month, file-based mining can generate up to $750,000, Symantec researchers estimate, compared to just $30,000 from a browser-based mining operation.

Cryptojacking might seem like a victimless crime — no data is stolen and files aren’t encrypted, but Symantec says the mining campaigns can slow down computers and cause device degradation.

A new cryptocurrency mining malware uses leaked NSA exploits to spread across enterprise networks

Sensorvault, Google’s location database: turning cellphone users’ locations into a digital dragnet for law enforcement

The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.

The Arizona case demonstrates the promise and perils of the new investigative technique, whose use has risen sharply in the past six months, according to Google employees familiar with the requests. It can help solve crimes. But it can also snare innocent people.

https://www.seattletimes.com/nation-world/tracking-phones-google-is-a-dragnet-for-the-police/

Two-factor authentication explained: How to choose the right level of security for every account

If you aren’t already protecting your most personal accounts with two-factor or two-step authentication, you should be. An extra line of defense that’s tougher than the strongest password, 2FA is extremely important to blocking hacks and attacks on your personal data. If you don’t quite understand what it is, we’ve broken it all down for you.

Two-factor-authentication: What it is

Two-factor authentication is basically a combination of two of the following factors:

  1. Something you know
  2. Something you have
  3. Something you are

Something you know is your password, so 2FA always starts there. Rather than let you into your account once your password is entered, however, two-factor authentication requires a second set of credentials, like when the DMV wants your license and a utility bill. So that’s where factors 2 and 3 come into play. Something you have is your phone or another device, while something you are is your face, irises, or fingerprint. If you can’t provide authentication beyond the password alone, you won’t be allowed into the service you’re trying to log into.

There are several options for the second factor: SMS, authenticator apps, Bluetooth-, USB-, and NFC-based security keys, and biometrics. Let’s take a look at each so you can decide which is best for you.

Two-factor-authentication: SMS

When you choose SMS-based 2FA, all you need is a mobile phone number.

What it is: The most common “something you have” second authentication method is SMS. A service will send a text to your phone with a numerical code, which then needs to be typed into the field provided. If the codes match, your identification is verified and access is granted.

How to set it up: Nearly every two-factor authentication system uses SMS by default, so there isn’t much to do beyond flipping the toggle or switch to turn on 2FA on the chosen account. Depending on the app or service, you’ll find it somewhere in settings, under Security if the tab exists. Once activated you’ll need to enter your password and a mobile phone number.

How it works: When you turn on SMS-based authentication, you’ll receive a code via text that you’ll need to enter after you type your password. That protects you against someone randomly logging into your account from somewhere else, since your password alone is useless without the code. While some apps and services rely solely on SMS-based 2FA, many of them offer numerous options, even if SMS is selected by default.
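On the server side, SMS-based 2FA amounts to generating a short-lived random code, sending it to the registered number, and comparing it with what the user types back. A hypothetical sketch (the function names and the five-minute lifetime are illustrative assumptions):

```python
import secrets, time

def issue_code(digits=6, ttl=300):
    # Generate a cryptographically random numeric code and its expiry time.
    code = "".join(secrets.choice("0123456789") for _ in range(digits))
    return code, time.time() + ttl  # the code would then be sent via SMS

def verify_code(user_input, code, expires_at):
    # Reject expired codes; compare_digest avoids timing side channels.
    return time.time() < expires_at and secrets.compare_digest(user_input, code)

code, expires = issue_code()
print(verify_code(code, code, expires))  # True while the code is still fresh
```

The code's short lifetime limits the window in which an intercepted text message is useful, which is one reason SMS 2FA, despite its weaknesses, is still far better than a password alone.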

With SMS-based authentication, you’ll get a code via text that will allow access to your account.

How secure it is: By definition, SMS authentication is the least secure method of two-factor authentication. Your phone can be cloned or just plain stolen, SMS messages can be intercepted, and by nature most default messaging apps aren’t encrypted. So the code that’s sent to you could possibly fall into someone’s hands other than yours. It’s unlikely to be an issue unless you’re a valuable target, however.

How convenient it is: Very. You’re likely to always have your phone within reach, so the second authentication is super convenient, especially if the account you’re signing into is on your phone.

Should you use it? Any two-factor authentication is better than none, but if you’re serious about security, SMS won’t cut it.

Two-factor-authentication: Authenticator apps

Authenticator apps generate random codes that aren’t delivered over SMS.

What it is: Like SMS-based two-factor authentication, authenticator apps generate codes that need to be inputted when prompted. However, rather than sending them over unencrypted SMS, they’re generated within an app, and you don’t even need an Internet connection to get one.

How to set it up: To get started with an authenticator app, you’ll need to download one from the Play Store or the App Store. Google Authenticator works great for your Google account and anything you use it to log into, but there are other great ones as well, including Authy, LastPass, and Microsoft Authenticator, plus apps from individual companies such as Blizzard, Sophos, and Salesforce. If an app or service supports authenticator apps, it’ll supply a QR code that you can scan or a setup key that you can enter on your phone.

How it works: When you open your chosen authenticator app and scan the code, a six-digit code will appear, just like with SMS 2FA. Input that code where the service prompts for it and you’re good to go. After the initial setup, you’ll be able to go into the app to get a fresh code whenever you need one, without scanning a QR code again.
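Under the hood, most authenticator apps implement TOTP (RFC 6238): the QR code carries a shared secret, and both the app and the server independently derive the same six-digit code from that secret and the current 30-second time window. A minimal standard-library sketch:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6, now=None):
    # Decode the base32 shared secret (the value encoded in the QR code).
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's test secret is the ASCII string "12345678901234567890":
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, now=59))  # "287082" -- matches the published RFC test vector
```

Because the code depends only on the shared secret and the clock, no network connection is needed, which is exactly why authenticator apps work offline.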

Authenticator apps generate random codes every 30 seconds and can be used offline.

How secure it is: Unless someone has access to your phone or whatever device is running your authenticator app, it’s completely secure. Since codes are randomized within the app and aren’t delivered over SMS, there’s no way for prying eyes to steal them. For extra security, Authy allows you to set PIN and password protection, too, something Google doesn’t offer on its authenticator app.

How convenient it is: While opening an app is slightly less convenient than receiving a text message, authenticator apps don’t take more than few seconds to use. They’re far more secure than SMS, and you can use them offline if you ever run into an issue where you need a code but have no connection.

Should you use it? An authenticator app strikes the sweet spot between security and convenience. While you might find some services that don’t support authenticator apps, the vast majority do.

Two-factor authentication: Universal second factor (security key)

As their name implies, security keys are the most secure way to lock down your account.

What it is: Unlike SMS- and authenticator-based 2FA, universal second factor is truly a “something you have” method of protecting your accounts. Instead of a digital code, the second factor is a hardware-based security key. You’ll need to order a physical key to use it, which will connect to your phone or PC via USB, NFC, or Bluetooth.

You can buy a Titan Security Key bundle from Google for $50, which includes a USB-A security key and a Bluetooth security key along with a USB-A-to-USB-C adapter, or buy one from Yubico. An NFC-enabled key is recommended if you’re going to be using it with a phone.

How to set it up: Setting up a security key is basically the same as the other methods, except you’ll need a computer. You’ll need to turn on two-factor authentication, and then select the “security key” option, if it’s available. Most popular accounts, such as Twitter, Facebook, and Google all support security keys, so your most vulnerable accounts should be all set. However, while Chrome, Firefox, and Microsoft’s Edge browser all support security keys, Apple’s Safari browser does not, so you’ll be prompted to switch during setup.

Once you reach the security settings page for the service you’re enabling 2FA with, select security key, and follow the prompts. You’ll be asked to insert your key (so make sure you have a USB-C adapter on hand if you have a MacBook) and press the button on it. That will initiate the connection with your computer, pair your key, and in a few seconds your account will be ready to go.

How it works: When an account requests 2FA verification, you’ll need to plug your security key into your phone or PC’s USB-C port or (if supported) tap it to the back of your NFC-enabled phone. Then it’s only a matter of pressing the button on the key to establish the connection and you’re in.

Setting up your security key with your Google account is a multi-step process.

How secure it is: Extremely. Since all of the login authentication is stored on a physical key that is either on your person or stored somewhere safe, the odds of someone accessing your account are extremely low. To do so, they would need to steal your password and the key to access your account, which is very unlikely.

How convenient it is: Not very. When you log into one of your accounts on a new device, you’ll need to type your password and then authenticate it via the hardware key, either by inserting it into your PC’s USB port or pressing it against the back of an NFC-enabled phone. Neither method takes more than a few seconds, though, provided you have your security key within reach.

Two-factor authentication: Google Advanced Protection Program

What it is: If you want to completely lock down your most important data, Google offers the Advanced Protection Program for your Google account, which disables everything except security key-based 2FA. It also limits access to your emails and Drive files to Google apps and select third-party apps, and shuts down web access on browsers other than Chrome and Firefox.

How to set it up: You’ll need to make a serious commitment. To enroll in Google Advanced Protection, you’ll need to purchase two Security Keys: one as your main key and one as your backup key. Google sells its own Titan Security Key bundle, but you can also buy a set from Yubico or Feitian.

Once you get your keys, you’ll need to register them with your Google account and then agree to turn off all other forms of authentication. But here’s the rub: To ensure that every one of your devices is properly protected, Google will log you out of every account on every device you own so you can log in again using Advanced Protection.

How it works: Advanced Protection works just like standard security-key 2FA, except you won’t be able to choose a different method if you forget or lose your security key.

How secure it is: Google Advanced Protection is basically impenetrable. By relying solely on security keys, it makes sure that no one will be able to access your account without both your password and physical key, which is extremely unlikely.

How convenient it is: By nature, Google Advanced Protection is supposed to make it difficult for hackers to access your Google account and anything associated with it, so naturally it’s not so easy for the user either. Since there’s no fallback authentication method, you’ll need to remember your key whenever you leave the house. And when you run into a roadblock—like the Safari browser on a Mac—you’re pretty much out of luck. But if you want your account to have the best possible protection, accept no substitute.

Two-factor authentication: Biometrics

Nearly every smartphone made today has some form of secure biometrics built into it.

What it is: A password-free world where all apps and services are authenticated by a fingerprint or facial scan.

How to set it up: You can see biometrics at work when you opt to use the fingerprint scanner on your phone or Face ID on the iPhone XS, but at the moment, biometric security is little more than a replacement for your password after you log in and verify via another 2FA method.

How it works: Like the way you use your fingerprint or face to unlock your smartphone, biometric 2FA uses your body’s unique characteristics as your password. So your Google account would know it was you based on your scan when you set up your account, and it would automatically allow access when it recognized you.

How secure it is: Since it’s extremely difficult to clone your fingerprint or face, biometric authentication is the closest thing to a digital vault.

How convenient it is: You can’t go anywhere without your fingerprint or your face, so it doesn’t get more convenient than that.

Two-factor authentication: iCloud

Apple sends a code to one of your trusted devices when it needs authentication to access an account.

What it is: Apple has its own method of two-factor authentication for your iCloud and iTunes accounts that involves setting up trusted Apple devices (iPhone, iPad, or Mac—Apple Watch isn’t supported) that can receive verification codes. You can also set up trusted numbers to receive SMS codes or get verification codes via an authenticator built into the Settings app.

How to set it up: As long as you’re logged into your iCloud account, you can turn on two-factor authentication from pretty much anywhere. Just go into Settings on your iOS device or System Preferences on your Mac, then Security, and Turn On Two-Factor Authentication. From there, you can follow the prompts to set up your trusted phone number and devices.

How it works: When you need to access an account protected by 2FA, Apple will send a code to one of your trusted devices. If you don’t have a second Apple device, Apple will send you a code via SMS or you can get one from the Settings app on your iPhone or System preferences on your Mac.

When Apple needs a code to log into an account, it sends it to one of your trusted devices.

How secure it is: It depends on how many Apple devices you own. If you own more than one Apple device, it’s very secure. Apple will send a code to one of your other devices whenever you or someone else tries to log into your account or one of Apple’s services on a new device. It even tells you the location of the request, so if you don’t recognize it you can instantly reject it, before the code even appears.

If you only have one device, you’ll have to use SMS or Apple’s built-in authenticator, neither of which is all that secure, especially since both are likely to be done on the same device. Also, Apple has a weird snafu that sends the 2FA access code to the same device when you manage your account using a browser, which also defeats the purpose of 2FA.

How convenient it is: If you’re using an iPhone and have an iPad or Mac nearby, the process takes seconds, but if you don’t have an Apple device within reach or are away from your keyboard, it can be tedious.

Source: https://www.pcworld.com/article/3387420/two-factor-authentication-faq-sms-authenticator-security-key-icloud.html

SS7 contains back doors, leaving it open like Swiss cheese

The outages hit in the summer of 1991. Over several days, phone lines in major metropolises went dead without warning, disrupting emergency services and even air traffic control, often for hours. Phones went down one day in Los Angeles, then on another day in Washington, DC and Baltimore, and then in Pittsburgh. Even after service was restored to an area, there was no guarantee the lines would not fail again—and sometimes they did. The outages left millions of Americans disconnected.

The culprit? A computer glitch. A coding mistake in software used to route calls for a piece of telecom infrastructure known as Signaling System No. 7 (SS7) caused network-crippling overloads. It was an early sign of the fragility of the digital architecture that binds together the nation’s phone systems.

Leaders on Capitol Hill called on the one agency with the authority to help: the Federal Communications Commission (FCC). The FCC made changes, including new outage reporting requirements for phone carriers. To help the agency respond to digital network stability concerns, the FCC also launched an outside advisory group—then known as the Network Reliability Council but now called the Communications Security, Reliability, and Interoperability Council (CSRIC, pronounced “scissor-ick”).

Yet decades later, SS7 and other components of the nation’s digital backbone remain flawed, leaving calls and texts vulnerable to interception and disruption. Instead of facing the challenges of our hyper-connected age, the FCC is stumbling, according to documents obtained by the Project On Government Oversight (POGO) and through extensive interviews with current and former agency employees. The agency is hampered by a lack of leadership on cybersecurity issues and a dearth of in-house technical expertise that all too often leaves it relying on security advice from the very companies it is supposed to oversee.

Captured

CSRIC is a prime example of this so-called “agency capture”—the group was set up to help supplement FCC expertise and craft meaningful rules for emerging technologies. But instead, the FCC’s reliance on security advice from industry representatives creates an inherent conflict of interest. The result is weakened regulation and enforcement that ultimately puts all Americans at risk, according to former agency staff.

While the agency took steps to improve its oversight of digital security issues under the Obama administration, many of these reforms have been walked back under current Chairman Ajit Pai. Pai, a former Verizon lawyer, has consistently signaled that he doesn’t want his agency to play a significant role in the digital security of Americans’ communications—despite security being a core agency responsibility since the FCC’s inception in 1934.

The FCC’s founding statute charges it with crafting regulations that promote the “safety of life and property through the use of wire and radio communications,” giving it broad authority to secure communications. Former FCC Chairman Tom Wheeler and many legal experts argue that this includes cyber threats.

As a regulator, the FCC carries a stick: it can hit communications companies with fines if they don’t comply with its rules. That responsibility is even more important now that “smart” devices are networking almost every aspect of our lives.

But not everyone thinks the agency’s mandate is quite so clear, especially in the telecom industry. Telecom companies fight back hard against regulation; over the last decade, they spent nearly a billion dollars lobbying Congress and federal agencies, according to data from OpenSecrets. The industry argues that the FCC’s broad mandate to secure communications doesn’t extend to cybersecurity, and it has pushed for oversight of cybersecurity to come instead from other parts of government, typically the Department of Homeland Security (DHS) or the Federal Trade Commission (FTC)—neither of which is vested with the same level of rule-making powers as the FCC.

To Wheeler, himself the former head of industry trade group CTIA, the push toward DHS seemed like an obvious ploy. “The people and companies the FCC was charged with regulating wanted to see if they could get their jurisdiction moved to someone with less regulatory authority,” he told POGO.

But Chairman Pai seems to agree with industry. In a November 2018 letter to Senator Ron Wyden (D-Ore.) about the agency’s handling of SS7 problems, provided to POGO by the senator’s office, Pai wrote that the FCC “plays a supporting role, as a partner with DHS, in identifying vulnerabilities and working with stakeholders to increase security and resiliency in communications network infrastructure.”

The FCC declined to comment for this story.

The current FCC declined comment, but POGO spoke with former chairman Tom Wheeler, seen here speaking with Ars (https://arstechnica.com/information-technology/2016/03/how-a-former-lobbyist-became-the-broadband-industrys-worst-nightmare/) back in 2016. Photo: Jon Brodkin.

Failing to protect the “crown jewels” of telecom

How the telecom industry leveraged lawmakers’ calls for FCC reform in the wake of the SS7 outages is a case study in how corporate influence can overcome even the best of the government’s intentions.

From the beginning, industry representatives dominated membership of the advisory group now known as CSRIC—though, initially, the group only provided input on a small subset of digital communications issues. Over time, as innovations in communications raced forward with the expansion of cellular networks and the Internet, the FCC’s internal technical capabilities didn’t keep up: throughout the 1990s and early 2000s, the agency’s technical expertise was largely limited to telephone networks while the world shifted to data networks, former staffers told POGO. The few agency staffers with expertise on new technologies were siloed in different offices, making it hard to coordinate a comprehensive response to the paradigm shift in communication systems. That gap left the agency increasingly dependent on advice from CSRIC.

During the early 1990s, the SS7-based software system was just coming into wide use. Today, though, it is considered outdated and insecure. Despite that, carriers still use the technology as a backup in their networks. This leaves the people who rely on those networks vulnerable to the technology’s problems, as Jonathan Mayer, a Princeton computer science and public affairs professor and former FCC Enforcement Bureau chief technologist, explained during a Congressional hearing in June 2018.

Unlike in the 1990s, the risks now go much deeper than just service disruption. Researchers have long warned that flaws in the system allow cybercriminals or hackers—sometimes working on behalf of foreign adversaries—to turn cell phones into sophisticated geo-tracking devices or to intercept calls and text messages. Security problems with SS7 are so severe that some government agencies and some major companies like Google are moving away from using codes sent via text to help secure important accounts, such as those for email or online banking.

A panel advising President Bill Clinton raised the alarm back in 1997, saying that SS7 was among America’s networking “crown jewels” and warning that if those crown jewels were “attacked or exploited, [it] could result in a situation that threatened the security and reliability of the telecommunications infrastructure.” By 2001, security researchers argued that risks associated with SS7 were multiplying thanks to “deregulation” and “the Internet and wireless networks.” They were proved right in 2008 when other researchers demonstrated ways that hackers could use flaws in SS7 to pinpoint the location of unsuspecting cell phone users.

By 2014, it was clear that foreign governments had caught on to the disruptive promise of the problem. That year, Russian intelligence used SS7 vulnerabilities to attack a Ukrainian telecommunications company, according to a report published by NATO’s Cooperative Cyber Defence Centre of Excellence, and more research about SS7 call interception made headlines in The Washington Post and elsewhere.

Despite the increasingly dire stakes, the FCC didn’t pay much attention to the issue until the summer of 2016, after Rep. Ted Lieu (D-Calif.) allowed 60 Minutes to demonstrate how researchers could use security flaws in the SS7 protocol to spy on his phone. The FCC—then led by Wheeler—responded by essentially passing the buck to CSRIC. It created a working group to study and make security recommendations about SS7 and other so-called “legacy systems.” The result was a March 2017 report with non-binding guidance about best practices for securing against SS7 vulnerabilities, a non-public report, and the eventual creation of yet another CSRIC working group to study similar security issues.

A POGO analysis of CSRIC membership in recent years shows that the group, whose members are appointed solely by the FCC chairman, leans heavily toward industry. And the authorship of the March 2017 report was even more lopsided than CSRIC overall. Of the twenty working-group members listed in the final report, only five were from the government, including four from the Department of Homeland Security. The remaining fifteen represented private-sector interests. None were academics or consumer advocates.

The working group’s leadership was drawn entirely from industry. The group’s co-chairs came from networking infrastructure company Verisign and iconectiv, a subsidiary of Swedish telecom company Ericsson. The lead editor of the group’s final report was CTIA Vice President for Technology and Cyber Security John Marinho.

Emails from 2016 between working group members, obtained by POGO via a Freedom of Information Act request, show that the group dragged its feet on resolving SS7 security vulnerabilities despite urging from FCC officials to move quickly. The group also repeatedly ignored input from DHS technical experts.

The problem wasn’t figuring out a fix, however, according to David Simpson, a retired rear-admiral who led the FCC’s Public Safety and Homeland Security Bureau at the time. The group was quickly able to discern some best practices—primarily through using different filtering systems—that some major carriers had already deployed and that others could use to mitigate the risks associated with SS7.

“We knew the answer within the first couple months from the technical experts in the working groups,” said Simpson, who consulted with the Working Group. But ultimately, the “consensus orientation of the CSRIC unfortunately allowed” the final report to be pushed from the lame-duck session into the Trump administration—which is not generally inclined toward introducing new federal regulations.

Overall, POGO’s analysis of emails from the group and interviews with former FCC staff found that industry dominance of CSRIC appears to have contributed to a number of issues with the process and the final report, including:

  • Industry members of the working group successfully pushed for the final recommendations to rely on voluntary compliance, according to former FCC staffers. Security experts say that strategy ultimately leaves the entire cellular network at risk because there are thousands of smaller providers, often in rural areas, that are unlikely to prioritize rolling out the needed protections without a firm rule.
  • An August 2016 email shows that, early on in the process, DHS experts objected to describing the working group’s focus as being on “legacy” systems because it “conveys a message that these protocols and associated threats are going away soon and that’s not necessarily the case.” The group did not revise the legacy language, and it remained in the final report.
  • In an email from September 2016, an FCC official emailed Marinho, noting that edits from DHS were not being incorporated into the working draft. Marinho responded that he received them too late and planned to incorporate them in a later version. However, in a May 2018 letter to the FCC, Senator Wyden said DHS officials told his office that “the vast majority of edits to the final report” suggested by DHS experts “were rejected.”

In the emails obtained by POGO, Marinho also refers to warnings about security issues with SS7 that came from panelists at an event organized by George Washington University’s Cybersecurity Strategy and Information Management Program as “hyperbolic.”

Marinho did not respond to a series of specific questions about the Working Group’s activities. In a statement to POGO, CTIA said, “[t]he wireless industry is committed to safeguarding consumer security and privacy and collaborates closely with DHS, the FCC, and other stakeholders to combat evolving threats that could impact communications networks.”

The working group’s report acknowledged that problems remained with SS7, but it recommended voluntary measures and put the onus on telecom users to take extra steps like using apps that encrypt their phone calls and texts.

Criminals, terrorists, and spies

Just a month after the CSRIC working group released its SS7 report, DHS took a much more ominous tone, releasing a report that warned that SS7 “vulnerabilities can be exploited by criminals, terrorists, and nation-state actors/foreign intelligence organizations” and said that “many organizations appear to be sharing or selling expertise and services that could be used to spy on Americans.”

DHS wanted action.

“New laws and authorities may be needed to enable the Government to independently assess the national security and other risks associated with SS7” and other communications protocols, the agency wrote.

But DHS also admitted it wasn’t necessarily the agency that would take the lead. A footnote in that section reads: “Federal agencies such as the FCC and FTC may have authorities over some of these issues.”

CTIA pushed back with a confidential May 2017 white paper that downplayed the risks associated with SS7 and argued against stronger security rules. The paper, which was sent to DHS and to members of Congress, was later obtained and published by Motherboard.

“Congress and the Administration should reject the [DHS] Report’s call for greater regulation,” the trade group wrote.

When CSRIC created yet another working group in late 2017 to continue studying network reliability issues during Pai’s tenure, the DHS experts who objected to the previous working group’s report were “not invited back to participate,” according to Wyden’s May 2018 letter. The final report from that working group lists just one representative from DHS, compared to four in the previous group.

When reached for comment, DHS did not directly address questions about the agency’s experience with CSRIC.

Aside from DHS and individual members of Congress, other parts of the US government have signaled concerns about SS7. For example, as the initial CSRIC working group was starting to review the issue in the summer of 2016, the National Institute of Standards and Technology (NIST), an agency that sets standards for government best practices, released draft guidance echoing that of Google and other tech companies. It warned people away from relying on text messaging to validate identity for various online accounts and services because of the security issues.

But the draft drew pushback from the telecom industry, including CTIA.

“There is insufficient evidence at this time to support removing [text message] authentication in future versions of the Digital Identity Guidelines,” CTIA argued in comments on the NIST draft. After the pushback, NIST caved to industry pressure and removed its warning about relying on texts from the final version of its guidance.

While the government was deliberating, criminals were finding ways to exploit SS7 flaws.

In the summer of 2017, German researchers found that hackers used vulnerabilities in SS7 to drain victims’ bank accounts—exploiting essentially the same type of problems that NIST tried to flag in the scrapped draft guidance.

By 2018, attacks started happening in the domestic digital world, Senator Wyden wrote in his May 2018 letter to the FCC.

“This threat is not merely hypothetical—malicious attackers are already exploiting SS7 vulnerabilities,” Wyden wrote. “One of the major wireless carriers informed my office that it reported an SS7 breach, in which customer data was accessed, to law enforcement” using a portal managed by the FCC, he wrote.

The details of that incident remain unclear, presumably due to an ongoing investigation. However, the senator’s alarm highlights the fact that SS7 continues to put America’s economic and national security at risk.

Indeed, a report by The New York Times in October 2018 suggests that even the president’s own communications are vulnerable due to security problems in cellular networks, potentially including SS7. Chinese and Russian intelligence have gained valuable information about the president’s policy deliberations by intercepting calls made on his personal iPhone “as they travel through the cell towers, cables, and switches that make up national and international cellphone networks,” the Times reported.

The president disputed this account of his phone-use habits in a tweet, apparently sent from “Twitter for iPhone.” The next month, President Trump signed a law creating a Cybersecurity and Infrastructure Security Agency (or CISA) within the Department of Homeland Security, the agency that industry often suggests should oversee communications infrastructure cybersecurity instead of the FCC.

“CISA works regularly with the FCC and the communications sector to address security vulnerabilities and enhance the resilience of the nation’s communications infrastructure,” the agency said in a statement in response to questions for this story. “Our role as the nation’s risk advisor includes working with companies to exchange threat information, mitigate vulnerabilities, and provide incident response upon request.”

Other than efforts to reduce the reliance of US networks on technology from Chinese manufacturers, driven by fears about “supply chain security,” the FCC has largely abandoned its responsibility for protecting America’s networks from looming digital threats.

As the FCC’s engagement on cybersecurity has waned, so has CSRIC’s activity. CSRIC VI, whose members were chosen by current Chairman Pai, created fewer than a third as many working groups as its predecessor.

CSRIC VI’s final meeting was in March 2019. It’s unclear who will be part of the group’s seventh iteration—or if they will represent the public over the telecom industry’s interests.

Source: https://arstechnica.com/features/2019/04/fully-compromised-comms-how-industry-influence-at-the-fcc-risks-our-digital-security/2/

Is GDPR the new hacker scare tactic?

GDPR in Europe

No one questions the good intent behind the EU’s General Data Protection Regulation (GDPR) legislation, or the need for companies to be more careful with the proprietary information they have about clients, patients, and other individuals they interact with regularly. While the provisions within the GDPR do help, they have also created new opportunities for hackers and identity thieves to exploit that data.

There’s no doubt that seeking to be fully GDPR compliant is more than just a good idea. Along the way, just make sure your organization doesn’t fall victim to one of the various scams that are surfacing. Let’s take a quick review of GDPR and then dive into the dirty tricks hackers have been playing.

Understanding the Basics of GDPR

In 2018, the GDPR established a set of guidelines for managing the collection and storage of consumer and proprietary data. Much of it pertains to personal information provided by individuals to an entity.

That entity may be a banking institution, insurance company, investing service, or even a health care facility. The primary goal is to ensure adequate protections are in place so that an ill-intentioned third party can’t exploit the personal information of those organizations’ employees, clients, and patients.

The GDPR addresses key areas of data security:

  • Explicit consent to collect and maintain personal data
  • Notification in the event of a data breach
  • Dedicated data security personnel within the organization
  • Data encryption that protects personal information in the event of a breach
  • Access to personal information for review of accuracy (integrity), and to set limitations on the intended use

While there has been pushback about some of the provisions within the GDPR (especially the need for additional data security personnel outside of the usual IT team), many organizations have been eager to adopt the measures. After all, being GDPR compliant can decrease the risk of a breach and would prove helpful if lawsuits resulted after a breach.

GDPR and Appropriate Security

There is an ongoing discussion about what represents adequate and appropriate security in terms of GDPR compliance. To some degree, the exact approach to security will vary, based on the type of organization involved and the nature of the data that is collected and maintained.

Even so, there is some overlap that would apply in every case. Compliance involves identifying and reinforcing every point in the network where some type of intrusion could possibly take place. Using Artificial Intelligence technology to reinforce points of vulnerability while also monitoring them for possible cyberattacks is another element. Even having an escalation plan in place to handle a major data breach within a short period of time is something any organization could enact.

One point that is sometimes lost in the entire discussion about GDPR security is that the guidelines set minimum standards. Entities are free to go above and beyond in terms of protecting proprietary data like customer lists. Viewing compliance as the starting point and continuing to refine network security will serve a company well in the long run.

So What Have Hackers Been Doing Since the Launch of GDPR?

There’s no doubt that hackers and others with less than honorable intentions have been doing their best to work around the GDPR guidelines even as they use them to their advantage. Some news reports claim that GDPR has made it easier for hackers to gain access to data. So what exactly have these ethically challenged individuals concocted?

Here are some examples:

Introducing Reverse Ransomware

As far as we know, it’s not really called reverse ransomware but that seems to be a pretty good way to describe this evil little scheme. As a review, a ransomware attack is when a hacker gets into your system and encrypts data so you can’t see or use it. Only with the payment of a ransom, typically in untraceable Bitcoin or other cryptocurrencies, will the hacker make your data usable again.

The sad ending to the ransomware saga is that, more often than not, the data is never restored even if the ransom is paid.

But GDPR has provided the inspiration for the bad guys to put a sneaky spin on the data drama. In this case, they penetrate the network by whatever means are available and collect the customer lists and other personal data that the EU has worked so hard to protect with the new regulations.

The threat with this variation, however, is that the data will be released publicly, which would put the organization in immediate violation of GDPR and make it liable for what could be a hefty fine — one that is substantially larger than the ransom the criminals are demanding.

Of course, the hacker promises not to release the data if the hostage company pays a ransom and might even further promise to destroy the data afterward. If you believe they’ll actually do that, I’d like to introduce you to the Easter Bunny and Tooth Fairy.

The attacker has already demonstrated a strong amoral streak. What’s to stop them from demanding another payment a month down the road? If you guessed nothing, you’re right. But wait, there’s more.

Doing a Lot of Phishing

Many organizations have seen a continual flow of unsolicited emails offering to help them become GDPR compliant. The offers range from free consultations that can be conducted remotely to online training sessions that explain GDPR and suggest ways to increase security.

Typically, this type of phishing scheme includes a link for remitting payment for the services, with the understanding that the client pays a portion now and the rest later.

Unsurprisingly, anyone who clicks on the link may lose more than whatever payment is rendered. Wherever the individual lands, the site is likely to be infected with spyware or worse. And if the email is forwarded throughout an organization or outside of it? The infection spreads.

I believe we need to be savvier with emails. That means training employees to never click on links in unsolicited emails, and to report suspicious emails to the security team at once.
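As a small illustration of that training point, one common phishing tell is a link whose visible text shows one address while the underlying href points somewhere else. Below is a minimal, hypothetical checker for that mismatch; all domains in the example are invented, and this is a sketch of one heuristic, not a complete email-security tool:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative heuristic only: flag HTML links whose visible text looks
# like a URL on one domain while the actual href points to another --
# a common trick in phishing emails.

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

    def handle_data(self, data):
        text = data.strip()
        if self.href and text.startswith("http"):
            shown = urlparse(text).netloc     # domain the reader sees
            actual = urlparse(self.href).netloc  # domain the link goes to
            if shown and actual and shown != actual:
                self.suspicious.append((text, self.href))

# Invented example: the text claims a bank, the href goes elsewhere.
email_html = '<a href="http://evil.example.net/login">http://mybank.example.com</a>'
auditor = LinkAuditor()
auditor.feed(email_html)
print(auditor.suspicious)
```

A real deployment would combine checks like this with attachment scanning and reporting workflows, but even this simple mismatch test catches a surprising share of crude phishing attempts.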

What Can You Do?

As you can see, GDPR has provided a variety of crime opportunities for an enterprising hacker. These are just two examples of how they use GDPR for profit at the expense of hardworking business owners. The best first step when confronted with any of these types of threats is to not act on it. Instead, forward it to an agency that can properly evaluate the communication.

At the risk of sounding like Captain Obvious, have you done everything possible to fortify your network against advanced threats? Here are the basic preventive steps:

  1. Web security software: The first line of defense is a firewall (updated regularly, of course) that guards the perimeter and blocks outside attempts to penetrate the network. In addition, be sure to implement network security software that detects malicious activity from any threat that manages to bypass your perimeter controls. It used to be that you could survive with a haphazard approach to security, but those days are long gone. Get good security software and put it to work.
  2. Encrypt that data: While the firewall and security software protect a network from outside penetration attempts, your data doesn’t always stay at home safe and sound. Any time a remote worker connects back to your network or an employee on premises ventures out to the open Internet, data is at risk. That’s why a virtual private network (VPN) should be a mandatory preventive security measure.

It’s a simple but strong idea. Using military-grade protocols, a properly configured VPN service encrypts the flow of data between a network device and the Internet, or between a remote device and the company network. The big idea is that even if a hacker manages to siphon off data, they will be greeted with an indecipherable mess that would take the world’s strongest computers working in unison a few billion years to crack. They’ll probably move on to an easier target.
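To illustrate why siphoned encrypted traffic is useless to an eavesdropper, here is a toy sketch in Python. The XOR one-time pad below stands in for the vetted ciphers (such as AES) that real VPN protocols use; it is for demonstration only, not a production scheme:

```python
import os

# Toy illustration (NOT a real VPN protocol): data encrypted in transit
# is useless to an eavesdropper who lacks the key. Real VPNs use vetted
# ciphers such as AES; this XOR one-time pad only demonstrates the idea.

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"payroll.csv contents"
key = os.urandom(len(message))      # shared only between VPN endpoints

ciphertext = encrypt(message, key)  # what a hacker siphoning traffic sees
assert ciphertext != message        # indecipherable without the key
assert decrypt(ciphertext, key) == message  # endpoints recover the data
```

The security of the real thing rests on the same property shown here: without the key material negotiated between the two endpoints, the intercepted bytes carry no usable information.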

And while a VPN should be a frontline tool to combat hackers, there’s something else that might even be more important.

  3. Education and Training: Through ignorance or inattention, employees can be the biggest threat to cybersecurity. It’s not enough to simply sit them down when you hire them and warn of dire consequences if they let malware in the building. Owners need a thorough, ongoing education program on online security that ranks its importance only slightly below breathing.

The Bottom Line

The GDPR does not have to be a stumbling block for you or an opportunity for a hacker. Stay proactive with your security measures and keep your antenna tuned for signs of trouble.

Source: https://betanews.com/2019/03/29/is-gdpr-the-new-hacker-scare-tactic/

Ad IDs Behaving Badly

The Ad ID

Persistent identifiers are the bread and butter of the online tracking industry. They allow companies to learn the websites that you visit and the apps that you use, including what you do within those apps. A persistent identifier is just a unique number that is used to either identify you or your device. Your Social Security Number and phone number are examples of persistent identifiers used in real life; cookies use persistent identifiers to identify you across websites.

On your mobile device, there are many different types of persistent identifiers that are used by app developers and third parties contacted by those apps. For example, one app might send an advertising network your device’s serial number. When a different app on your same phone sends that same advertising network your device’s serial number, that advertising network now knows that you use both of these apps, and can use that information to profile you. This sort of profiling is what is meant by “behavioral advertising.” That is, they track your behaviors so that they can infer your interests from those behaviors, and then send you ads targeted to those inferred interests.
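To make that linking step concrete, here is a minimal, hypothetical sketch (with invented app names and serial numbers) of how an ad network could join activity from different apps that report the same persistent identifier:

```python
from collections import defaultdict

# Hypothetical sketch: how an ad network could link activity from two
# different apps on the same phone using a shared persistent identifier
# (here, a device serial). All names and data are invented.

def build_profiles(transmissions):
    """Group app activity by the persistent identifier it arrived with."""
    profiles = defaultdict(set)
    for record in transmissions:
        profiles[record["serial"]].add(record["app"])
    return profiles

# Two unrelated apps report the same device serial to one ad network:
seen = [
    {"app": "news_reader", "serial": "SER-123"},
    {"app": "fitness_tracker", "serial": "SER-123"},
    {"app": "puzzle_game", "serial": "SER-999"},
]

profiles = build_profiles(seen)
# The network now knows SER-123 uses both apps and can infer interests:
print(sorted(profiles["SER-123"]))  # ['fitness_tracker', 'news_reader']
```

The join key is the whole game: any value that is stable across apps and over time lets a third party merge otherwise separate streams of behavior into one profile.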

On the web, if you don’t want to be tracked in this manner, you can periodically clear your cookies or configure your browser to simply not accept cookies (though this breaks a lot of the web, given that there are many other uses for cookies beyond tracking). Clearing your cookies resets all of the persistent identifiers, which means that new persistent identifiers will be sent to third parties, making it more difficult for them to associate your future online activities with the previous profile they had constructed.

Regarding the persistent identifiers used by mobile apps, up until a few years ago there was no way of doing the equivalent of clearing your cookies: many of the persistent identifiers used to track your mobile app activities were based in hardware, such as the device’s serial number, IMEI, WiFi MAC address, SIM card serial number, etc. Many apps used (and still use) the Android ID for tracking purposes, which, while not based in hardware, can only be reset by performing a factory reset that deletes all of the device’s data. Thus, there wasn’t an easy way for users to do the equivalent of clearing their cookies.

However, this changed in 2013 with the creation of the “ad ID”: both Android and iOS unveiled a new persistent identifier based in software that provides the user with privacy controls to reset that identifier at will (similar to clearing cookies).

Of course, being able to reset the ad identifier is only a good privacy-preserving solution if it is the only identifier being collected from the device. Imagine the following situation:

  1. An app sends both the ad ID and the IMEI (a non-resettable hardware-based identifier) to a data broker.
  2. Concerned with her privacy, the user uses one of the above privacy settings panels to reset her phone’s ad ID.
  3. Later, when using a different app, the same data broker is sent the new ad ID alongside the IMEI.
  4. The data broker sees that while the ad IDs are different between these two transmissions, the IMEI is the same, and therefore they must have come from the same device. Knowing this, the data broker can then add the second transmission to the user’s existing profile.
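The four steps above can be sketched directly. Everything here (the identifiers and the broker logic) is invented for illustration:

```python
# Sketch of the scenario above: a resettable ad ID sent alongside a
# non-resettable IMEI lets a data broker re-link the "new" identity.
# Identifiers and broker behavior are invented for illustration.

broker_db = {}  # maps IMEI -> profile of observed ad IDs

def receive(ad_id, imei):
    """What a broker could do with each (ad ID, IMEI) transmission."""
    profile = broker_db.setdefault(imei, {"ad_ids": []})
    if ad_id not in profile["ad_ids"]:
        profile["ad_ids"].append(ad_id)
    return profile

# 1. The first app sends both identifiers:
receive(ad_id="ad-OLD", imei="imei-42")
# 2. The user resets her ad ID...
# 3. ...but a second app still sends the unchanged IMEI with the new ad ID:
profile = receive(ad_id="ad-NEW", imei="imei-42")

# 4. The broker links old and new ad IDs through the stable IMEI:
print(profile["ad_ids"])  # ['ad-OLD', 'ad-NEW']
```

Because the IMEI never changes, the reset accomplishes nothing: the broker simply appends the new ad ID to the profile it already holds for that device.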

In this case, sending a non-resettable identifier alongside the ad ID completely undermines the privacy-preserving properties of the ad ID: resetting it does not prevent tracking. For this reason, both iOS and Android have policies that prohibit developers from transmitting other identifiers alongside the ad ID. For example, in 2017, it was major news that Uber’s app had violated iOS App Store privacy guidelines by collecting non-resettable persistent identifiers. Tim Cook personally threatened to have the Uber app removed from the store. Similarly, Google’s Play Store policy says that the ad ID cannot be transmitted alongside other identifiers without users’ explicit consent, and that for advertising purposes, the ad ID is the only identifier that can be used:

Association with personally-identifiable information or other identifiers. The advertising identifier must not be connected to personally-identifiable information or associated with any persistent device identifier (for example: SSAID, MAC address, IMEI, etc.) without explicit consent of the user.

Abiding by the terms of use. The advertising identifier may only be used in accordance with these terms, including by any party that you may share it with in the course of your business. All apps uploaded or published to Google Play must use the advertising ID (when available on a device) in lieu of any other device identifiers for any advertising purposes.

https://play.google.com/about/monetization-ads/ads/ad-id/

Violations of Ad ID Policies

I queried the AppCensus database to examine compliance with this policy. That is, are there apps violating it by transmitting the ad ID alongside other persistent identifiers to advertisers? When I performed this experiment last September, there were approximately 24k apps in our database that we had observed transmitting the ad ID. Of these, approximately 17k (i.e., ~70%) were transmitting the ad ID alongside other persistent identifiers. Based on the data recipients of some of the most popular offenders, these transmissions are clearly being used for advertising purposes:

App Name | Installs | Data Types | Recipient
Clean Master – Antivirus, Cleaner & Booster | 1B | Ad ID + Android ID | t.appsflyer.com
Subway Surfers | 1B | Android ID | api.vungle.com
Flipboard: News For Our Time | 500M | Ad ID + Android ID | ad.flipboard.com
My Talking Tom | 500M | Ad ID + Android ID | m2m1.inner-active.mobi
Temple Run 2 | 500M | Ad ID + Android ID | live.chartboost.com
3D Bowling | 100M | Ad ID + Android ID + IMEI | ws.tapjoyads.com
8 Ball Pool | 100M | Ad ID + Android ID | ws.tapjoyads.com
Agar.io | 100M | Ad ID + Android ID | ws.tapjoyads.com
Angry Birds Classic | 100M | Android ID | ads.api.vungle.com
Audiobooks from Audible | 100M | Ad ID + Android ID | api.branch.io
Azar | 100M | Ad ID + Android ID | api.branch.io
B612 – Beauty & Filter Camera | 100M | Ad ID + Android ID | t.appsflyer.com
Banana Kong | 100M | Ad ID + Android ID | live.chartboost.com
Battery Doctor – Battery Life Saver & Battery Cooler | 100M | Ad ID + Android ID + IMEI | t.appsflyer.com
BeautyPlus – Easy Photo Editor & Selfie Camera | 100M | Ad ID + Android ID | t.appsflyer.com, live.chartboost.com
Bus Rush | 100M | Ad ID + Android ID | ads.api.vungle.com, ws.tapjoyads.com
CamScanner – Phone PDF Creator | 100M | Ad ID + Android ID + IMEI | t.appsflyer.com
Cheetah Keyboard – Emoji & Stickers Keyboard | 100M | Ad ID + Android ID | t.appsflyer.com
Cooking Fever | 100M | Ad ID + Android ID | ws.tapjoyads.com
Cut The Rope Full FREE | 100M | Ad ID + Android ID | ws.tapjoyads.com
These are just the 20 most popular apps violating this policy, sorted alphabetically. All of the domains receiving the data in the right-most column are either advertising networks or companies otherwise involved in tracking users’ interactions with ads (i.e., to use Google’s language, “any advertising purposes”). In fact, as of today, there are over 18k distinct apps transmitting the ad ID alongside other persistent identifiers.
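A check of the kind described here can be sketched as follows; the records are invented stand-ins for the AppCensus observations, not real measurement data:

```python
# Hypothetical sketch of the compliance check described above: given
# observed transmissions (app, identifier types sent, recipient domain),
# find apps that sent the ad ID alongside any other persistent identifier.
# The records below are invented; the real data came from AppCensus.

observations = [
    {"app": "game_a", "ids": {"ad_id", "android_id"}, "to": "ads.example.com"},
    {"app": "game_b", "ids": {"ad_id"}, "to": "ads.example.com"},
    {"app": "tool_c", "ids": {"ad_id", "imei"}, "to": "track.example.net"},
]

def violators(records):
    """Apps transmitting the ad ID together with another identifier."""
    return sorted({
        r["app"] for r in records
        if "ad_id" in r["ids"] and len(r["ids"]) > 1
    })

print(violators(observations))  # ['game_a', 'tool_c']
```

Here only the app that sends the ad ID by itself complies with the policy; any co-transmitted identifier, resettable or not, flags the app.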

In September, our research group reported just under 17k apps to Google that were transmitting the ad ID alongside other identifiers. The data we gave them included the data types being transmitted and a list of the recipient domains, which included some of the following companies involved in mobile advertising:

  • ad-mediation.tuanguwen.com
  • ad.adsrvr.org
  • ad.doubleclick.net
  • ad.lkqd.net
  • adc-ad-assets.adtilt.com
  • admarvel-d.openx.net
  • admediator.unityads.unity3d.com
  • adproxy.fyber.com
  • ads-roularta.adhese.com
  • ads-secure.videohub.tv
  • ads.adadapted.com
  • ads.adecosystems.net
  • ads.admarvel.com
  • ads.api.vungle.com
  • ads.flurry.com
  • ads.heyzap.com
  • ads.mopub.com
  • ads.nexage.com
  • ads.superawesome.tv
  • adtrack.king.com
  • adwatch.appodeal.com
  • amazon-adsystem.com
  • androidads23.adcolony.com
  • api.salmonads.com
  • app.adjust.com
  • init.supersonicads.com
  • live.chartboost.com
  • marketing-ssl.upsight-api.com
  • track.appsflyer.com
  • ws.tapjoyads.com

The majority of these have the word “ads” in the hostname. Looking at the traffic shows that they are either being used to place ads in apps, or track user engagement with ads.

It has been 5 months since we submitted that report, and we have not heard anything from Google about whether they plan to address this pervasive problem. In the interim, more apps now appear to be violating Google’s policy. The problem with all of this is that Google provides users with privacy controls (the ad ID reset described above), but those controls don’t actually do anything: they govern only the ad ID, and we’ve shown that in the vast majority of cases, apps collect other persistent identifiers in addition to the ad ID.

https://blog.appcensus.mobi/2019/02/14/ad-ids-behaving-badly/