The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.
The Arizona case demonstrates the promise and perils of the new investigative technique, whose use has risen sharply in the past six months, according to Google employees familiar with the requests. It can help solve crimes. But it can also snare innocent people.
The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.
Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Instagram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times, The Guardian, and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica, and they had some questions.
15 Months of Fresh Hell Inside Facebook
Scandals. Backstabbing. Resignations. Record profits. Time Bombs. In early 2018, Mark Zuckerberg set out to fix Facebook. Here’s how that turned out:
They knew that they had to respond immediately. The writ would dominate the next day’s news, and Apple had to have a response. “Tim knew that this was a massive decision on his part,” Sewell said. It was a big moment, “a bet-the-company kind of decision.” Cook and the team stayed up all night—a straight 16 hours—working on their response. Cook already knew his position—Apple would refuse—but he wanted to know all the angles: What was Apple’s legal position? What was its legal obligation? Was this the right response? How should it sound? How should it read? What was the right tone?
iOS 8 added much stronger encryption than had been seen before in smartphones. It encrypted all the user’s data—phone call records, messages, photos, contacts, and so on—with the user’s passcode. The encryption was so strong, not even Apple could break it. Security on earlier devices was much weaker, and there were various ways to break into them, but Apple could no longer access locked devices running iOS 8, even if law enforcement had a valid warrant. “Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data,” the company wrote on its website. “So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.”
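Apple’s real design entangles the passcode with a device-unique hardware key inside the phone, so the sketch below is only a simplified illustration of the core idea: derive the encryption key from the passcode itself with a deliberately slow key-derivation function, so that nobody who lacks the passcode, Apple included, can reconstruct the key. The salt and iteration count here are illustrative values, not Apple’s.

```python
import hashlib

def derive_key(passcode: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2 stretches the passcode into a 256-bit key; the high iteration
    # count makes every brute-force guess deliberately expensive.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

salt = b"per-device-random-salt"   # stored on the device; not secret
key = derive_key("123456", salt)

# Only the correct passcode reproduces the key; there is no master key
# the vendor could substitute in response to a warrant.
assert derive_key("123456", salt) == key
assert derive_key("123457", salt) != key
```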
The War Room
For the next two months, the executive floor at One Infinite Loop turned into a 24/7 situation room, with staffers sending out messages and responding to journalists’ queries. One PR rep said that they were sometimes sending out multiple updates a day with up to 700 journalists cc’d on the emails. This is in stark contrast to Apple’s usual PR strategy, which consists of occasional press releases and routinely ignoring reporters’ calls and emails.
Cook also felt he had to rally the troops, to keep morale high at a time when the company was under attack. In an email to Apple employees, titled “Thank you for your support,” he wrote, “This case is about much more than a single phone or a single investigation.” He continued, “At stake is the data security of hundreds of millions of law-abiding people and setting a dangerous precedent that threatens everyone’s civil liberties.” It worked. Apple employees trusted their leader to make the decision that was right not only for them but also for the general public.
Cook was very concerned about how Apple would be perceived throughout this media firestorm. He wanted very much to use it as an opportunity to educate the public about personal security, privacy, and encryption. “I think a lot of reporters saw a new version, a new face of Apple,” said the PR person, who asked to remain anonymous. “And it was Tim’s decision to act in this fashion. Very different from what we have done in the past. We were sometimes sending out emails to reporters three times a day on keeping them updated.”
Outside Apple’s walls, Cook went on a charm offensive. Eight days after publishing his privacy letter, he sat down for a prime-time interview with ABC News. Sitting in his office at One Infinite Loop, he sincerely explained Apple’s position. It was the “most important [interview] he’s given as Apple’s CEO,” said The Washington Post. “Cook responded to questions with a raw conviction that was even more emphatic than usual,” wrote the paper. “He used sharp and soaring language, calling the request the ‘software equivalent of cancer’ and talking about ‘fundamental’ civil liberties.”
What gaming will look like in a year or two, let alone 10, is a matter of some debate. Battle-royale games have reshaped multiplayer experiences; augmented reality marries the fantastic and real in unprecedented ways. Google is leading a charge away from traditional consoles by launching a cloud-gaming service, Stadia, later this year. Microsoft’s next version of the Xbox will presumably integrate cloud gaming as well to allow people to play Xbox games on multiple devices. Sony’s plans in this regard are still unclear—it’s one of the many things Cerny is keeping mum on, saying only that “we are cloud-gaming pioneers, and our vision should become clear as we head toward launch”—but it’s hard to think there won’t be more news coming on that front.
For now, there’s the living room. It’s where the PlayStation has sat through four generations—and will continue to sit at least one generation more.
Its surprise settlement with Qualcomm on Tuesday over a yearslong patent spat means it’s now in a position to keep pace with its competitors to bring a 5G-ready iPhone to market as soon as this year.
But even though Apple may win by getting a 5G iPhone to customers sooner than most people anticipated, it lost by settling with a company it loathes. Getting the iPhone to 5G means Apple was put in a sticky situation where it had to weigh four less-than-ideal options to make it all a reality.
In the end, Apple had to choose the lesser of all evils:
Option one: Settle with Qualcomm, the leader in 5G chips. Qualcomm’s 5G chips are already shipping in some devices today, with more expected as the year rolls on.
But Apple has seen Qualcomm’s business model as detrimental to the entire industry since it uses its dominant position to squeeze large fees out of each company that uses its chips and patents. Hence that nasty lawsuit. Apple CEO Tim Cook made his disdain for Qualcomm’s practices known in a January interview with CNBC’s Jim Cramer, and even blasted Qualcomm’s decision to hire a PR firm to write fake news stories about Apple, which Business Insider reported.
Option two: Wait for Intel to catch up in 5G. Even before Intel announced Tuesday night that it would abandon its plans to make 5G modems, there was speculation that the company was running behind to deliver the chips on time. Apple has been exclusively using Intel’s 4G modems in its latest iPhones as its dispute with Qualcomm raged on. If that dispute continued, a 5G iPhone might not have been possible until 2020 or even 2021.
Option three: Buy 5G modems from Huawei. Huawei makes its own 5G chips, but the company still can’t sell products in the U.S. over security concerns.

Option four: Apple could make its own 5G chips. Apple is thought to be working on its own modems after opening an office in San Diego, Qualcomm’s hometown, and posting job listings for modem chip designers. But it would likely take Apple several years to develop its own 5G chip, putting it several years behind its rivals.
None of those options were ideal for Apple. It could’ve waited an extra year or two for Intel to get its 5G chips up to snuff. It could’ve waited several more years to develop a 5G chip of its own as competitors like Google and Samsung push out their 5G devices and market themselves as more innovative than Apple. It could’ve worked with Huawei, a company that still can’t sell products in the U.S. over security concerns.
Or it could’ve ended its dispute with Qualcomm, even if Cook is allergic to its business practices. Unfortunately for Apple, Qualcomm was the best bet.
Tuesday’s settlement could result in a 5G iPhone as soon as this fall, when Apple is expected to release its next iPhone. (For what it’s worth, timing on a 5G iPhone is still unclear. Qualcomm CEO Steve Mollenkopf said in an interview Wednesday on CNBC’s “Squawk Box” that he couldn’t comment on Apple’s product plans that include Qualcomm chips.)
Qualcomm gets to take a victory lap this week. Its lead in 5G forced a settlement with Apple and added a massive boost to its stock. Qualcomm shares were up 12% Wednesday, adding to a 23% gain Tuesday. Intel was up about 4%. Apple was up just 1%.
The market agrees. Apple was the loser in this fight.
If you aren’t already protecting your most personal accounts with two-factor or two-step authentication, you should be. An extra line of defense that’s tougher than the strongest password, 2FA is extremely important to blocking hacks and attacks on your personal data. If you don’t quite understand what it is, we’ve broken it all down for you.
Two-factor authentication is basically a combination of two of the following factors:
Something you know
Something you have
Something you are
Something you know is your password, so 2FA always starts there. Rather than let you into your account once your password is entered, however, two-factor authentication requires a second set of credentials, like when the DMV wants your license and a utility bill. So that’s where factors 2 and 3 come into play. Something you have is your phone or another device, while something you are is your face, irises, or fingerprint. If you can’t provide authentication beyond the password alone, you won’t be allowed into the service you’re trying to log into.
There are several options for the second factor: SMS, authenticator apps, Bluetooth-, USB-, and NFC-based security keys, and biometrics. Let’s take a look at each so you can decide which is best for you.
Two-factor authentication: SMS
What it is: The most common “something you have” second authentication method is SMS. A service will send a text to your phone with a numerical code, which then needs to be typed into the field provided. If the codes match, your identification is verified and access is granted.
How to set it up: Nearly every two-factor authentication system uses SMS by default, so there isn’t much to do beyond flipping the toggle or switch to turn on 2FA on the chosen account. Depending on the app or service, you’ll find it somewhere in settings, under Security if the tab exists. Once activated you’ll need to enter your password and a mobile phone number.
How it works: When you turn on SMS-based authentication, you’ll receive a code via text that you’ll need to enter after you type your password. That protects you against someone randomly logging into your account from somewhere else, since your password alone is useless without the code. While some apps and services solely rely on SMS-based 2FA, many of them offer numerous options, even if SMS is selected by default.
How secure it is: By definition, SMS authentication is the least secure method of two-factor authentication. Your phone can be cloned or just plain stolen, SMS messages can be intercepted, and by nature most default messaging apps aren’t encrypted. So the code that’s sent to you could possibly fall into someone’s hands other than yours. It’s unlikely to be an issue unless you’re a valuable target, however.
How convenient it is: Very. You’re likely to always have your phone within reach, so the second authentication is super convenient, especially if the account you’re signing into is on your phone.
Should you use it? Any two-factor authentication is better than none, but if you’re serious about security, SMS won’t cut it.
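The server side of this flow is straightforward: generate a short-lived random code, hand it to an SMS gateway, and compare what the user types. A minimal sketch of that lifecycle (the actual texting step is stubbed out, and a real service would also rate-limit guesses):

```python
import secrets
import time

CODE_TTL = 300  # seconds; codes expire after five minutes

def issue_code():
    # Use secrets (not random) so the 6-digit code is unpredictable.
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + CODE_TTL

def verify(submitted, code, expires_at):
    # compare_digest avoids leaking matching digits through timing.
    return time.time() < expires_at and secrets.compare_digest(submitted, code)

code, expires_at = issue_code()
# ... here a real service would hand `code` to an SMS gateway ...
assert verify(code, code, expires_at)   # correct code, in time
assert not verify(code, code, 0)        # expired code is rejected
```

Note that nothing in this flow protects the code in transit, which is exactly the weakness described above: whoever receives the text, legitimate owner or not, passes the check.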
Two-factor authentication: Authenticator apps
What it is: Like SMS-based two-factor authentication, authenticator apps generate codes that need to be inputted when prompted. However, rather than sending them over unencrypted SMS, they’re generated within an app, and you don’t even need an Internet connection to get one.
How to set it up: To get started with an authenticator app, you’ll need to download one from the Play Store or the App Store. Google Authenticator works great for your Google account and anything you use it to log into, but there are other great ones as well, including Authy, LastPass, and Microsoft, plus apps from a slew of individual companies, such as Blizzard, Sophos, and Salesforce. If an app or service supports authenticator apps, it’ll supply a QR code that you can scan or enter on your phone.
How it works: When you open your chosen authenticator app and scan the service’s QR code, a six-digit code will appear, just like with SMS 2FA. Enter that code into the service and you’re good to go. After the initial setup, you’ll be able to open the app to get a fresh code whenever you need one, no QR scan required.
How secure it is: Unless someone has access to your phone or whatever device is running your authenticator app, it’s completely secure. Since codes are randomized within the app and aren’t delivered over SMS, there’s no way for prying eyes to steal them. For extra security, Authy allows you to set PIN and password protection, too, something Google doesn’t offer on its authenticator app.
How convenient it is: While opening an app is slightly less convenient than receiving a text message, authenticator apps don’t take more than a few seconds to use. They’re far more secure than SMS, and you can use them offline if you ever run into an issue where you need a code but have no connection.
Should you use it? An authenticator app strikes the sweet spot between security and convenience. While you might find some services that don’t support authenticator apps, the vast majority do.
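Under the hood, most authenticator apps implement the TOTP standard (RFC 6238): the QR code carries a shared secret, and both the app and the server derive the same six-digit code from that secret plus the current 30-second time window, which is why no network connection is needed. A minimal implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 over a big-endian counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation picks 4 bytes of the MAC
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step=30) -> str:
    """Time-based code per RFC 6238: the counter is just 30-second ticks."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)

# RFC 6238 test secret; 59 seconds after the epoch the 6-digit code is 287082.
assert totp(b"12345678901234567890", at=59) == "287082"
```

Because both sides compute the code independently from the shared secret and the clock, there is nothing in transit for an attacker to intercept, unlike an SMS code.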
Two-factor authentication: Universal second factor (security key)
What it is: Unlike SMS- and authenticator-based 2FA, universal second factor is truly a “something you have” method of protecting your accounts. Instead of a digital code, the second factor is a hardware-based security key. You’ll need to order a physical key to use it, which will connect to your phone or PC via USB, NFC, or Bluetooth.
You can buy a Titan Security Key bundle from Google for $50, which includes a USB-A security key and a Bluetooth security key along with a USB-A-to-USB-C adapter, or buy one from Yubico. An NFC-enabled key is recommended if you’re going to be using it with a phone.
How to set it up: Setting up a security key is basically the same as the other methods, except you’ll need a computer. You’ll need to turn on two-factor authentication and then select the “security key” option, if it’s available. Most popular services, such as Twitter, Facebook, and Google, support security keys, so your most vulnerable accounts should be all set. However, while Chrome, Firefox, and Microsoft’s Edge browser all support security keys, Apple’s Safari browser does not, so you’ll be prompted to switch during setup.
Once you reach the security settings page for the service you’re enabling 2FA with, select security key, and follow the prompts. You’ll be asked to insert your key (so make sure you have a USB-C adapter on hand if you have a MacBook) and press the button on it. That will initiate the connection with your computer, pair your key, and in a few seconds your account will be ready to go.
How it works: When an account requests 2FA verification, you’ll need to plug your security key into your phone or PC’s USB-C port or (if supported) tap it to the back of your NFC-enabled phone. Then it’s only a matter of pressing the button on the key to establish the connection and you’re in.
How secure it is: Extremely. Since all of the login authentication is stored on a physical key that is either on your person or stored somewhere safe, the odds of someone accessing your account are extremely low. To do so, they would need to steal your password and the key to access your account, which is very unlikely.
How convenient it is: Not very. When you log into one of your accounts on a new device, you’ll need to type your password and then authenticate it via the hardware key, either by inserting it into your PC’s USB port or pressing it against the back of an NFC-enabled phone. Neither method takes more than a few seconds, though, provided you have your security key within reach.
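What makes a security key so strong is challenge-response: the service sends a fresh random challenge, and the key proves possession of a secret by answering it, so nothing reusable ever crosses the wire. Real keys (FIDO U2F/WebAuthn) use per-site public-key signatures; the sketch below substitutes an HMAC over a shared secret purely to show the shape of the protocol, not the real cryptography:

```python
import hashlib
import hmac
import secrets

KEY_SECRET = secrets.token_bytes(32)  # lives only inside the hardware key

def key_sign(challenge: bytes) -> bytes:
    # Runs on the key itself after the user presses its button;
    # the secret never leaves the device.
    return hmac.new(KEY_SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    # A real U2F server holds only a public key and checks a signature;
    # sharing the secret here is a simplification for illustration.
    expected = hmac.new(KEY_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(32)  # fresh per login, so replays fail
assert server_verify(challenge, key_sign(challenge))
assert not server_verify(secrets.token_bytes(32), key_sign(challenge))
```

Because each response is valid only for one freshly generated challenge, a phishing site that captures a response can’t reuse it, which is why keys resist attacks that defeat SMS and TOTP codes.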
Two-factor authentication: Google Advanced Protection Program
What it is: If you want to completely lock down your most important data, Google offers the Advanced Protection Program for your Google account, which disables everything except security key-based 2FA. It also limits access to your emails and Drive files to Google apps and select third-party apps, and shuts down web access to browsers other than Chrome and Firefox.
How to set it up: You’ll need to make a serious commitment. To enroll in Google Advanced Protection, you’ll need to purchase two Security Keys: one as your main key and one as your backup key. Google sells its own Titan Security Key bundle, but you can also buy a set from Yubico or Feitian.
Once you get your keys, you’ll need to register them with your Google account and then agree to turn off all other forms of authentication. But here’s the rub: To ensure that every one of your devices is properly protected, Google will log you out of every account on every device you own so you can log in again using Advanced Protection.
How it works: Advanced Protection works just like regular security key-based 2FA, except you won’t be able to fall back on a different method if you forget or lose your security key.
How secure it is: Google Advanced Protection is basically impenetrable. By relying solely on security keys, it makes sure that no one will be able to access your account without both your password and physical key, which is extremely unlikely.
How convenient it is: By nature, Google Advanced Protection is supposed to make it difficult for hackers to access your Google account and anything associated with it, so naturally it’s not so easy for the user either. Since there’s no fallback authentication method, you’ll need to remember your key whenever you leave the house. And when you run into a roadblock—like the Safari browser on a Mac—you’re pretty much out of luck. But if you want your account to have the best possible protection, accept no substitute.
Two-factor authentication: Biometrics
What it is: A password-free world where all apps and services are authenticated by a fingerprint or facial scan.
How to set it up: You can see biometrics at work when you opt to use the fingerprint scanner on your phone or Face ID on the iPhone XS, but at the moment, biometric security is little more than a replacement for your password after you log in and verify via another 2FA method.
How it works: Like the way you use your fingerprint or face to unlock your smartphone, biometric 2FA uses your body’s unique characteristics as your password. Your Google account would know it was you based on the scan you registered during setup, automatically allowing access whenever it recognized you.
How secure it is: Since it’s extremely difficult to clone your fingerprint or face, biometric authentication is the closest thing to a digital vault.
How convenient it is: You can’t go anywhere without your fingerprint or your face, so it doesn’t get more convenient than that.
Two-factor authentication: iCloud
What it is: Apple has its own method of two-factor authentication for your iCloud and iTunes accounts that involves setting up trusted Apple devices (iPhone, iPad, or Mac—Apple Watch isn’t supported) that can receive verification codes. You can also set up trusted numbers to receive SMS codes or get verification codes via an authenticator app built into the Settings app.
How to set it up: As long as you’re logged in to your iCloud account, you can turn on two-factor authentication from pretty much any of your devices. Just go into Settings on your iOS device or System Preferences on your Mac, then Security, and Turn On Two-Factor Authentication. From there, you can follow the prompts to set up your trusted phone number and devices.
How it works: When you need to access an account protected by 2FA, Apple will send a code to one of your trusted devices. If you don’t have a second Apple device, Apple will send you a code via SMS, or you can get one from the Settings app on your iPhone or System Preferences on your Mac.
How secure it is: It depends on how many Apple devices you own. If you own more than one Apple device, it’s very secure. Apple will send a code to one of your other devices whenever you or someone else tries to log into your account or one of Apple’s services on a new device. It even tells you the location of the request, so if you don’t recognize it you can instantly reject it, before the code even appears.
If you only have one device, you’ll have to use SMS or Apple’s built-in authenticator, neither of which is all that secure, especially since both are likely to happen on the same device. Also, in a weird snafu, Apple sends the 2FA access code to the very device you’re using when you manage your account in a browser, which also defeats the purpose of 2FA.
How convenient it is: If you’re using an iPhone and have an iPad or Mac nearby, the process takes seconds, but if you don’t have an Apple device within reach or are away from your keyboard, it can be tedious.
The outages hit in the summer of 1991. Over several days, phone lines in major metropolises went dead without warning, disrupting emergency services and even air traffic control, often for hours. Phones went down one day in Los Angeles, then on another day in Washington, DC and Baltimore, and then in Pittsburgh. Even after service was restored to an area, there was no guarantee the lines would not fail again—and sometimes they did. The outages left millions of Americans disconnected.
The culprit? A computer glitch. A coding mistake in software used to route calls for a piece of telecom infrastructure known as Signaling System No. 7 (SS7) caused network-crippling overloads. It was an early sign of the fragility of the digital architecture that binds together the nation’s phone systems.
Leaders on Capitol Hill called on the one agency with the authority to help: the Federal Communications Commission (FCC). The FCC made changes, including new outage reporting requirements for phone carriers. To help the agency respond to digital network stability concerns, the FCC also launched an outside advisory group—then known as the Network Reliability Council but now called the Communications Security, Reliability, and Interoperability Council (CSRIC, pronounced “scissor-ick”).
Yet decades later, SS7 and other components of the nation’s digital backbone remain flawed, leaving calls and texts vulnerable to interception and disruption. Instead of facing the challenges of our hyper-connected age, the FCC is stumbling, according to documents obtained by the Project On Government Oversight (POGO) and through extensive interviews with current and former agency employees. The agency is hampered by a lack of leadership on cybersecurity issues and a dearth of in-house technical expertise that all too often leaves it relying on security advice from the very companies it is supposed to oversee.
CSRIC is a prime example of this so-called “agency capture”—the group was set up to help supplement FCC expertise and craft meaningful rules for emerging technologies. But instead, the FCC’s reliance on security advice from industry representatives creates an inherent conflict of interest. The result is weakened regulation and enforcement that ultimately puts all Americans at risk, according to former agency staff.
While the agency took steps to improve its oversight of digital security issues under the Obama administration, many of these reforms have been walked back under current Chairman Ajit Pai. Pai, a former Verizon lawyer, has consistently signaled that he doesn’t want his agency to play a significant role in the digital security of Americans’ communications—despite security being a core agency responsibility since the FCC’s inception in 1934.
The FCC’s founding statute charges it with crafting regulations that promote the “safety of life and property through the use of wire and radio communications,” giving it broad authority to secure communications. Former FCC Chairman Tom Wheeler and many legal experts argue that this includes cyber threats.
As a regulator, the FCC carries a stick: it can hit communications companies with fines if they don’t comply with its rules. That responsibility is even more important now that “smart” devices are networking almost every aspect of our lives.
But not everyone thinks the agency’s mandate is quite so clear, especially in the telecom industry. Telecom companies fight back hard against regulation; over the last decade, they spent nearly a billion dollars lobbying Congress and federal agencies, according to data from OpenSecrets. The industry argues that the FCC’s broad mandate to secure communications doesn’t extend to cybersecurity, and it has pushed for oversight of cybersecurity to come instead from other parts of government, typically the Department of Homeland Security (DHS) or the Federal Trade Commission (FTC)—neither of which is vested with the same level of rule-making powers as the FCC.
To Wheeler, himself the former head of industry trade group CTIA, the push toward DHS seemed like an obvious ploy. “The people and companies the FCC was charged with regulating wanted to see if they could get their jurisdiction moved to someone with less regulatory authority,” he told POGO.
But Chairman Pai seems to agree with industry. In a November 2018 letter to Senator Ron Wyden (D-Ore.) about the agency’s handling of SS7 problems, provided to POGO by the senator’s office, Pai wrote that the FCC “plays a supporting role, as a partner with DHS, in identifying vulnerabilities and working with stakeholders to increase security and resiliency in communications network infrastructure.”
The FCC declined to comment for this story.
Failing to protect the “crown jewels” of telecom
How the telecom industry leveraged lawmakers’ calls for FCC reform in the wake of the SS7 outages is a case study in how corporate influence can overcome even the best of the government’s intentions.
From the beginning, industry representatives dominated membership of the advisory group now known as CSRIC—though, initially, the group only provided input on a small subset of digital communications issues. Over time, as innovations in communications raced forward with the expansion of cellular networks and the Internet, the FCC’s internal technical capabilities didn’t keep up: throughout the 1990s and early 2000s, the agency’s technical expertise was largely limited to telephone networks while the world shifted to data networks, former staffers told POGO. The few agency staffers with expertise on new technologies were siloed in different offices, making it hard to coordinate a comprehensive response to the paradigm shift in communication systems. That gap left the agency increasingly dependent on advice from CSRIC.
During the early 1990s, the SS7-based software system was just coming into wide use. Today, though, it is considered outdated and insecure. Despite that, carriers still use the technology as a backup in their networks. This leaves the people who rely on those networks vulnerable to the technology’s problems, as Jonathan Mayer, a Princeton computer science and public affairs professor and former FCC Enforcement Bureau chief technologist, explained during a Congressional hearing in June 2018.
Unlike in the 1990s, the risks now go much deeper than just service disruption. Researchers have long warned that flaws in the system allow cybercriminals or hackers—sometimes working on behalf of foreign adversaries—to turn cell phones into sophisticated geo-tracking devices or to intercept calls and text messages. Security problems with SS7 are so severe that some government agencies and some major companies like Google are moving away from using codes sent via text to help secure important accounts, such as those for email or online banking.
A panel advising President Bill Clinton raised the alarm back in 1997, saying that SS7 was among America’s networking “crown jewels” and warning that if those crown jewels were “attacked or exploited, [it] could result in a situation that threatened the security and reliability of the telecommunications infrastructure.” By 2001, security researchers argued that risks associated with SS7 were multiplying thanks to “deregulation” and “the Internet and wireless networks.” They were proved right in 2008 when other researchers demonstrated ways that hackers could use flaws in SS7 to pinpoint the location of unsuspecting cell phone users.
By 2014, it was clear that foreign governments had caught on to the disruptive promise of the problem. That year, Russian intelligence used SS7 vulnerabilities to attack a Ukrainian telecommunications company, according to a report published by NATO’s Cooperative Cyber Defence Centre of Excellence, and more research about SS7 call interception made headlines in The Washington Post and elsewhere.
Despite the increasingly dire stakes, the FCC didn’t pay much attention to the issue until the summer of 2016, after Rep. Ted Lieu (D-Calif.) allowed 60 Minutes to demonstrate how researchers could use security flaws in the SS7 protocol to spy on his phone. The FCC—then led by Wheeler—responded by essentially passing the buck to CSRIC. It created a working group to study and make security recommendations about SS7 and other so-called “legacy systems.” The result was a March 2017 report with non-binding guidance about best practices for securing against SS7 vulnerabilities, a non-public report, and the eventual creation of yet another CSRIC working group to study similar security issues.
A POGO analysis of CSRIC membership in recent years shows that its membership, which is solely appointed by the FCC chairman, leans heavily toward industry. And the authorship of the March 2017 report was even more lopsided than CSRIC overall. Of the twenty working-group members listed in the final report, only five were from the government, including four from the Department of Homeland Security. The remaining fifteen represented private-sector interests. None were academics or consumer advocates.
The working group’s leadership was drawn entirely from industry. The group’s co-chairs came from networking infrastructure company Verisign and iconectiv, a subsidiary of Swedish telecom company Ericsson. The lead editor of the group’s final report was CTIA Vice President for Technology and Cyber Security John Marinho.
Emails from 2016 between working group members, obtained by POGO via a Freedom of Information Act request, show that the group dragged its feet on resolving SS7 security vulnerabilities despite urging from FCC officials to move quickly. The group also repeatedly ignored input from DHS technical experts.
The problem wasn’t figuring out a fix, however, according to David Simpson, a retired rear-admiral who led the FCC’s Public Safety and Homeland Security Bureau at the time. The group was quickly able to discern some best practices—primarily through using different filtering systems—that some major carriers had already deployed and that others could use to mitigate the risks associated with SS7.
“We knew the answer within the first couple months from the technical experts in the working groups,” said Simpson, who consulted with the Working Group. But ultimately, the “consensus orientation of the CSRIC unfortunately allowed” the final report to be pushed from the lame-duck session into the Trump administration—which is not generally inclined toward introducing new federal regulations.
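The "filtering systems" the working group's experts converged on amount to screening inbound signaling messages at the network edge. A minimal sketch of the idea, loosely modeled on the industry's later categorization of SS7 MAP operations (messages that should never cross a network boundary versus messages that are legitimate only from a roaming partner). The operation names, category assignments, and data shapes here are illustrative, not an authoritative rule set:

```python
# Toy sketch of category-based SS7 screening. Category 1: operations with no
# legitimate reason to arrive from another network (e.g., location queries
# like anyTimeInterrogation); a screening node can drop these outright.
CAT1_BLOCK = {"anyTimeInterrogation", "sendIMSI"}

# Category 2: operations that should arrive only from a network where one of
# our subscribers is actually roaming.
CAT2_ROAMING_ONLY = {"provideRoamingNumber", "insertSubscriberData"}

def screen(message, roaming_partners):
    """Return 'allow' or 'block' for an inbound MAP message (a dict with
    'operation' and 'origin_network' keys in this toy model)."""
    op = message["operation"]
    if op in CAT1_BLOCK:
        return "block"
    if op in CAT2_ROAMING_ONLY and message["origin_network"] not in roaming_partners:
        return "block"
    return "allow"

# A location-tracking query from an unknown network is dropped...
print(screen({"operation": "anyTimeInterrogation", "origin_network": "XX"}, set()))   # block
# ...while a routine roaming message from a known partner passes.
print(screen({"operation": "provideRoamingNumber", "origin_network": "DE-1"}, {"DE-1"}))  # allow
```

Real deployments are far more involved (Category 3 messages require plausibility checks on a subscriber's location and velocity, for instance), but the basic mechanism is the point: the mitigation was understood, and some major carriers had already deployed it.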
Overall, POGO’s analysis of emails from the group and interviews with former FCC staff found that industry dominance of CSRIC appears to have contributed to a number of issues with the process and the final report, including:
Industry members of the working group successfully pushed for the final recommendations to rely on voluntary compliance, according to former FCC staffers. Security experts say that strategy ultimately leaves the entire cellular network at risk because there are thousands of smaller providers, often in rural areas, that are unlikely to prioritize rolling out the needed protections without a firm rule.
An August 2016 email shows that, early on in the process, DHS experts objected to describing the working group’s focus as being on “legacy” systems because it “conveys a message that these protocols and associated threats are going away soon and that’s not necessarily the case.” The group did not revise the legacy language, and it remained in the final report.
In an email from September 2016, an FCC official emailed Marinho, noting that edits from DHS were not being incorporated into the working draft. Marinho responded that he received them too late and planned to incorporate them in a later version. However, in a May 2018 letter to the FCC, Senator Wyden said DHS officials told his office that “the vast majority of edits to the final report” suggested by DHS experts “were rejected.”
In the emails obtained by POGO, Marinho also refers to warnings about security issues with SS7 that came from panelists at an event organized by George Washington University’s Cybersecurity Strategy and Information Management Program as “hyperbolic.”
Marinho did not respond to a series of specific questions about the Working Group’s activities. In a statement to POGO, CTIA said, “[t]he wireless industry is committed to safeguarding consumer security and privacy and collaborates closely with DHS, the FCC, and other stakeholders to combat evolving threats that could impact communications networks.”
The working group’s report acknowledged that problems remained with SS7, but it recommended voluntary measures and put the onus on telecom users to take extra steps like using apps that encrypt their phone calls and texts.
Criminals, terrorists, and spies
Just a month after the CSRIC working group released its SS7 report, DHS took a much more ominous tone, releasing a report that warned that SS7 “vulnerabilities can be exploited by criminals, terrorists, and nation-state actors/foreign intelligence organizations” and said that “many organizations appear to be sharing or selling expertise and services that could be used to spy on Americans.”
DHS wanted action.
“New laws and authorities may be needed to enable the Government to independently assess the national security and other risks associated with SS7” and other communications protocols, the agency wrote.
But DHS also admitted it wasn’t necessarily the agency that would take the lead. A footnote in that section reads: “Federal agencies such as the FCC and FTC may have authorities over some of these issues.”
The telecom industry's main trade group, CTIA, disagreed. "Congress and the Administration should reject the [DHS] Report's call for greater regulation," the trade group wrote.
When CSRIC created yet another working group in late 2017 to continue studying network reliability issues during Pai’s tenure, the DHS experts who objected to the previous working group’s report were “not invited back to participate,” according to Wyden’s May 2018 letter. The final report from that working group lists just one representative from DHS, compared to four in the previous group.
When reached for comment, DHS did not directly address questions about the agency’s experience with CSRIC.
Aside from DHS and individual members of Congress, other parts of the US government have signaled concerns about SS7. For example, as the initial CSRIC working group began its review in the summer of 2016, the National Institute of Standards and Technology (NIST), the agency that sets technical standards and best practices for the federal government, released draft guidance echoing warnings from Google and other tech companies: because of SS7's security flaws, people should not rely on text messages to verify their identity for online accounts and services.
But the draft drew pushback from the telecom industry, including CTIA.
"There is insufficient evidence at this time to support removing [text message] authentication in future versions of the Digital Identity Guidelines," CTIA argued in comments on the NIST draft. After the pushback, NIST caved, dropping the warning about text-message authentication from the final version of its guidance.
While the government was deliberating, criminals were finding ways to exploit SS7 flaws.
In the summer of 2017, German researchers found that hackers used vulnerabilities in SS7 to drain victims’ bank accounts—exploiting essentially the same type of problems that NIST tried to flag in the scrapped draft guidance.
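NIST's concern was narrow and technical: a one-time code sent over SMS travels through the carrier's signaling network, where SS7 flaws let an attacker intercept it, while a code computed on the device itself never touches that network. A minimal sketch of the app-based alternative NIST's guidance pointed toward, the time-based one-time password algorithm standardized in RFC 6238, using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    The shared secret is provisioned once (e.g., via QR code); after that,
    codes are derived locally from the secret and the current time, so
    nothing secret ever crosses the cellular signaling network.
    """
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))       # 8-byte big-endian time step
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

# RFC 6238's published test vector: the ASCII secret "12345678901234567890"
# at time 59 yields the 8-digit code 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

The server computes the same code from its copy of the secret and checks for a match, so interception of the SMS channel buys an attacker nothing.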
“This threat is not merely hypothetical—malicious attackers are already exploiting SS7 vulnerabilities,” Wyden wrote. “One of the major wireless carriers informed my office that it reported an SS7 breach, in which customer data was accessed, to law enforcement” using a portal managed by the FCC, he wrote.
The details of that incident remain unclear, presumably due to an ongoing investigation. However, the senator’s alarm highlights the fact that SS7 continues to put America’s economic and national security at risk.
Indeed, a report by The New York Times in October 2018 suggests that even the president’s own communications are vulnerable due to security problems in cellular networks, potentially including SS7. Chinese and Russian intelligence have gained valuable information about the president’s policy deliberations by intercepting calls made on his personal iPhone “as they travel through the cell towers, cables, and switches that make up national and international cellphone networks,” the Times reported.
DHS's Cybersecurity and Infrastructure Security Agency (CISA) responded to questions for this story with a general statement. "CISA works regularly with the FCC and the communications sector to address security vulnerabilities and enhance the resilience of the nation's communications infrastructure," the agency said. "Our role as the nation's risk advisor includes working with companies to exchange threat information, mitigate vulnerabilities, and provide incident response upon request."
Other than efforts to reduce the reliance of US networks on technology from Chinese manufacturers, driven by fears about “supply chain security,” the FCC has largely abandoned its responsibility for protecting America’s networks from looming digital threats.
As the FCC's engagement on cybersecurity has waned, so has CSRIC's activity. CSRIC VI, whose members were appointed by current Chairman Pai, chartered fewer than a third as many working groups as its predecessor.
CSRIC VI held its final meeting in March 2019. It's unclear who will serve on the group's seventh iteration, or whether they will put the public's interests ahead of the telecom industry's.