Cyberattacks continue to grow in scale, ferocity, and audacity. No one is safe. Large corporations are targets because hackers see the potential payoff as huge. Small companies are vulnerable too because they lack the financial muscle to invest in sophisticated security systems. Now more than ever, businesses must do whatever it takes to keep their data and tech infrastructure safe. If non-technical employees understand key cybersecurity terms, they have a much better chance of making the right security decisions. There are thousands of cybersecurity terms, and no one, techie or otherwise, is obliged to know them all. Some terms are more important than others, however, and these are the ones all staff must be aware of.
Note that knowing these cybersecurity terms is more than just mastering the definitions. Rather, it’s being able to understand the patterns and behavior that define them.
1. Adware

Adware is a set of programs installed without explicit user authorization that inundate the user with ads. The primary aim of adware is to redirect search requests and URL clicks to advertising websites and data-collection portals.
While adware mainly aims to advertise a product and monitor user browsing activity, it also slows browsing and page-load speeds, degrades device performance, eats into metered data, and may even download malicious applications in the background.
2. Botnets

A botnet is a collection of Internet-enabled devices, sometimes numbering in the millions, such as computers, smartphones, servers, routers, and IoT devices, that are under a central command and control.
Botnets are infectious and can be propagated across multiple devices. Botnet is a portmanteau of “robot” and “network.” Some of the largest and most dramatic cyberattacks in recent times have involved botnets, including the destructive Mirai malware that infected IoT devices.
3. Cyber-espionage

When you hear the term espionage, what first comes to mind is the spycraft of a bygone era. But espionage is as alive today as it was a century ago. The difference is that, thanks to the proliferation of information technology and the ubiquity of the Internet, espionage can now be executed electronically and remotely.
Cyber-espionage is the gathering of confidential information online via illegal and unauthorized means. As you’d expect, the primary targets of cyber-espionage are governments and large corporations. China has been in the news in this regard, though other world powers such as the United States and Russia have been accused of doing the same at some point.
4. Defense-in-depth

Defense-in-depth is a cybersecurity strategy that involves creating multiple layers of defense to protect the organization and its assets from attack. It’s born out of the realization that even with the best and most sophisticated technical controls, no security is ever 100 percent impenetrable.
With defense-in-depth, if one security control fails to prevent unauthorized access, the intruder will run into a new barrier. It’s unlikely that many hackers will have the knowledge and skills to surmount these multiple barriers.
5. End-to-end encryption
End-to-end encryption is a means of securing and protecting data that prevents unauthorized third parties from accessing it at rest or in transit. For instance, when you shop online and pay with your credit card, your computer or smartphone has to relay the credit card number you provide to the merchant for authentication and payment processing.
If your card details fall into the wrong hands, someone could use them to make purchases without your permission. By encrypting the data during transmission, you make it far harder for third parties to access your confidential information.
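The idea can be sketched with a toy symmetric cipher in Python. This is purely illustrative (the key and card number are made up, and the keystream construction is a teaching device, not a vetted cipher); real systems rely on TLS and reviewed algorithms such as AES-GCM.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the shared key (toy construction).
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the data with the keystream; applying it twice decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

key = b"shared-secret"                    # known only to sender and recipient
card_number = b"4111 1111 1111 1111"
ciphertext = encrypt(key, card_number)    # unreadable to anyone sniffing the wire
recovered = encrypt(key, ciphertext)      # only the key holder can recover it
```

Without the key, an eavesdropper who intercepts `ciphertext` in transit learns nothing useful; with it, the merchant recovers the original number exactly.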
6. Firewall

A firewall is a defense mechanism meant to keep the bad guys out of your network. It’s a virtual wall that protects servers and workstations from internal and external attack. It keeps tabs on access requests, user activity, and network traffic patterns to determine who can and cannot interact with the network.
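At its core, a firewall evaluates traffic against an ordered rule set, with the first matching rule deciding the outcome. A minimal sketch of that logic (the rules and addresses here are hypothetical, not a real product's configuration):

```python
# Each rule pairs a match predicate on (source IP, destination port) with an action.
# Rules are evaluated top to bottom; the first match wins.
RULES = [
    (lambda src, port: src.startswith("10.0.") and port == 22, "allow"),  # internal SSH
    (lambda src, port: port in (80, 443), "allow"),                       # public web traffic
    (lambda src, port: True, "deny"),                                     # default-deny catch-all
]

def filter_packet(src_ip: str, port: int) -> str:
    for matches, action in RULES:
        if matches(src_ip, port):
            return action
```

The default-deny rule at the bottom reflects standard firewall practice: anything not explicitly permitted is refused.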
7. Hashing

Hashing is a one-way algorithm that transforms an input of any length, such as a plain-text password, into a fixed-length string of characters that represents it. Unlike encryption, hashing is not designed to be reversed. That way, if an intruder somehow gets through to the password file or table, all they see is text that is useless to them.
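A minimal sketch of salted password hashing using Python's standard library (the password here is made up; real systems should use a maintained password-hashing library and tuned parameters):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    # A random salt ensures identical passwords produce different stored hashes.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-derive the hash and compare in constant time; an attacker who steals
    # the (salt, digest) pair still cannot recover the original password.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("hunter2")
```

Only the salt and digest are ever written to the password table; the plain-text password is discarded immediately.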
8. Identity theft
Identity theft is sometimes referred to as identity fraud. It’s the No. 1 reason why hackers seek access to confidential information and customer data, especially from an organization. An identity thief hopes to impersonate an individual by presenting the individual’s confidential records or authentication information as their own.
For example, an identity thief could steal credit card numbers, addresses, and email addresses, then use them to fraudulently transact online, file for Social Security benefits, or submit an insurance claim.
9. Intrusion detection system (IDS)
It’s relatively uncommon for a cyberattack to be completely unprecedented or unknown in its form, pattern, and logic. From viruses to brute-force attacks, there are certain indicators that point to unusual activity. In addition, once your network is up and running, all network traffic and server activity will follow a relatively predictable pattern.
An IDS keeps tabs on network traffic, quickly detecting malicious, suspicious, or anomalous activity before too much damage is done, and sends an alert to the network administrator. Strictly speaking, blocking the malicious traffic is the job of an intrusion prevention system (IPS), though many modern products combine both functions.
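A crude flavour of anomaly-based detection can be sketched in a few lines. The traffic log and threshold below are invented for illustration; real IDS products (signature- or anomaly-based) use far richer models.

```python
from collections import Counter

# Hypothetical traffic log: (source IP, requested path) pairs.
traffic = [("10.0.0.5", "/login")] * 3 + [("203.0.113.9", "/login")] * 50

def flag_anomalies(events, threshold=20):
    # Flag any source whose request count exceeds a baseline threshold,
    # a crude signature of brute-force or scanning behaviour.
    counts = Counter(src for src, _ in events)
    return [src for src, n in counts.items() if n > threshold]

alerts = flag_anomalies(traffic)  # -> ["203.0.113.9"]
```

The normal user making three login attempts passes unnoticed; the source hammering the login page fifty times trips the alert.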
10. IP spoofing
IP address forgery, or spoofing, is an address-hijacking technique in which a third party assumes a trusted IP address in order to mimic a legitimate user’s identity, hijack an Internet browser, or otherwise gain access to a restricted network. Spoofing an IP address isn’t in itself illegal; some people do so to conceal their online activity and maintain anonymity (using tools such as Tor).
But IP spoofing is more often associated with illegal or malicious activity. So organizations should exercise caution and take appropriate precautions whenever they detect that a third party wants to connect to their network using a spoofed address.
11. Keyloggers

Keylogger is short for keystroke logger. It’s a program that maintains a record of the keystrokes on your keyboard. The keylogger saves the log in a file, then encrypts and transmits it. While keylogging can be used for good (some text-to-voice apps, for example, use a keylogging mechanism to capture and translate user activity), keyloggers are often a form of malware.
A keylogger in the hands of nefarious persons is a destructive tool and is perhaps the most powerful weapon of infiltration a hacker can have. Remember, the keylogger will capture all key information such as user names, passwords, PINs, pattern locks, and financial information. With this data, the hacker can easily access your systems without breaking a sweat.
12. Malware

Malware is one of the cybersecurity terms you will hear most often. It’s a catch-all word for all malicious programs, including viruses, Trojans, spyware, adware, ransomware, and keyloggers. It’s any program that takes over some or all of the computing functions of a target computer for ill intent. Some malware is little more than a nuisance, but in many cases malware is part of a wider hacking and data-extraction scheme.
13. Password sniffing
Password sniffing is the interception and reading of data packets that include one or more passwords as they are transmitted. Given the volume of network traffic relayed per second, password sniffing is most effectively done by an application referred to as a password sniffer. The sniffer captures and stores the password strings for malicious and illegal purposes.
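Sniffing works because unencrypted protocols carry credentials in readable form. A defensive sketch below shows how an auditor might scan captured payloads for passwords sent in the clear (the packets and field names are fabricated examples):

```python
import re

# Hypothetical payloads captured from unencrypted (HTTP, FTP, telnet) traffic.
packets = [
    b"GET /index.html HTTP/1.1",
    b"POST /login HTTP/1.1\r\n\r\nuser=alice&password=s3cret",
]

CRED_PATTERN = re.compile(rb"(?:password|passwd|pwd)=([^&\s]+)")

def find_exposed_credentials(payloads):
    # Any match means a password crossed the wire in the clear,
    # exactly what a sniffer would harvest. TLS prevents this.
    hits = []
    for payload in payloads:
        match = CRED_PATTERN.search(payload)
        if match:
            hits.append(match.group(1))
    return hits
```

If this audit finds anything, the fix is not a better regex but encrypting the channel so there is nothing readable to sniff.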
14. Pharming

Pharming is the malicious redirection of a user to a fraudulent site whose colors, design, and features closely mimic the original legitimate website. A user will unsuspectingly key their data into the fake website’s input forms, only to realize days, weeks, or months later that the site was harvesting their data to commit fraud.
15. Phishing

Phishing is a form of social engineering and the most common type of cyberattack. Every day, billions of phishing emails are sent out globally. Phishing emails purport to originate from a credible, recognizable sender such as eBay, Amazon, or a financial institution. The email tricks the recipient into sharing their username and password on what they believe is a legitimate website but is in reality a website maintained by cyberattackers.
Knowing these cybersecurity terms is a first step in preventing cyberattacks
While technical controls are crucial, employees are the weakest link in your security architecture. Nothing makes employees better prepared for a cyberattack than security training and awareness. For most organizations, the IT department represents only a fraction of the entire workforce.
Tech staff therefore can’t be everywhere to explain cybersecurity terms and help each employee make security-conscious decisions. That’s why making sure your non-technical staff is familiar with these cybersecurity terms is fundamental.
With Zuckerberg testifying to the US Congress over Facebook’s data privacy and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.
Suffice to say that neither of the above compare to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.
In 2013, Edward Snowden blew the whistle on the systematic US spy program he helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the details of PRISM, an NSA program that collects internet communications data from US technology companies like Microsoft, Yahoo, Google, Facebook, and Apple. The data collected included audio and video chat logs, photographs, emails, documents, and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, in cases where corporate control wasn’t achievable, the NSA enticed third-party countries to clandestinely tap communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of the web activity of anyone using the internet.
But this is just in the US, right? Surely policies like this wouldn’t be implemented in Europe.
GCHQ, the UK’s intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at the Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls, and spoofed websites in real time. Many of these techniques were made possible by the 2016 Investigatory Powers Act, which Snowden describes as the most “extreme surveillance in the history of western democracy”.
But despite all this, the age-old refrain “if you’ve got nothing to hide, you’ve got nothing to fear” often rings out in debates over privacy.
Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.
When levelled against the fear of terrorism and death, it’s easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted; privacy should and always will be a social norm, despite what Mark Zuckerberg said in 2010. Although I’m sure he may have a different answer now.
So you just read emails and look at cat memes online, why would you care about privacy?
In the same way we’re able to close our living-room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It’s a well-rehearsed idea that nowadays we’re more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we’re being watched, we alter our behaviour in line with what’s expected.
As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Our thinking aligns with the status quo, and we lose the internet’s boundless capacity to help us explore and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.
This draws obvious comparisons with Bentham’s Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: a central guard tower is surrounded by cells, and in the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell’s 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.
In reality, there’s actually a lot more at stake here.
With the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.
Kafka’s The Trial, in which a bureaucracy uses citizens’ information to make decisions about them but denies them the ability to participate in how their information is used, therefore seems a more apt comparison. The issue here is that corporations, and even more so states, have been allowed to comb our data and make decisions that affect us without our consent.
Maybe, as a member of a western democracy, you don’t think this matters. But what if you’re a member of a minority group in an oppressive regime? What if you’re arrested because a computer algorithm can’t separate humour from intent to harm?
On the other hand, maybe you trust the intentions of your government, but how much faith do you have in it to keep your data private? The recent hack of the SEC shows that even government systems aren’t safe from attackers. When a business database is breached, maybe your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you’ve lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control physical clouds, he who controls the modern cloud will rule the world.
Perhaps you think that even this doesn’t matter; if it allows the government to protect us from those who intend to cause harm, then it’s worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.
Intelligence is the strategic collection of pertinent facts; bulk data collection therefore cannot be intelligent. As Bill Binney puts it, “bulk data kills people” because technicians are so overwhelmed that they can’t isolate what’s useful. Data collection as it stands can only focus on retribution rather than reduction.
Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The nothing-to-hide, nothing-to-fear mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to have in that relationship.
To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.
Experts from SANS presented the five most dangerous new cyber attack techniques in their annual RSA Conference 2018 keynote session in San Francisco, and shared their views on how they work, how they can be stopped or at least slowed, and how businesses and consumers can prepare.
The five threats outlined are:
1. Repositories and cloud storage data leakage
2. Big Data analytics, de-anonymization, and correlation
3. Attackers monetizing compromised systems using crypto coin miners
4. Recognition of hardware flaws
5. More malware and attacks disrupting ICS and utilities instead of seeking profit
Repositories and cloud storage data leakage
Ed Skoudis, lead for the SANS Penetration Testing Curriculum, talked about the data leakage threats facing us from the increased use of repositories and cloud storage:
“Software today is built in a very different way than it was 10 or even 5 years ago, with vast online code repositories for collaboration and cloud data storage hosting mission-critical applications. However, attackers are increasingly targeting these kinds of repositories and cloud storage infrastructures, looking for passwords, crypto keys, access tokens, and terabytes of sensitive data.”
He continued: “Defenders need to focus on data inventories, appointing a data curator for their organization and educating system architects and developers about how to secure data assets in the cloud. Additionally, the big cloud companies have each launched an AI service to help classify and defend data in their infrastructures. And finally, a variety of free tools are available that can help prevent and detect leakage of secrets through code repositories.”
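The free secret-scanning tools Skoudis alludes to boil down to pattern-matching committed text against known credential formats. A toy sketch of that idea (the rule set is tiny and illustrative; real scanners ship hundreds of rules plus entropy checks, and the sample key below is a fabricated example):

```python
import re

# Illustrative patterns for common leaked secrets. The AWS access-key prefix
# format (AKIA + 16 uppercase alphanumerics) is real; everything else here
# is a simplified stand-in for a production rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_text(text):
    # Return the names of every secret type found in a blob of committed text.
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]
```

Run over every commit before it is pushed, even a scanner this naive catches the most common accidental leaks.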
Big Data analytics, de-anonymization, and correlation
Skoudis went on to talk about the threat of Big Data Analytics and how attackers are using data from several sources to de-anonymise users:
“In the past, we battled attackers who were trying to get access to our machines to steal data for criminal use. Now the battle is shifting from hacking machines to hacking data — gathering data from disparate sources and fusing it together to de-anonymise users, find business weaknesses and opportunities, or otherwise undermine an organisation’s mission. We still need to prevent attackers from gaining shell on targets to steal data. However, defenders also need to start analysing risks associated with how their seemingly innocuous data can be combined with data from other sources to introduce business risk, all while carefully considering the privacy implications of their data and its potential to tarnish a brand or invite regulatory scrutiny.”
Attackers monetize compromised systems using crypto coin miners
Johannes Ullrich is Dean of Research at the SANS Institute and Director of the SANS Internet Storm Center. He has been looking at the increasing use of crypto coin miners by cyber criminals:
“Last year, we talked about how ransomware was used to sell data back to its owner and crypto-currencies were the tool of choice to pay the ransom. More recently, we have found that attackers are no longer bothering with data. Due to the flood of stolen data offered for sale, the value of most commonly stolen data like credit card numbers or PII has dropped significantly. Attackers are instead installing crypto coin miners. These attacks are more stealthy and less likely to be discovered, and attackers can earn tens of thousands of dollars a month from crypto coin miners. Defenders therefore need to learn to detect these coin miners and to identify the vulnerabilities that have been exploited in order to install them.”
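One common detection approach Ullrich's advice points toward is scanning egress logs for connections to known mining pools. A minimal sketch (the log lines, hostnames, and indicator list are invented for illustration; real deployments consume curated threat-intelligence feeds):

```python
# Hypothetical indicators: the stratum protocol prefix is genuinely used by
# mining clients; the pool names here are illustrative substrings only.
MINER_INDICATORS = ("stratum+tcp://", "xmrpool", "minexmr")

# Hypothetical egress log: "<internal host> -> <destination>".
logs = [
    "10.0.0.12 -> api.github.com:443",
    "10.0.0.31 -> stratum+tcp://pool.example-miner.net:3333",
]

def suspected_miners(lines):
    # Flag internal hosts whose outbound connections match mining-pool patterns.
    return [line.split(" -> ")[0] for line in lines
            if any(indicator in line for indicator in MINER_INDICATORS)]
```

A hit here is a strong signal that the flagged host was compromised, and the follow-up question is the one Ullrich raises: which vulnerability let the miner in.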
Recognition of hardware flaws
Ullrich then went on to say that software developers often assume that hardware is flawless and that this is a dangerous assumption. He explains why and what needs to be done:
“Hardware is no less complex than software and mistakes have been made in developing hardware just as they are made by software developers. Patching hardware is a lot more difficult and often not possible without replacing entire systems or suffering significant performance penalties. Developers therefore need to learn to create software without relying on hardware to mitigate any security issues. Similar to the way in which software uses encryption on untrusted networks, software needs to authenticate and encrypt data within the system. Some emerging homomorphic encryption algorithms may allow developers to operate on encrypted data without having to decrypt it first.”
More malware and attacks disrupting ICS and utilities instead of seeking profit
Finally, James Lyne, Head of R&D at the SANS Institute, discussed the growing trend of malware and attacks that aren’t profit-centred, as we have largely seen in the past, but are instead focused on disrupting Industrial Control Systems (ICS) and utilities:
“Day to day the grand majority of malicious code has undeniably been focused on fraud and profit. Yet, with the relentless deployment of technology in our societies, the opportunity for political or even military influence only grows. And rare publicly visible attacks like Triton/TriSYS show the capability and intent of those who seek to compromise some of the highest risk components of industrial environments, i.e. the safety systems which have historically prevented critical security and safety meltdowns.”
He continued: “ICS systems are relatively immature and easy to exploit in comparison to the mainstream computing world. Many ICS systems lack the mitigations of modern operating systems and applications. The reliance on obscurity or isolation (both increasingly untrue) do not position them well to withstand a heightened focus on them, and we need to address this as an industry. More worrying is that attackers have demonstrated they have the inclination and resources to diversify their attacks, targeting the sensors that are used to provide data to the industrial controllers themselves. The next few years are likely to see some painful lessons being learned as this attack domain grows, since the mitigations are inconsistent and quite embryonic.”
Apple loves a show. Over the years, the company’s staged product launches, where executives in jeans show off flashy new iPhones, have become a global cultural institution. But when Apple CEO Tim Cook kicked off this week’s keynote, he didn’t start by talking about the size of the latest phone.
“We did not expect to be in this position, at odds with our own government,” he said, referring to the company’s high-profile clash with the FBI over accessing information on the San Bernardino shooting suspect’s iPhone. “But we believe strongly that we have a responsibility to help you protect your data and protect your privacy.”
Over the past few weeks, as the legal battle between the FBI and Apple unfolded, so did the publicity war. The FBI argued that Apple’s assistance in unlocking the iPhone could help provide justice for the victims of the shooting, by potentially uncovering information about others involved and the events leading up to it. It insisted, at least at first, that it was simply a case of this one phone. Apple maintained the FBI’s demands not only threatened customers’ privacy and personal security, but also violated the company’s right to free speech. In the press and on social media, people took sides.
In late February, a Pew Research Center poll found that the majority of Americans sided with the FBI. Just a month later, after an outpouring of support for Apple—and the convenient revelation that the FBI could possibly, in fact, open the San Bernardino iPhone without Cupertino’s help—the court case is on hold, and Apple’s reputation as an ardent defender of privacy and security is stronger than ever.
How did Apple pull this off, when faced with the federal government’s onslaught and widespread public anxiety about terrorism, stoked in no small part by the San Bernardino shooting? In this case, it went off script. A company famous for its secrecy, and famous among reporters for stonewalling, opened itself to the media and the public in an unprecedented way to ensure that its side of the story was heard. For a company whose image is largely defined by its products, Apple’s PR machine churned to communicate that its principles matter too.
From Antennagate to Foxconn, Apple has dealt with crises of varying magnitude before. But longtime Apple reporters say that the response in this latest case felt different. In mid-February, the FBI said it was unable to access encrypted data on the iPhone 5c of one of the San Bernardino shooters. A California magistrate ordered Apple to help, by creating a new software tool. But Apple believed complying would set a dangerous precedent, so it took its response public. Cook published an open letter on the company’s site laying out Apple’s privacy-centered argument justifying its decision not to cooperate.
It was just the beginning. Whenever the government or Apple filed new court documents, Apple held conference calls to provide background for journalists, including any reporters and editors who might touch the case, and let them lob questions at the company’s attorneys. In the process, the company expanded its PR outreach beyond the standard cadre of journalists who follow its every move. Apple invited some political and policy reporters in Washington, DC, to come to its office there to hear the company’s side. (Apple declined to comment.)
“They talk about products, but they never ever ever talk about anything else,” said Leander Kahney, a longtime Apple reporter and editor of the site Cult of Mac. “I think it’s unprecedented. I don’t think I’ve ever seen Apple do this.”
Cook too opened his literal doors, sitting down with ABC News reporter David Muir in his office to discuss the situation, an extreme rarity for the company. “I’m not sure I’ve ever done an interview in the office,” Cook says as the piece begins.
In this case, unsurprisingly, the setting was strategic. As The Wall Street Journal has noted, Cook’s office features photographs of Robert F. Kennedy and Martin Luther King, Jr. Behind Cook, you could clearly see the Ripple of Hope Award (a bust of RFK), which he was awarded last year for his commitment to social change. The intended message was clear: Cook cares as much about civil liberties as he does about selling iPhones.
“They were trying to stop the controversy from continuing before something happened, which is a sort of new PR approach,” says Mark Gurman, an Apple journalist and senior editor of 9to5Mac. In the past, he says, Apple might have addressed issues after they happened, if they addressed them at all.
That’s easier to do with a few bending iPhone 6 Plus units than it is with a precedent-setting court case. You can always quietly give an aggrieved customer a new phone. But for Apple, there’s no going back from what the FBI was asking. A preemptive strike was the only kind available.
For both the company and the government, this debate is a crucial one. The government has portrayed Apple’s stance as a hindrance to crucial national security investigations. That’s a tough rap to beat. But encryption is central to Apple’s business (personal privacy has become a key to Apple’s marketing) and philosophy. Undermining its own security wouldn’t just help the FBI get into this iPhone, Apple argues, but in any number of devices in the future.
“They don’t want their customers to say, ‘They’re abetting terrorists.’ These are pretty heavy charges,” says Howard Bragman, chairman and founder of Fifteen Minutes PR. “They need to be clear and articulate. When you don’t speak sometimes, people assume the worst.”
To fend off that assumption, Apple made its crusade about, well, you and your private information. Donald Steel, a public relations expert in crisis management, says that Apple is the master of setting an agenda. And, here, their strategy came down to making it all about putting the customer first. That’s easier to understand than the ins and outs of crypto-security—and something Apple as a wildly successful consumer company knows how to do.
In the process, Apple also scores a marketing win. It reinforces the message that it can protect customer data in a way that its competitors don’t. Google publicly sided with Apple, but the Android operating system doesn’t have encryption by default. Apple also depends not on ads, like Google, Facebook, and others, but on selling hardware—and one of the ways Apple tries to sell customers on its hardware is by saying that its strong encryption ensures your data is yours and yours alone.
If Apple were to agree to assist the government in unlocking an iPhone, Gurman says, that could also threaten its relationship with customers, especially enterprise customers, who have become an increasingly important source of revenue for the traditionally consumer-facing company.
All that said, Apple is eager to show its motives aren’t purely self-interested. Cook has long championed individual privacy on principle as Apple has touted encryption as an important part of the iPhone. Cook has also been an outspoken supporter for other social issues like protecting the environment and LGBTQ rights. To drive the point of corporate principle home, the story recently spread that some engineers threatened to quit if Apple lost in court.
“This is Cook’s thing,” says Philip Elmer-DeWitt, a longtime Apple reporter and founder of tech blog Apple 3.0. “He’s taken principled positions.”
As that Pew survey showed, not everyone was won over by Apple’s PR offensive, despite several major American newspapers running supportive editorials. And yet, in some ways, it’s surprising Apple garnered as much support as it did.
“It’s a genius marketing strategy to grab the American flag and wrap yourself in the mantle of free speech and privacy,” longtime crisis communications expert Sam Singer says. “Two things Americans value above all else.”
Andy Cunningham, a former Apple publicist for Steve Jobs, told The Chicago Tribune that she thinks Apple didn’t go far enough in explaining how assisting the FBI in this case could set a dangerous precedent where foreign countries could demand similar assistance down the road. “I think (Jobs) would’ve spent more time framing the issue for the (public) than I think they’ve done so far,” she said.
But others believe Apple masterfully applied the public relations apparatus it normally uses for, say, product launches, to this issue. “This is not about what color the new iPhone is going to be. I like new iPhones, but they don’t really matter,” Steel says. “But they know how to make a splash. They know how to dominate the headlines.”
In that sense, some say, it’s almost unfair to pit the FBI and Department of Justice against Apple. Brooke Hammerling, the founder of Brew Media Relations, argues that the Apple PR machine has always been “above and beyond,” one of the best. The FBI and DOJ, on the other hand, are government agencies. “The DOJ rarely can act with the swiftness of a private individual corporation,” Singer says. “Yes, Apple has done a great job, but they were fighting an opponent who had at least one arm tied behind its back.”
The Next Round
The battle over privacy and security, however, is far from over. Many believe that this is only the first round of Silicon Valley versus the government.
And Singer says it hasn’t been a total victory for Apple. He argues the FBI and DOJ were successful at raising questions about how secure Apple’s iPhones are as well, and that could prove to be a problem down the line.
So what should Apple do now? “If I were a technology company, I’d propose we have a public sit-down to discuss these issues,” Singer says, adding it could include constitutional lawyers and other experts from around the world as well as the company and federal officials. “Everyone might not agree on a single thing, but the debate is vitally important for public understanding, privacy, and for public safety.” Having that discussion during a detente period may also help the company avoid conflicts down the road.
Steel says that Apple’s ultimate objective is for whatever regulation does come to be favorable to its privacy and security goals. Public opinion will continue to matter, especially since Congress will likely have the last word. If there’s anything Apple knows, it’s that the show matters. So sometimes you have to put your showmanship to work.
I was driving 70 mph on the edge of downtown St. Louis when the exploit began to take hold. Though I hadn’t touched the dashboard, the vents in the Jeep Cherokee started blasting cold air at the maximum setting, chilling the sweat on my back through the in-seat climate control system. Next the radio switched to the local hip hop station and began blaring Skee-Lo at full volume. I spun the control knob left and hit the power button, to no avail. Then the windshield wipers turned on, and wiper fluid blurred the glass.
As I tried to cope with all this, a picture of the two hackers performing these stunts appeared on the car’s digital display: Charlie Miller and Chris Valasek, wearing their trademark track suits. A nice touch, I thought.
The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, over any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.
To better simulate the experience of driving a vehicle while it’s being hijacked by an invisible, virtual force, Miller and Valasek refused to tell me ahead of time what kinds of attacks they planned to launch from Miller’s laptop in his house 10 miles west. Instead, they merely assured me that they wouldn’t do anything life-threatening. Then they told me to drive the Jeep onto the highway. “Remember, Andy,” Miller had said through my iPhone’s speaker just before I pulled onto the Interstate 64 on-ramp, “no matter what happens, don’t panic.”
As the two hackers remotely toyed with the air-conditioning, radio, and windshield wipers, I mentally congratulated myself on my courage under pressure. That’s when they cut the transmission.
Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.
At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.
“You’re doomed!” Valasek shouted, but I couldn’t make out his heckling over the blast of the radio, now pumping Kanye West. The semi loomed in the mirror, bearing down on my immobilized Jeep.
I followed Miller’s advice: I didn’t panic. I did, however, drop any semblance of bravery, grab my iPhone with a clammy fist, and beg the hackers to make it stop.
This wasn’t the first time Miller and Valasek had put me behind the wheel of a compromised car. In the summer of 2013, I drove a Ford Escape and a Toyota Prius around a South Bend, Indiana, parking lot while they sat in the backseat with their laptops, cackling as they disabled my brakes, honked the horn, jerked the seat belt, and commandeered the steering wheel. “When you lose faith that a car will do what you tell it to do,” Miller observed at the time, “it really changes your whole view of how the thing works.” Back then, however, their hacks had a comforting limitation: The attacker’s PC had been wired into the vehicles’ onboard diagnostic port, a feature that normally gives repair technicians access to information about the car’s electronically controlled systems.
A mere two years later, that carjacking has gone wireless. Miller and Valasek plan to publish a portion of their exploit on the Internet, timed to a talk they’re giving at the Black Hat security conference in Las Vegas next month. It’s the latest in a series of revelations from the two hackers that have spooked the automotive industry and even helped to inspire legislation; WIRED has learned that senators Ed Markey and Richard Blumenthal plan to introduce an automotive security bill today to set new digital security standards for cars and trucks, first sparked when Markey took note of Miller and Valasek’s work in 2013.
As an auto-hacking antidote, the bill couldn’t be timelier. The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64. After narrowly averting death by semi-trailer, I rolled the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.
Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they’re working on perfecting their steering control—for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep’s GPS coordinates, measure its speed, and even drop pins on a map to trace its route.
All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element, which Miller and Valasek won’t identify until their Black Hat talk, Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country. “From an attacker’s perspective, it’s a super nice vulnerability,” Miller says.
From that entry point, Miller and Valasek’s attack pivots to an adjacent chip in the car’s head unit—the hardware for its entertainment system—silently rewriting the chip’s firmware to plant their code. That rewritten firmware is capable of sending commands through the car’s internal computer network, known as a CAN bus, to its physical components like the engine and wheels. Miller and Valasek say the attack on the entertainment system seems to work on any Chrysler vehicle with Uconnect from late 2013, all of 2014, and early 2015. They’ve only tested their full set of physical hacks, including ones targeting transmission and braking systems, on a Jeep Cherokee, though they believe that most of their attacks could be tweaked to work on any Chrysler vehicle with the vulnerable Uconnect head unit. They have yet to try remotely hacking into other makes and models of cars.
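A classic CAN frame is tiny and rigidly formatted, which is part of why a compromised head unit can speak to safety-critical components so easily once it sits on the bus. As a rough illustration of what “sending commands” means at the byte level, here is a minimal sketch that packs a frame in the layout Linux’s SocketCAN interface uses; the arbitration ID and payload below are invented for illustration, and the actual commands Miller and Valasek used are not public.

```python
import struct

def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in Linux SocketCAN wire format:
    4-byte little-endian arbitration ID, 1-byte data length code,
    3 padding bytes, then up to 8 data bytes (zero-padded).
    """
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id, len(data), data)

# Hypothetical frame: made-up arbitration ID 0x3E9 with a 2-byte payload.
frame = build_can_frame(0x3E9, b"\x01\xff")
```

On a real vehicle this 16-byte structure would be written to a raw CAN socket; every electronic control unit on the bus trusts frames by their arbitration ID alone, with no notion of sender authentication, which is the architectural weakness the attack exploits.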
After the researchers reveal the details of their work in Vegas, only two things will prevent their tool from enabling a wave of attacks on Jeeps around the world. First, they plan to leave out the part of the attack that rewrites the chip’s firmware; hackers following in their footsteps will have to reverse-engineer that element, a process that took Miller and Valasek months. But the code they publish will enable many of the dashboard hijinks they demonstrated on me as well as GPS tracking.
Second, Miller and Valasek have been sharing their research with Chrysler for nearly nine months, enabling the company to quietly release a patch ahead of the Black Hat conference. On July 16, owners of vehicles with the Uconnect feature were notified of the patch in a post on Chrysler’s website that didn’t offer any details or acknowledge Miller and Valasek’s research. “[Fiat Chrysler Automobiles] has a program in place to continuously test vehicles systems to identify vulnerabilities and develop solutions,” reads a statement a Chrysler spokesperson sent to WIRED. “FCA is committed to providing customers with the latest software updates to secure vehicles against any potential vulnerability.”
Unfortunately, Chrysler’s patch must be manually implemented via a USB stick or by a dealership mechanic. That means many—if not most—of the vulnerable Jeeps will likely stay vulnerable.
Chrysler stated in a response to questions from WIRED that it “appreciates” Miller and Valasek’s work. But the company also seemed leery of their decision to publish part of their exploit. “Under no circumstances does FCA condone or believe it’s appropriate to disclose ‘how-to information’ that would potentially encourage, or help enable hackers to gain unauthorized and unlawful access to vehicle systems,” the company’s statement reads. “We appreciate the contributions of cybersecurity advocates to augment the industry’s understanding of potential vulnerabilities. However, we caution advocates that in the pursuit of improved public safety they not, in fact, compromise public safety.”
The two researchers say that even if their code makes it easier for malicious hackers to attack unpatched Jeeps, the release is nonetheless warranted because it allows their work to be proven through peer review. It also sends a message: Automakers need to be held accountable for their vehicles’ digital security. “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers,” Miller says. “This might be the kind of software bug most likely to kill someone.”
In fact, Miller and Valasek aren’t the first to hack a car over the Internet. In 2011 a team of researchers from the University of Washington and the University of California at San Diego showed that they could wirelessly disable the locks and brakes on a sedan. But those academics took a more discreet approach, keeping the identity of the hacked car secret and sharing the details of the exploit only with carmakers.
Miller and Valasek represent the second act in a good-cop/bad-cop routine. Carmakers who failed to heed polite warnings in 2011 now face the possibility of a public dump of their vehicles’ security flaws. The result could be product recalls or even civil suits, says UCSD computer science professor Stefan Savage, who worked on the 2011 study. Earlier this month, in fact, Range Rover issued a recall to fix a software security flaw that could be used to unlock vehicles’ doors. “Imagine going up against a class-action lawyer after Anonymous decides it would be fun to brick all the Jeep Cherokees in California,” Savage says.
For the auto industry and its watchdogs, in other words, Miller and Valasek’s release may be the last warning before they see a full-blown zero-day attack. “The regulators and the industry can no longer count on the idea that exploit code won’t be in the wild,” Savage says. “They’ve been thinking it wasn’t an imminent danger you needed to deal with. That implicit assumption is now dead.”
471,000 Hackable Automobiles
Sitting on a leather couch in Miller’s living room as a summer storm thunders outside, the two researchers scan the Internet for victims.
Uconnect computers are linked to the Internet by Sprint’s cellular network, and only other Sprint devices can talk to them. So Miller has a cheap Kyocera Android phone connected to his battered MacBook. He’s using the burner phone as a Wi-Fi hot spot, scouring for targets using its thin 3G bandwidth.
A set of GPS coordinates, along with a vehicle identification number, make, model, and IP address, appears on the laptop screen. It’s a Dodge Ram. Miller plugs its GPS coordinates into Google Maps to reveal that it’s cruising down a highway in Texarkana, Texas. He keeps scanning, and the next vehicle to appear on his screen is a Jeep Cherokee driving around a highway cloverleaf between San Diego and Anaheim, California. Then he locates a Dodge Durango, moving along a rural road somewhere in the Upper Peninsula of Michigan. When I ask him to keep scanning, he hesitates. Seeing the actual, mapped locations of these unwitting strangers’ vehicles—and knowing that each one is vulnerable to their remote attack—unsettles him.
When Miller and Valasek first found the Uconnect flaw, they thought it might only enable attacks over a direct Wi-Fi link, confining its range to a few dozen yards. When they discovered the Uconnect’s cellular vulnerability earlier this summer, they still thought it might work only on vehicles on the same cell tower as their scanning phone, restricting the range of the attack to a few dozen miles. But they quickly found even that wasn’t the limit. “When I saw we could do it anywhere, over the Internet, I freaked out,” Valasek says. “I was frightened. It was like, holy fuck, that’s a vehicle on a highway in the middle of the country. Car hacking got real, right then.”
That moment was the culmination of almost three years of work. In the fall of 2012, Miller, a security researcher for Twitter and a former NSA hacker, and Valasek, the director of vehicle security research at the consultancy IOActive, were inspired by the UCSD and University of Washington study to apply for a car-hacking research grant from Darpa. With the resulting $80,000, they bought a Toyota Prius and a Ford Escape. They spent the next year tearing the vehicles apart digitally and physically, mapping out their electronic control units, or ECUs—the computers that run practically every component of a modern car—and learning to speak the CAN network protocol that controls them.
When they demonstrated a wired-in attack on those vehicles at the DefCon hacker conference in 2013, though, Toyota, Ford, and others in the automotive industry downplayed the significance of their work, pointing out that the hack had required physical access to the vehicles. Toyota, in particular, argued that its systems were “robust and secure” against wireless attacks. “We didn’t have the impact with the manufacturers that we wanted,” Miller says. To get their attention, they’d need to find a way to hack a vehicle remotely.
So the next year, they signed up for mechanic’s accounts on the websites of every major automaker and downloaded dozens of vehicles’ technical manuals and wiring diagrams. Using those specs, they rated 24 cars, SUVs, and trucks on three factors they thought might determine their vulnerability to hackers: how many and what types of radios connected the vehicle’s systems to the Internet; whether the Internet-connected computers were properly isolated from critical driving systems; and whether those critical systems had “cyberphysical” components—whether digital commands could trigger physical actions like turning the wheel or activating the brakes.
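Their triage amounted to a simple scoring exercise over those three factors. Miller and Valasek never published their actual rubric or weights, so the sketch below is a toy version with made-up weights that only shows the shape of the exercise: more radios mean more remote attack surface, a bridge between infotainment and driving systems is worse, and cyberphysical control is worst of all.

```python
# Toy vulnerability-triage sketch; the weights (2, 3, 3) are invented
# for illustration and are not Miller and Valasek's actual rubric.
def hackability_score(vehicle: dict) -> int:
    score = 0
    score += 2 * len(vehicle["radios"])            # each radio adds remote attack surface
    if not vehicle["isolated"]:                    # infotainment bridged to driving systems
        score += 3
    if vehicle["cyberphysical"]:                   # digital commands can move the car
        score += 3
    return score

# Hypothetical profile loosely matching how the Jeep Cherokee was described.
jeep = {
    "radios": ["cellular", "wifi", "bluetooth"],
    "isolated": False,
    "cyberphysical": True,
}
```

A vehicle with several radios, no isolation, and cyberphysical control maxes out such a score, which matches the paper-survey logic that put the Cherokee at the top of their list.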
Based on that study, they rated the Jeep Cherokee the most hackable model. Cadillac’s Escalade and Infiniti’s Q50 didn’t fare much better; Miller and Valasek ranked them second- and third-most vulnerable. When WIRED told Infiniti that at least one of Miller and Valasek’s warnings had been borne out, the company responded in a statement that its engineers “look forward to the findings of this [new] study” and will “continue to integrate security features into our vehicles to protect against cyberattacks.” Cadillac emphasized in a written statement that the company has released a new Escalade since Miller and Valasek’s last study, but that cybersecurity is “an emerging area in which we are devoting more resources and tools,” including the recent hire of a chief product cybersecurity officer.
After Miller and Valasek decided to focus on the Jeep Cherokee in 2014, it took them another year of hunting for hackable bugs and reverse-engineering to prove their educated guess. It wasn’t until June that Valasek issued a command from his laptop in Pittsburgh and turned on the windshield wipers of the Jeep in Miller’s St. Louis driveway.
Since then, Miller has scanned Sprint’s network multiple times for vulnerable vehicles and recorded their vehicle identification numbers. Plugging that data into an algorithm sometimes used for tagging and tracking wild animals to estimate their population size, he estimated that there are as many as 471,000 vehicles with vulnerable Uconnect systems on the road.
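The article doesn’t name the algorithm, but the standard wildlife-tagging technique it describes is a capture-recapture estimate, whose simplest form is the Lincoln-Petersen index: if a second, independent scan re-sees a fraction of the VINs recorded in a first scan, the total population is roughly the product of the two sample sizes divided by the overlap. A minimal sketch, assuming that method and using made-up VINs:

```python
def lincoln_petersen(first_scan, second_scan) -> float:
    """Estimate the total population from two independent samples.

    Mark-recapture logic: with n1 VINs "tagged" in the first scan and
    m of them re-seen among the n2 VINs of a second scan, the total
    population is estimated as n1 * n2 / m.
    """
    tagged = set(first_scan)
    n1, n2 = len(tagged), len(second_scan)
    recaptured = sum(1 for vin in second_scan if vin in tagged)
    if recaptured == 0:
        raise ValueError("no overlap between scans; cannot estimate")
    return (n1 * n2) / recaptured

# Hypothetical scans: 4 VINs each, 2 re-seen -> estimate of 8 vehicles total.
estimate = lincoln_petersen(["VIN-A", "VIN-B", "VIN-C", "VIN-D"],
                            ["VIN-A", "VIN-B", "VIN-E", "VIN-F"])
```

Scaled up to repeated scans of Sprint’s network, the same arithmetic is enough to turn a slow trickle of randomly observed VINs into a population estimate in the hundreds of thousands.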
Pinpointing a vehicle belonging to a specific person isn’t easy. Miller and Valasek’s scans reveal random VINs, IP addresses, and GPS coordinates. Finding a particular victim’s vehicle out of thousands is unlikely through the slow and random probing of one Sprint-enabled phone. But enough phones scanning together, Miller says, could allow an individual to be found and targeted. Worse, he suggests, a skilled hacker could take over a group of Uconnect head units and use them to perform more scans—as with any collection of hijacked computers—worming from one dashboard to the next over Sprint’s network. The result would be a wirelessly controlled automotive botnet encompassing hundreds of thousands of vehicles.
“For all the critics in 2013 who said our work didn’t count because we were plugged into the dashboard,” Valasek says, “well, now what?”
Congress Takes on Car Hacking
Now the auto industry needs to do the unglamorous, ongoing work of actually protecting cars from hackers. And Washington may be about to force the issue.
Later today, senators Markey and Blumenthal intend to reveal new legislation designed to tighten cars’ protections against hackers. The bill (which a Markey spokesperson insists wasn’t timed to this story) will call on the National Highway Traffic Safety Administration and the Federal Trade Commission to set new security standards and create a privacy and security rating system for consumers. “Controlled demonstrations show how frightening it would be to have a hacker take over controls of a car,” Markey wrote in a statement to WIRED. “Drivers shouldn’t have to choose between being connected and being protected…We need clear rules of the road that protect cars from hackers and American families from data trackers.”
Markey has keenly followed Miller and Valasek’s research for years. Citing their 2013 Darpa-funded research and hacking demo, he sent a letter to 20 automakers, asking them to answer a series of questions about their security practices. The answers, released in February, show what Markey describes as “a clear lack of appropriate security measures to protect drivers against hackers who may be able to take control of a vehicle.” Of the 16 automakers who responded, all confirmed that virtually every vehicle they sell has some sort of wireless connection, including Bluetooth, Wi-Fi, cellular service, and radios. (Markey didn’t reveal the automakers’ individual responses.) Only seven of the companies said they hired independent security firms to test their vehicles’ digital security. Only two said their vehicles had monitoring systems that checked their CAN networks for malicious digital commands.
UCSD’s Savage says the lesson of Miller and Valasek’s research isn’t that Jeeps or any other vehicle are particularly vulnerable, but that practically any modern vehicle could be vulnerable. “I don’t think there are qualitative differences in security between vehicles today,” he says. “The Europeans are a little bit ahead. The Japanese are a little bit behind. But broadly writ, this is something everyone’s still getting their hands around.”
Aside from wireless hacks used by thieves to open car doors, only one malicious car-hacking attack has been documented: In 2010 a disgruntled employee in Austin, Texas, used a remote shutdown system meant for enforcing timely car payments to brick more than 100 vehicles. But the opportunities for real-world car hacking have only grown, as automakers add wireless connections to vehicles’ internal networks. Uconnect is just one of a dozen telematics systems, including GM Onstar, Lexus Enform, Toyota Safety Connect, Hyundai Bluelink, and Infiniti Connection.
In fact, automakers are thinking about their digital security more than ever before, says Josh Corman, the cofounder of I Am the Cavalry, a security industry organization devoted to protecting future Internet-of-things targets like automobiles and medical devices. Thanks to Markey’s letter, and another set of questions sent to automakers by the House Energy and Commerce Committee in May, Corman says, Detroit has known for months that car security regulations are coming.
But Corman cautions that the same automakers have been more focused on competing with each other to install new Internet-connected cellular services for entertainment, navigation, and safety. (Payments for those services also provide a nice monthly revenue stream.) The result is that the companies have an incentive to add Internet-enabled features—but not to secure them from digital attacks. “They’re getting worse faster than they’re getting better,” he says. “If it takes a year to introduce a new hackable feature, then it takes them four to five years to protect it.”
Corman says carmakers need to befriend hackers who expose flaws, rather than fear or antagonize them—just as companies like Microsoft have evolved from threatening hackers with lawsuits to inviting them to security conferences and paying them “bug bounties” for disclosing security vulnerabilities. For tech companies, Corman says, “that enlightenment took 15 to 20 years.” The auto industry can’t afford to take that long. “Given that my car can hurt me and my family,” he says, “I want to see that enlightenment happen in three to five years, especially since the consequences for failure are flesh and blood.”
As I drove the Jeep back toward Miller’s house from downtown St. Louis, however, the notion of car hacking hardly seemed like a threat that will wait three to five years to emerge. In fact, it seemed more like a matter of seconds; I felt the vehicle’s vulnerability, the nagging possibility that Miller and Valasek could cut the puppet’s strings again at any time.
The hackers holding the scissors agree. “We shut down your engine—a big rig was honking up on you because of something we did on our couch,” Miller says, as if I needed the reminder. “This is what everyone who thinks about car security has worried about for years. This is a reality.”