Category archive: Privacy

Facebook pays teens to install VPN that spies on them


Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.

Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Seven hours after this story was published, Facebook told TechCrunch it would shut down the iOS version of its Research app in the wake of our report. But on Wednesday morning, an Apple spokesperson confirmed that Facebook violated its policies, and it had blocked Facebook’s Research app on Tuesday before the social network seemingly pulled it voluntarily (without mentioning it was forced to do so). You can read our full report on the development here.

An Apple spokesperson provided this statement. “We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

Facebook’s Research program will continue to run on Android.

Facebook’s Research app requires users to ‘Trust’ it with extensive access to their data

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

The strategy shows how far Facebook is willing to go and how much it’s willing to pay to protect its dominance — even at the risk of breaking the rules of Apple’s iOS platform on which it depends. Apple may have asked Facebook to discontinue distributing its Research app.

A more stringent punishment would be to revoke Facebook’s permission to offer employee-only apps. The situation could further chill relations between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices. Facebook disobeying iOS policies to slurp up more information could become a new talking point.

Facebook’s Research program is referred to as Project Atlas on sign-up sites that don’t mention Facebook’s involvement

“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”

Facebook’s surveillance app

Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2014. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using. Internal documents acquired by Charlie Warzel and Ryan Mac of BuzzFeed News reveal that Facebook was able to leverage Onavo to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger. Onavo allowed Facebook to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup in 2014. WhatsApp has since tripled its user base, demonstrating the power of Onavo’s foresight.

Over the years since, Onavo clued Facebook in to what apps to copy, features to build and flops to avoid. By 2018, Facebook was promoting the Onavo app in a Protect bookmark of the main Facebook app in hopes of scoring more users to snoop on. Facebook also launched the Onavo Bolt app that let you lock apps behind a passcode or fingerprint while it surveils you, but Facebook shut down the app the day it was discovered following privacy criticism. Onavo’s main app remains available on Google Play and has been installed more than 10 million times.

The backlash heated up after security expert Strafach detailed in March how Onavo Protect was reporting to Facebook when a user’s screen was on or off, and its Wi-Fi and cellular data usage in bytes even when the VPN was turned off. In June, Apple updated its developer policies to ban collecting data about usage of other apps or data that’s not necessary for an app to function. Apple proceeded to inform Facebook in August that Onavo Protect violated those data collection policies and that the social network needed to remove it from the App Store, which it did, Deepa Seetharaman of the WSJ reported.

But that didn’t stop Facebook’s data collection.

Project Atlas

TechCrunch recently received a tip that despite Onavo Protect being banished by Apple, Facebook was paying users to sideload a similar VPN app under the Facebook Research moniker from outside of the App Store. We investigated, and learned Facebook was working with three app beta testing services to distribute the Facebook Research app: BetaBound, uTest and Applause. Facebook began distributing the Research VPN app in 2016. It has been referred to as Project Atlas since at least mid-2018, around when backlash to Onavo Protect magnified and Apple instituted its new rules that prohibited Onavo. Previously, a similar program was called Project Kodiak. Facebook didn’t want to stop collecting data on people’s phone usage, and so the Research program continued in disregard of Apple’s ban on Onavo Protect.

Facebook’s Research App on iOS

Ads (shown below) for the program run by uTest on Instagram and Snapchat sought teens 13-17 years old for a “paid social media research study.” The sign-up page for the Facebook Research program administered by Applause doesn’t mention Facebook, but seeks users “Age: 13-35 (parental consent required for ages 13-17).” If minors try to sign up, they’re asked to get their parents’ permission with a form that reveals Facebook’s involvement and says “There are no known risks associated with the project, however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of apps. You will be compensated by Applause for your child’s participation.” For kids short on cash, the payments could coerce them to sell their privacy to Facebook.

The Applause site explains what data could be collected by the Facebook Research app (emphasis mine):

“By installing the software, you’re giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you’ve installed . . . This means you’re letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions.”

Meanwhile, the BetaBound sign-up page with a URL ending in “Atlas” explains that “For $20 per month (via e-gift cards), you will install an app on your phone and let it run in the background.” It also offers $20 per friend you refer. That site also doesn’t initially mention Facebook, but the instruction manual for installing Facebook Research reveals the company’s involvement.

Facebook’s intermediary uTest ran ads on Snapchat and Instagram, luring teens to the Research program with the promise of money

 

Facebook seems to have purposefully avoided TestFlight, Apple’s official beta testing system, which requires apps to be reviewed by Apple and is limited to 10,000 participants. Instead, the instruction manual reveals that users download the app from r.facebook-program.com and are told to install an Enterprise Developer Certificate and VPN and “Trust” Facebook with root access to the data their phone transmits. Apple requires that developers agree to only use this certificate system for distributing internal corporate apps to their own employees. Randomly recruiting testers and paying them a monthly fee appears to violate the spirit of that rule.

Security expert Will Strafach found Facebook’s Research app contains lots of code from Onavo Protect, the Facebook-owned app Apple banned last year

Once installed, users just had to keep the VPN running and sending data to Facebook to get paid. The Applause-administered program requested that users screenshot their Amazon orders page. This data could potentially help Facebook tie browsing habits and usage of other apps with purchase preferences and behavior. That information could be harnessed to pinpoint ad targeting and understand which types of users buy what.

TechCrunch commissioned Strafach to analyze the Facebook Research app and find out where it was sending data. He confirmed that data is routed to “vpn-sjc1.v.facebook-program.com”, which is associated with Onavo’s IP address, and that the facebook-program.com domain is registered to Facebook, according to MarkMonitor. The app can update itself without interacting with the App Store, and is linked to the email address PeopleJourney@fb.com. He also discovered that the Enterprise Certificate first acquired in 2016 indicates Facebook renewed it on June 27th, 2018 — weeks after Apple announced its new rules that prohibited the similar Onavo Protect app.
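The core of this kind of analysis is connecting observed traffic endpoints to shared infrastructure. As a rough illustration only (the hostnames come from the article, but the grouping logic is a simplified stand-in for real tooling, which would consult the Public Suffix List, WHOIS records and more):

```python
# Toy sketch: group observed traffic endpoints by their registrable domain
# to see which hosts share infrastructure. Hostnames are from the article;
# the naive parsing below is illustrative, not Strafach's actual method.

from collections import defaultdict

def registrable_domain(hostname: str) -> str:
    """Naive 'example.com' extraction: keep only the last two labels."""
    return ".".join(hostname.split(".")[-2:])

observed_endpoints = [
    "vpn-sjc1.v.facebook-program.com",  # where the Research app sends data
    "r.facebook-program.com",           # where users download the app
    "graph.onavo.com",                  # endpoint in leftover Onavo code
]

by_domain = defaultdict(list)
for host in observed_endpoints:
    by_domain[registrable_domain(host)].append(host)

for domain, hosts in sorted(by_domain.items()):
    print(f"{domain}: {hosts}")
# Both facebook-program.com hosts group together; per the article, WHOIS
# (via MarkMonitor) ties that domain back to Facebook.
```

Grouping by registrable domain rather than full hostname is what lets the download site and the data-collection endpoint be recognized as the same operator.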

“It is tricky to know what data Facebook is actually saving (without access to their servers). The only information that is knowable here is what access Facebook is capable of based on the code in the app. And it paints a very worrisome picture,” Strafach explains. “They might respond and claim to only actually retain/save very specific limited data, and that could be true, it really boils down to how much you trust Facebook’s word on it. The most charitable narrative of this situation would be that Facebook did not think too hard about the level of access they were granting to themselves . . . which is a startling level of carelessness in itself if that is the case.”

[Update: TechCrunch also found that Google’s Screenwise Meter surveillance app also breaks the Enterprise Certificate policy, though it does a better job of revealing the company’s involvement and how it works than Facebook does.]

“Flagrant defiance of Apple’s rules”

In response to TechCrunch’s inquiry, a Facebook spokesperson confirmed it’s running the program to learn how people use their phones and other services. The spokesperson told us “Like many companies, we invite people to participate in research that helps us identify things we can be doing better. Since this research is aimed at helping Facebook understand how people use their mobile devices, we’ve provided extensive information about the type of data we collect and how they can participate. We don’t share this information with others and people can stop participating at any time.”

Facebook’s Research app requires Root Certificate access, which lets Facebook gather almost any piece of data transmitted by your phone

Facebook’s spokesperson claimed that the Facebook Research app was in line with Apple’s Enterprise Certificate program, but didn’t explain how in the face of evidence to the contrary. They said Facebook first launched its Research app program in 2016. They tried to liken the program to a focus group and said Nielsen and comScore run similar programs, yet neither of those asks people to install a VPN or provide root access to the network. The spokesperson confirmed the Facebook Research program does recruit teens but also other age groups from around the world. They claimed that Onavo and Facebook Research are separate programs, but admitted the same team supports both as an explanation for why their code was so similar.

Facebook’s Research program requested users screenshot their Amazon order history to provide it with purchase data

However, Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises. Given Facebook’s customers are using the Enterprise Certificate-powered app without supervision, it appears Facebook is in violation.

Seven hours after this report was first published, Facebook updated its position and told TechCrunch that it would shut down the iOS Research app. Facebook noted that the Research app was started in 2016 and was therefore not a replacement for Onavo Protect. However, they do share similar code and could be seen as twins running in parallel. A Facebook spokesperson also provided this additional statement:

“Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”

Facebook did not publicly promote the Research VPN itself and used intermediaries that often didn’t disclose Facebook’s involvement until users had begun the signup process. While users were given clear instructions and warnings, the program never stressed the full extent of the data Facebook can collect through the VPN. A small fraction of the paid users may have been teens, but we stand by the newsworthiness of Facebook’s choice not to exclude minors from this data collection initiative.

Facebook disobeying Apple so directly and then pulling the app could hurt their relationship. “The code in this iOS app strongly indicates that it is simply a poorly re-branded build of the banned Onavo app, now using an Enterprise Certificate owned by Facebook in direct violation of Apple’s rules, allowing Facebook to distribute this app without Apple review to as many users as they want,” Strafach tells us. ONV prefixes and mentions of graph.onavo.com, “onavoApp://” and “onavoProtect://” custom URL schemes litter the app. “This is an egregious violation on many fronts, and I hope that Apple will act expeditiously in revoking the signing certificate to render the app inoperable.”

Facebook is particularly interested in what teens do on their phones as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s acquisition Instagram. Insights into the popularity of Chinese video music app TikTok and of meme sharing among teens led Facebook to launch a clone called Lasso and begin developing a meme-browsing feature called LOL, TechCrunch first reported. But Facebook’s desire for data about teens riles critics at a time when the company has been battered in the press. Analysts on tomorrow’s Facebook earnings call should inquire about what other ways the company has to collect competitive intelligence now that it’s ceased to run the Research program on iOS.

Last year when Tim Cook was asked what he’d do in Mark Zuckerberg’s position in the wake of the Cambridge Analytica scandal, he said “I wouldn’t be in this situation . . . The truth is we could make a ton of money if we monetized our customer, if our customer was our product. We’ve elected not to do that.” Zuckerberg told Ezra Klein that he felt Cook’s comment was “extremely glib.”

Now it’s clear that even after Apple’s warnings and the removal of Onavo Protect, Facebook was still aggressively collecting data on its competitors via Apple’s iOS platform. “I have never seen such open and flagrant defiance of Apple’s rules by an App Store developer,” Strafach concluded. Now that Facebook has ceased the program on iOS and its Android future is uncertain, it may either have to invent new ways to surveil our behavior amidst a climate of privacy scrutiny, or be left in the dark.

Additional reporting by Zack Whittaker. Updated with comment from Facebook, and on Wednesday with a statement from Apple. 

Source: https://techcrunch.com/2019/01/29/facebook-project-atlas/


How hackers are stealing keyless cars

Wirelessly unlocking your car is convenient, but it comes at a price. The increasing number of keyless cars on the road has led to a new kind of crime — key fob hacks!  With the aid of new cheap electronic accessories and techniques, a key fob’s signal is now relatively easy for criminals to intercept or block. Imagine a thief opening your car and driving away with it without setting off any alarms!

According to the FBI, car theft numbers had been on a downward spiral since their peak in 1991. However, numbers have been steadily inching their way up again since 2015. In fact, there was a 3.8 percent increase in car theft cases in 2015, a 7.4 percent increase in 2016 and another 4.1 percent increase in the first half of 2017.

In order to fight this upward trend and prevent your car from becoming a car theft statistic itself, awareness is definitely the key.

So arm yourself against this new wave of car crimes. Here are the top keyless car hacks everyone needs to know about.

1. Relay hack

Always-on key fobs present a serious weakness in your car’s security. As long as your keys are in range, anyone can open the car and the system will think it’s you. That’s why newer car models won’t unlock until the key fob is within a foot.

However, criminals can get relatively cheap relay boxes that capture key fob signals up to 300 feet away, and then transmit them to your car.

Here’s how this works. One thief stands near your car with a relay box while an accomplice scans your house with another one. When your key fob signal is picked up, it is transmitted to the box that’s closer to your car, prompting it to open.

In other words, your keys could be in your house, and criminals could walk up to your car and open it. This isn’t just a theory either; it’s actually happening.
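The weakness the relay exploits is that the car infers proximity from signal presence alone. A minimal toy model makes this concrete; the distances and the one-foot threshold are illustrative stand-ins, not real RF parameters:

```python
# Toy model of a key-fob relay attack. The car only checks whether it can
# hear the fob's signal, not where the fob really is; a relay box captures
# the signal far away and re-broadcasts it next to the car.

FOB_RANGE_FT = 1.0  # newer models: fob must appear to be within ~1 foot

def car_unlocks(fob_distance_ft: float, relayed: bool = False) -> bool:
    """Return True if the car would unlock.

    With a relay in play, the re-broadcast signal makes the fob look
    like it is right next to the car, so the range check passes.
    """
    effective_distance = 0.0 if relayed else fob_distance_ft
    return effective_distance <= FOB_RANGE_FT

# Fob hanging inside the house, 40 ft from the driveway:
print(car_unlocks(40.0))                # False: genuinely out of range
print(car_unlocks(40.0, relayed=True))  # True: relay defeats the range check
```

The fix implied by this model is for the car to measure distance, not just presence — which is why some manufacturers have moved toward motion-sensing fobs and time-of-flight ranging.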

According to the German Automotive Club, here are the top cars that are vulnerable to key fob relay attacks:

Audi: A3, A4, A6

BMW: 730d

Citroen: DS4 CrossBack

Ford: Galaxy, Eco-Sport

Honda: HR-V

Hyundai: Santa Fe CRDi

Kia: Optima

Lexus: RX 450h

Mazda: CX-5

Mini: Clubman

Mitsubishi: Outlander

Nissan: Qashqai, Leaf

Vauxhall: Ampera

Range Rover: Evoque

Renault: Trafic

Ssangyong: Tivoli XDi

Subaru: Levorg

Toyota: Rav4

Volkswagen: Golf GTD, Touran 5T

2. Keyless jamming

In this scenario, the crooks will block your signal so when you issue a lock command from your key fob, it won’t actually reach your car and your doors will remain unlocked. The crooks can then have free access to your vehicle.

Safety tip: To prevent this from happening to you, always manually check your car doors before stepping away. You can also install a steering wheel lock to prevent car thieves from stealing your car even if they do get inside.
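The jamming scenario hinges on the fob being fire-and-forget: it transmits the lock command without confirming delivery. A small sketch (field and function names are my own illustration) shows why the manual check matters:

```python
# Toy model of keyless jamming: the fob sends a lock command, but a jammer
# can stop it from ever reaching the car. The fob gets no acknowledgement,
# so the driver has no idea the command was lost.

def press_lock_button(car_state: dict, jammed: bool) -> dict:
    """Only an unjammed lock command actually reaches the car."""
    if not jammed:
        car_state["locked"] = True
    return car_state

car = {"locked": False}

press_lock_button(car, jammed=True)
print(car["locked"])   # False: command never arrived; doors stay open

press_lock_button(car, jammed=False)
print(car["locked"])   # True: without jamming the command gets through
```

Because the protocol has no delivery confirmation, the only reliable feedback channel is the physical one: trying the door handle before walking away.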

3. Tire pressure sensor hijack

Here’s a novel technique, but it is happening — crooks are hijacking your tire sensors to send false tire pressure readings. Why? So they can lure you into stopping your car, creating an opportunity for them to attack you. Sounds crazy, but this scheme is out there.

Safety tip: If you have to check your tires, always pull over at a well-lit, busy public area, preferably at a gas station or a service garage so you can ask for assistance.

4. Telematics exploits

One of the current buzzwords for connected cars is something called telematics. What is telematics? Simply put, it’s a connected system that can monitor your vehicle’s behavior remotely. This data may include your car’s location, speed, mileage, tire pressure, fuel use, braking, engine/battery status, driver behavior and more.

But as usual, anything that’s connected to the internet is vulnerable to exploits and telematics is no exception. If hackers manage to intercept your connection, they can track your vehicle and even control it remotely. Quite scary!

Safety tip: Before you get a car with built-in telematics, consult with your car dealer about the cybersecurity measures they’re employing on connected cars. If you do have a connected car, make sure its software is always up-to-date.

5. Networking attacks

Aside from taking over your car via telematics, hackers can also employ old-school denial-of-service attacks to overwhelm your car and potentially shut down critical functions like airbags, anti-lock brakes, and door locks. Since some connected cars even have built-in Wi-Fi hotspot capabilities, this attack is completely feasible. As with regular home Wi-Fi networks, they can even steal your personal data if they manage to infiltrate your car’s local network.

Also, it’s a matter of physical safety. Remember, modern cars are basically run by multiple computers and Engine Control Modules (ECMs) and if hackers can shut these systems down, they can put you in grave danger.

Safety tip: Changing your car’s onboard Wi-Fi network’s password regularly is a must.

6. Onboard diagnostics (OBD) hacks

Did you know that virtually every car has an onboard diagnostics (OBD) port? This is an interface that allows mechanics to access your car’s data to read error codes, statistics and even program new keys.

It turns out, anyone can buy exploit kits that can utilize this port to replicate keys and program new ones to use them for stealing vehicles. Now, that’s something that you don’t want to be a victim of.

Safety tip: Always go to a reputable mechanic. Plus, a physical steering wheel lock can also help.

7. In-car phishing

Another old-school internet hack is also making its way to connected cars, specifically models with internet connectivity and built-in web browsers.

Yep, it’s the old phishing scheme and crooks can send you emails and messages with malicious links and attachments that can install malware on your car’s system. As usual, once malware is installed, anything’s possible. Worse yet, car systems don’t have built-in malware protections (yet), so this can be hard to spot.

Safety tip: Practice good computer safety practices even when connected to your car. Never open emails and messages nor follow links from unknown sources.

How about car insurance?

Unfortunately, this rise in car theft numbers will not only put your keyless car at increased risk, but it can also hike up your insurance rates as well.

If you have a keyless car, please check your car insurance and see if it’s covered against car hacks. Since these types of crimes are relatively new, there might be some confusion on who’s going to be liable for what — will it be the driver, the car maker or the car computer developer?

According to financial advice site MoneySupermarket, most car insurance policies currently have these in place when dealing with emerging car technologies:

  • Drivers have one insurance policy that covers both manual and autonomous (self-driving) car modes.
  • If the driver of a self-driving car inflicts injury or damage to a third party, that party can claim against that driver’s car insurer regardless of what driving mode the car was in when the accident occurred.
  • Now here’s the part that covers car theft due to key fob and wireless attacks. Apparently, drivers won’t be liable for faults and weaknesses in their car’s systems and they will be able to file a claim if they are injured or have suffered loss because of those faults.

With key fob relay car theft and hacks, MoneySupermarket said that insurance companies will pay out as long as the car owner has taken reasonable steps to protect their vehicle.

However, if your particular car model is a common target for keyless theft, car insurance companies may charge you higher premiums.

Steps to stop relay attacks

But still, it’s important to have the best possible protection against these emerging car crimes.

There are a few easy ways to block key fob attacks. You can buy a signal-blocking pouch that can hold your keys, like a shielded RFID-blocking pouch.

Stick it in the fridge…

If you don’t want to spend any money, you can stick your key fob into the refrigerator or freezer. The multiple layers of metal will block your key fob’s signal. Just check with the fob’s manufacturer to make sure freezing your key fob won’t damage it.

…or even inside the microwave

If you’re not keen to freeze your key fob, you can do the same thing with your microwave oven. (Hint: Don’t turn it on.) Stick your key fob in there, and criminals won’t be able to pick up its signal. Like any seasoned criminal, they’ll just move on to an easier target.

Wrap your keyfob in foil

Since your key fob’s signal is blocked by metal, you can also wrap it up in aluminum foil. While that’s the easiest solution, it can also leak the signal if you don’t do it right. Plus, you might need to stock up on foil. You could also make a foil-lined box to put your keys in, if you’re in a crafting mood.

 

Source: https://www.komando.com/happening-now/495924/7-clever-ways-hackers-are-stealing-keyless-cars

Google needs to apologize for violating the trust of its users once again


  • An Associated Press investigation recently discovered that Google still collects its users’ location data even if they have their Location History turned off.
  • After the report was published, Google quietly updated its help page to describe how location settings work.
  • Previously, the page said “with Location History off, the places you go are no longer stored.”
  • Now, the page says, “This setting does not affect other location services on your device,” adding that “some location data may be saved as part of your activity on other services, like Search and Maps.”
  • The quiet changing of false information is a major violation of users’ trust.
  • Google needs to do better.

Google this week acknowledged that it quietly tracks its users’ locations, even if those people turn off their Location History — a clarification that came in the wake of an Associated Press investigation.

It’s a major violation of users’ trust.

And yet, nothing is going to happen as a result of this episode.

It’s happened before

Google has a history of bending the rules:

  • In 2010, Google’s Street View cars were caught eavesdropping on people’s Wi-Fi connections.
  • In 2011, Google agreed to forfeit $500 million after a criminal investigation by the Justice Department found that Google illegally allowed advertisements from online Canadian pharmacies to sell their products in the US.
  • In 2012, Google circumvented the no-cookies policy on Apple’s Safari web browser and paid a $22.5 million fine to the Federal Trade Commission as a result.

Ultimately, Google came out of all of these incidents just fine. It paid some money here and there, and sat in a few courtrooms, but nothing really happened to the company’s bottom line. People continued using Google’s services.

Other companies have done it too

Remember Cambridge Analytica?

Five months ago, in March, a 28-year-old named Christopher Wylie blew the whistle on his former employer, the data-analytics company Cambridge Analytica, where he had served as director of research.

It was later revealed that Cambridge Analytica had collected the data of over 87 million Facebook users in an attempt to influence the 2016 presidential election in favor of the Republican candidate, Donald Trump.

One month later, Facebook CEO Mark Zuckerberg was summoned in front of Congress to answer questions related to the Cambridge Analytica scandal over a two-day span.

Facebook CEO Mark Zuckerberg takes a drink of water while testifying before a joint hearing of the Commerce and Judiciary Committees on Capitol Hill in Washington on April 10, 2018, about the use of Facebook data to target American voters in the 2016 election.

Many users felt like their trust was violated. A hashtag movement called “#DeleteFacebook” was born.

And yet, nothing has really changed at Facebook since that scandal, which similarly involved the improper collection of user data, and the violation of users’ trust.

Facebook seems to be doing just fine. During its Q2 earnings report in late July, Facebook reported over $13 billion in revenue — a 42% jump year-over-year — and an 11% increase in both daily and monthly active users.

In short, Facebook is not going anywhere. And neither is Google.

Too big — and too good — to fail

Just like Facebook has no equal among the hundreds of other social networks out there, the same goes for Google and competing search engines.

According to StatCounter, Google has a whopping 90% share of the global search engine market.

The next biggest search engine in the world is Microsoft’s Bing, which has a paltry 3% market share.

In other words, a cataclysmic event would have to occur for people to switch search engines. Or, another search engine would have to come along and completely unseat Google.

But that’s probably not going to happen.


For almost 20 years now, Google has dominated the search engine game. Its other services have become similarly prevalent: Gmail and Google Docs have become integral parts of people’s personal and work lives. Of course, there are similar mail and productivity services out there, but Google is often the more convenient choice: most people use more than one Google product, and having all of your applications talk to each other and share information is a real advantage.

This isn’t meant to cry foul: Google is one of the top software makers in the world, but it has earned that status by constantly improving and iterating on its products, and even itself, over the past two decades. But one does wonder what event, if any, could possibly make people quit a service as big and convenient and powerful as Google once and for all.

The fact is: that probably won’t happen. People likely won’t quit Google’s services unless there’s some major degradation of quality. But Google, as a leader in Silicon Valley, should strive to do better for its customers. Intentional or not, misleading customers about location data is a bad thing. Google failed its customers: it let users think they had more control than they actually did, and it corrected its language about location data only after a third-party investigation. There was no public acknowledgement of an error, and no mea culpa.

Google owes its users a true apology. Quietly updating an online help page isn’t good enough.

 

http://uk.businessinsider.com/google-location-data-violates-user-trust-nothing-will-happen-2018-8?r=US&IR=T

Microsoft wants regulation of facial recognition technology to limit “abuse”

Facial recognition put to the test

Microsoft has helped innovate facial recognition software. Now it’s urging the US government to enact regulation to control the use of the technology.

In a blog post, Microsoft (MSFT) President Brad Smith said new laws are necessary given the technology’s “broad societal ramifications and potential for abuse.”

He urged lawmakers to form “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

Facial recognition — a computer’s ability to identify or verify people’s faces from a photo or through a camera — has been developing rapidly. Apple (AAPL), Google (GOOG), Amazon and Microsoft are among the big tech companies developing and selling such systems. The technology is being used across a range of industries, from private businesses like hotels and casinos, to social media and law enforcement.

Supporters say facial recognition software improves safety for companies and customers and can help police track down criminals or find missing children. Civil rights groups warn it can infringe on privacy and allow for illegal surveillance and monitoring. There is also room for error, they argue, since the still-emerging technology can result in false identifications.

The accuracy of facial recognition technologies varies, with women and people of color being identified with less accuracy, according to MIT research.

“Facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?” Smith wrote on Friday.

Smith’s call for a regulatory framework to control the technology comes as tech companies face criticism over how they’ve handled and shared customer data, as well as their cooperation with government agencies.

Last month, Microsoft was scrutinized for its working relationship with US Immigration and Customs Enforcement. ICE had been enforcing the Trump administration’s „zero tolerance“ immigration policy that separated children from their parents when they crossed the US border illegally. The administration has since abandoned the policy.

Microsoft urges Trump administration to change its policy separating families at border

Microsoft wrote a blog post in January about ICE’s use of its cloud technology Azure, saying it could help the agency “accelerate facial recognition and identification.”

After questions arose about whether Microsoft’s technology had been used by ICE agents to carry out the controversial border separations, the company released a statement calling the policy “cruel” and “abusive.”

In his post, Smith reiterated Microsoft’s opposition to the policy and said he had confirmed its contract with ICE does not include facial recognition technology.

Amazon (AMZN) has also come under fire from its own shareholders and civil rights groups over local police forces using its face-identifying software Rekognition, which can identify up to 100 people in a single photo.

Some Amazon shareholders coauthored a letter pressuring Amazon to stop selling the technology to the government, saying it was aiding in mass surveillance and posed a threat to privacy rights.

Amazon asked to stop selling facial recognition technology to police

And Facebook (FB) is embroiled in a class-action lawsuit that alleges the social media giant used facial recognition on photos without user permission. Its facial recognition tool scans your photos and suggests you tag friends.

Neither Amazon nor Facebook immediately responded to a request for comment about Smith’s call for new regulations on face ID technology.

Smith said companies have a responsibility to police their own innovations, control how they are deployed and ensure that they are used in “a manner consistent with broadly held societal values.”

“It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike,” he said.

https://money.cnn.com/2018/07/14/technology/microsoft-facial-recognition-letter-government/index.html

Hey Alexa, What Are You Doing to My Kid’s Brain?

“Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Among the more modern anxieties of parents today is how virtual assistants will train their children to act. The fear is that kids who habitually order Amazon’s Alexa to read them a story or command Google’s Assistant to tell them a joke are learning to communicate not as polite, considerate citizens, but as demanding little twerps.

This worry has become so widespread that Amazon and Google both announced this week that their voice assistants can now encourage kids to punctuate their requests with “please.” The version of Alexa that inhabits the new Echo Dot Kids Edition will thank children for “asking so nicely.” Google Assistant’s forthcoming Pretty Please feature will remind kids to “say the magic word” before complying with their wishes.

But many psychologists think kids being polite to virtual assistants is less of an issue than parents think, and may even be a red herring. As virtual assistants become increasingly capable, conversational, and prevalent (assistant-embodied devices are forecast to outnumber humans), psychologists and ethicists are asking deeper, more subtle questions than “will Alexa make my kid bossy?” And they want parents to do the same.

“When I built my first virtual child, I got a lot of pushback and flak,” recalls developmental psychologist Justine Cassell, director emeritus of Carnegie Mellon’s Human-Computer Interaction Institute and an expert in the development of AI interfaces for children. It was the early aughts, and Cassell, then at MIT, was studying whether a life-sized, animated kid named Sam could help flesh-and-blood children hone their cognitive, social, and behavioral skills. “Critics worried that the kids would lose track of what was real and what was pretend,” Cassell says. “That they’d no longer be able to tell the difference between virtual children and actual ones.”

But when you asked the kids whether Sam was a real child, they’d roll their eyes. Of course Sam isn’t real, they’d say. There was zero ambiguity.

Nobody knows for sure, and Cassell emphasizes that the question deserves study, but she suspects today’s children will grow up similarly attuned to the virtual nature of our device-dwelling digital sidekicks—and, by extension, the context in which they do or do not need to be polite. Kids excel, she says, at dividing the world into categories. As long as they continue to separate humans from machines, she says, there’s no need to worry. “Because isn’t that actually what we want children to learn—not that everything that has a voice should be thanked, but that people have feelings?”

Point taken. But what about Duplex, I ask, Google’s new human-sounding, phone calling AI? Well, Cassell says, that complicates matters. When you can’t tell if a voice belongs to a human or a machine, she says, perhaps it’s best to assume you’re talking to a person, to avoid hurting a human’s feelings. But the real issue there isn’t politeness, it’s disclosure; artificial intelligences should be designed to identify themselves as such.

What’s more, the implications of a kid interacting with an AI extend far deeper than whether she recognizes it as non-human. „Of course parents worry about these devices reinforcing negative behaviors, whether it’s being sassy or teasing a virtual assistant,” says Jenny Radesky, a developmental behavioral pediatrician at the University of Michigan and co-author of the latest guidelines for media use from the American Academy of Pediatrics. “But I think there are bigger questions surrounding things like kids’ cognitive development—the way they consume information and build knowledge.”

Consider, for example, that the way kids interact with virtual assistants may not actually help them learn. An advertisement for the Echo Dot Kids Edition ends with a girl asking her smart speaker the distance to the Andromeda Galaxy. As the camera zooms out, we hear Alexa rattle off the answer: “The Andromeda Galaxy is 14 quintillion, 931 quadrillion, 389 trillion, 517 billion, 400 million miles away.”

To parents it might register as a neat feature. Alexa knows answers to questions that you don’t! But most kids don’t learn by simply receiving information. “Learning happens when a child is challenged,” Cassell says, “by a parent, by another child, a teacher—and they can argue back and forth.”

Virtual assistants can’t do that yet, which highlights the importance of parents using smart devices with their kids. At least for the time being. Our digital butlers could be capable of brain-building banter sooner than you think.

This week, Google announced its smart speakers will remain activated several seconds after you issue a command, allowing you to engage in continuous conversation without repeating “Hey, Google,” or “OK, Google.” For now, the feature will allow your virtual assistant to keep track of contextually dependent follow-up questions. (If you ask what movies George Clooney has starred in and then ask how tall he is, Google Assistant will recognize that “he” refers to George Clooney.) It’s a far cry from a dialectic exchange, but it charts a clear path toward more conversational forms of inquiry and learning.

And, perhaps, something even more. “I think it’s reasonable to ask if parenting will become a skill that, like Go or chess, is better performed by a machine,” says John Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “What do we do if a kid starts saying: Look, I appreciate the parents in my house, because they put me on the map, biologically. But dad tells a lot of lame dad jokes. And mom is kind of a helicopter parent. And I really prefer the knowledge, wisdom, and insight given to me by my devices.”

Havens jokes that he sounds paranoid, because he’s speculating about what-if scenarios from the future. But what about the more near-term? If you start handing duties over to the machine, how do you take them back the day your kid decides Alexa is a higher authority than you are on, say, trigonometry?

Other experts I spoke with agreed it’s not too early for parents to begin thinking deeply about the long-term implications of raising kids in the company of virtual assistants. „I think these tools can be awesome, and provide quick fixes to situations that involve answering questions and telling stories that parents might not always have time for,“ Radesky says. „But I also want parents to consider how that might come to displace some of the experiences they enjoy sharing with kids.“

Other things Radesky, Cassell, and Havens think parents should consider? The extent to which kids understand privacy issues related to internet-connected toys. How their children interact with devices at their friends’ houses. And what information other families’ devices should be permitted to collect about their kids. In other words: how do children conceptualize the algorithms that serve up facts and entertainment, learn about them, and potentially profit from them?

“The fact is, very few of us sit down and talk with our kids about the social constructs surrounding robots and virtual assistants,” Radesky says.

Perhaps that, more than whether their children say “please” and “thank you” to the smart speaker in the living room, is what parents should be thinking about.

Source:
https://www.wired.com/story/hey-alexa-what-are-you-doing-to-my-kids-brain/

Lawmakers, child development experts, and privacy advocates are expressing concerns about two new Amazon products targeting children, questioning whether they prod kids to be too dependent on technology and potentially jeopardize their privacy.

In a letter to Amazon CEO Jeff Bezos on Friday, two members of the bipartisan Congressional Privacy Caucus raised concerns about Amazon’s smart speaker Echo Dot Kids and a companion service called FreeTime Unlimited that lets kids access a children’s version of Alexa, Amazon’s voice-controlled digital assistant.

“While these types of artificial intelligence and voice recognition technology offer potentially new educational and entertainment opportunities, Americans’ privacy, particularly children’s privacy, must be paramount,” wrote Senator Ed Markey (D-Massachusetts) and Representative Joe Barton (R-Texas), both cofounders of the privacy caucus.

The letter includes a dozen questions, including requests for details about how audio of children’s interactions is recorded and saved, parental control over deleting recordings, a list of third parties with access to the data, whether data will be used for marketing purposes, and Amazon’s intentions on maintaining a profile on kids who use these products.

In a statement, Amazon said it “takes privacy and security seriously.” The company said “Echo Dot Kids Edition uses on-device software to detect the wake word and only the wake word. Only once the wake word is detected does it start streaming to the cloud, and it will present a visual indication (the light ring at the top of the device turns blue) to show that it is streaming to the cloud.”

Echo Dot Kids is the latest in a wave of products from dominant tech players targeting children, including Facebook’s communications app Messenger Kids and Google’s YouTube Kids, both of which have been criticized by child health experts concerned about privacy and developmental issues.

Like Amazon, toy manufacturers are also interested in developing smart speakers that would live in a child’s room. In September, Mattel pulled Aristotle, a smart speaker and digital assistant aimed at children, after a similar letter from Markey and Barton, as well as a petition that garnered more than 15,000 signatures.

One of the organizers of the petition, the nonprofit group Campaign for a Commercial Free Childhood, is now spearheading a similar effort against Amazon. In a press release Friday, timed to the letter from Congress, a group of child development and privacy advocates urged parents not to purchase Echo Dot Kids because the device and companion voice service pose a threat to children’s privacy and well-being.

“Amazon wants kids to be dependent on its data-gathering device from the moment they wake up until they go to bed at night,” said the group’s executive director Josh Golin. “The Echo Dot Kids is another unnecessary ‘must-have’ gadget, and it’s also potentially harmful. AI devices raise a host of privacy concerns and interfere with the face-to-face interactions and self-driven play that children need to thrive.”

FreeTime on Alexa includes content targeted at children, like kids’ books and Alexa skills from Disney, Nickelodeon, and National Geographic. It also features parental controls, such as song filtering, bedtime limits, disabled voice purchasing, and positive reinforcement for using the word “please.”

Despite such controls, the child health experts warning against Echo Dot Kids wrote, “Ultimately, though, the device is designed to make kids dependent on Alexa for information and entertainment. Amazon even encourages kids to tell the device ‘Alexa, I’m bored,’ to which Alexa will respond with branded games and content.”

In Amazon’s April press release announcing Echo Dot Kids, the company quoted one representative of a children-focused nonprofit who supported the product: Stephen Balkam, founder and CEO of the Family Online Safety Institute. Balkam referenced a report from his institute, which found that the majority of parents were comfortable with their child using a smart speaker. Although it was not noted in the press release, Amazon is a member of FOSI and has an executive on the board.

In a statement to WIRED, Amazon said, “We believe one of the core benefits of FreeTime and FreeTime Unlimited is that the services provide parents the tools they need to help manage the interactions between their child and Alexa as they see fit.” Amazon said parents can review and listen to their children’s voice recordings in the Alexa app, review FreeTime Unlimited activity via the Parent Dashboard, set bedtime limits, or pause the device whenever they’d like.

Balkam said his institute disclosed Amazon’s funding of its research on its website and the cover of its report. Amazon did not initiate the study. Balkam said the institute annually proposes a research project, and reaches out to its members, a group that also includes Facebook, Google, and Microsoft, who pay an annual stipend of $30,000. “Amazon stepped up and we worked with them. They gave us editorial control and we obviously gave them recognition for the financial support,” he said.

Balkam says Echo Dot Kids addresses concerns from parents about excessive screen time. “It’s screen-less, it’s very interactive, it’s kid friendly,” he said, pointing out Alexa skills that encourage kids to go outside.

In its review of the product, BuzzFeed wrote, “Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Source:
https://www.wired.com/story/congress-privacy-groups-question-amazons-echo-dot-for-kids/

Let’s Get Rid of the “Nothing to Hide, Nothing to Fear” Mentality

With Zuckerberg testifying to the US Congress over Facebook’s data privacy practices and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.

Suffice to say that neither of the above compares to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.

In 2013, Edward Snowden blew the whistle on the systematic US spy program he helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the details of PRISM, an NSA program that collects internet communications data from US technology companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, for instances where corporate control wasn’t achievable, the NSA enticed third-party countries to clandestinely tap communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of the web activity of anyone using the internet.

But this is just in the US, right? Surely policies like this wouldn’t be implemented in Europe.

Wrong, unfortunately.

GCHQ, the UK’s intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications passing through submarine fibre-optic cables and store the information for 30 days at its Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. Many of these techniques have been made possible by the 2016 Investigatory Powers Act, which Snowden describes as the most “extreme surveillance in the history of western democracy”.

But despite all this, the age-old refrain “if you’ve got nothing to hide, you’ve got nothing to fear” often rings out in debates over privacy.

Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.

In drafting legislation for the Investigatory Powers Act, Theresa May said that such extremes were necessary to ensure “no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar”.

When levelled against the fear of terrorism and death, it’s easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted: privacy should and always will be a social norm, despite what Mark Zuckerberg said in 2010. Although I’m sure he would have a different answer now.

So you just read emails and look at cat memes online; why would you care about privacy?

In the same way we’re able to close our living room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It’s a well-rehearsed idea that nowadays we’re more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we’re being watched, we alter our behaviour in line with what’s expected.

As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Our thinking aligns with the status quo, and we lose the boundless ability of the internet to explore and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.

This draws obvious comparisons with Bentham’s Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: there is a central guard tower surrounded by cells, and in the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell’s 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.

In reality, there’s actually a lot more at stake here.

With the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.

Kafka’s The Trial, in which a bureaucracy uses citizens’ information to make decisions about them but denies them the ability to participate in how their information is used, therefore seems a more apt comparison. The issue here is that corporations and, even more so, states have been allowed to comb our data and make decisions that affect us without our consent.

Maybe, as a member of a western democracy, you don’t think this matters. But what if you’re a member of a minority group in an oppressive regime? What if you’re arrested because a computer algorithm can’t separate humour from intent to harm?

On the other hand, maybe you trust the intentions of your government. But how much faith do you have in it to keep your data private? The recent hack of the SEC shows that even government systems aren’t safe from attackers. When a business database is breached, maybe your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you’ve lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control physical clouds, he who controls the modern cloud will rule the world.

Perhaps you think that even this doesn’t matter: if it allows the government to protect us from those who intend to cause harm, then it’s worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.

Intelligence is the strategic collection of pertinent facts; bulk data collection cannot, therefore, be intelligent. As Bill Binney puts it, “bulk data kills people”, because technicians are so overwhelmed that they can’t isolate what’s useful. Data collection as it stands can only focus on retribution rather than reduction.

Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The “nothing to hide, nothing to fear” mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to have in that relationship.

To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.

http://behindthebrowser.space/index.php/2018/04/22/nothing-to-fear-nothing-to-hide/

Most dangerous attack techniques, and what’s coming next 2018

RSA Conference 2018

Experts from SANS presented the five most dangerous new cyber attack techniques in their annual RSA Conference 2018 keynote session in San Francisco, and shared their views on how they work, how they can be stopped or at least slowed, and how businesses and consumers can prepare.


The five threats outlined are:

1. Repositories and cloud storage data leakage
2. Big Data analytics, de-anonymization, and correlation
3. Attackers monetize compromised systems using crypto coin miners
4. Recognition of hardware flaws
5. More malware and attacks disrupting ICS and utilities instead of seeking profit

Repositories and cloud storage data leakage

Ed Skoudis, lead for the SANS Penetration Testing Curriculum, talked about the data leakage threats facing us from the increased use of repositories and cloud storage:

“Software today is built in a very different way than it was 10 or even 5 years ago, with vast online code repositories for collaboration and cloud data storage hosting mission-critical applications. However, attackers are increasingly targeting these kinds of repositories and cloud storage infrastructures, looking for passwords, crypto keys, access tokens, and terabytes of sensitive data.”

He continued: “Defenders need to focus on data inventories, appointing a data curator for their organization and educating system architects and developers about how to secure data assets in the cloud. Additionally, the big cloud companies have each launched an AI service to help classify and defend data in their infrastructures. And finally, a variety of free tools are available that can help prevent and detect leakage of secrets through code repositories.”
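As an illustration of the kind of checks those free secret-scanning tools perform, here is a minimal sketch in Python. The regex patterns and the sample string are illustrative assumptions, not the rules of any particular tool; production scanners combine many more patterns with entropy analysis of candidate strings.

```python
import re

# Illustrative patterns for common secret formats (assumed for this sketch);
# real scanners ship far larger rule sets plus entropy-based heuristics.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api|secret)[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (pattern_name, match) pairs found in a blob of text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

sample = 'config = {"aws_key": "AKIAABCDEFGHIJKLMNOP"}'
print(scan_text(sample))  # flags the AWS-style key
```

Run against every file in a repository's history (not just the current tree), this kind of check catches credentials that were committed and later "deleted" but remain in old revisions.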

Big Data analytics, de-anonymization, and correlation

Skoudis went on to talk about the threat of Big Data Analytics and how attackers are using data from several sources to de-anonymise users:

“In the past, we battled attackers who were trying to get access to our machines to steal data for criminal use. Now the battle is shifting from hacking machines to hacking data — gathering data from disparate sources and fusing it together to de-anonymise users, find business weaknesses and opportunities, or otherwise undermine an organisation’s mission. We still need to prevent attackers from gaining shell on targets to steal data. However, defenders also need to start analysing risks associated with how their seemingly innocuous data can be combined with data from other sources to introduce business risk, all while carefully considering the privacy implications of their data and its potential to tarnish a brand or invite regulatory scrutiny.”

Attackers monetize compromised systems using crypto coin miners

Johannes Ullrich is Dean of Research at the SANS Institute and Director of the SANS Internet Storm Center. He has been looking at the increasing use of crypto coin miners by cyber criminals:

“Last year, we talked about how ransomware was used to sell data back to its owner and crypto-currencies were the tool of choice to pay the ransom. More recently, we have found that attackers are no longer bothering with data. Due to the flood of stolen data offered for sale, the value of most commonly stolen data like credit card numbers or PII has dropped significantly. Attackers are instead installing crypto coin miners. These attacks are stealthier and less likely to be discovered, and attackers can earn tens of thousands of dollars a month from crypto coin miners. Defenders therefore need to learn to detect these coin miners and to identify the vulnerabilities that have been exploited in order to install them.”
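One practical starting point for the detection Ullrich calls for is triaging outbound network connections against miner indicators. A minimal sketch; the host and port lists below are illustrative assumptions, not a vetted threat feed:

```python
# Flag connections that point at known mining-pool hosts or at ports
# commonly used by the stratum mining protocol. In practice the indicator
# lists would come from a maintained threat-intelligence feed.
SUSPECT_HOSTS = {"pool.minexmr.com", "xmrpool.eu"}   # example indicators
SUSPECT_PORTS = {3333, 4444, 5555, 14444}            # common stratum ports

def flag_miners(connections):
    """connections: iterable of (process_name, remote_host, remote_port)."""
    return [
        conn for conn in connections
        if conn[1] in SUSPECT_HOSTS or conn[2] in SUSPECT_PORTS
    ]

conns = [
    ("chrome", "example.org", 443),
    ("svchost", "pool.minexmr.com", 3333),
]
print(flag_miners(conns))  # only the pool connection is flagged
```

Sustained high CPU load on servers that should be idle is the other cheap signal; either finding should then trigger the vulnerability hunt Ullrich recommends, since the miner marks the entry point of a deeper compromise.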

Recognition of hardware flaws

Ullrich then went on to say that software developers often assume that hardware is flawless and that this is a dangerous assumption. He explains why and what needs to be done:

“Hardware is no less complex than software, and mistakes have been made in developing hardware just as they are made by software developers. Patching hardware is a lot more difficult and often not possible without replacing entire systems or suffering significant performance penalties. Developers therefore need to learn to create software without relying on hardware to mitigate any security issues. Similar to the way in which software uses encryption on untrusted networks, software needs to authenticate and encrypt data within the system. Some emerging homomorphic encryption algorithms may allow developers to operate on encrypted data without having to decrypt it first.”
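To make the homomorphic idea concrete: textbook RSA already exhibits a multiplicative homomorphic property, where multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The toy sketch below uses the standard insecure textbook parameters purely to demonstrate the principle; the schemes Ullrich alludes to (e.g. Paillier or lattice-based constructions) are far more involved and support richer operations.

```python
# Textbook RSA with tiny, insecure parameters: p=61, q=53, n=3233,
# e=17, d=413 (the classic worked example). Demonstrates computing on
# encrypted data: enc(a) * enc(b) mod n decrypts to a * b.
n, e, d = 3233, 17, 413

def enc(m):
    return pow(m, e, n)   # modular exponentiation: m^e mod n

def dec(c):
    return pow(c, d, n)   # c^d mod n

a, b = 6, 7
product_cipher = (enc(a) * enc(b)) % n   # multiply ciphertexts only
print(dec(product_cipher))               # recovers a * b = 42
```

The point for system designers is that data can, in principle, remain encrypted while untrusted hardware or software operates on it, so a hardware flaw leaks only ciphertext.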


More malware and attacks disrupting ICS and utilities instead of seeking profit

Finally, James Lyne, Head of R&D at the SANS Institute, discussed the growing trend of malware and attacks that aren’t profit-centred, as we have largely seen in the past, but are instead focused on disrupting Industrial Control Systems (ICS) and utilities:

“Day to day the grand majority of malicious code has undeniably been focused on fraud and profit. Yet, with the relentless deployment of technology in our societies, the opportunity for political or even military influence only grows. And rare publicly visible attacks like Triton/TriSYS show the capability and intent of those who seek to compromise some of the highest risk components of industrial environments, i.e. the safety systems which have historically prevented critical security and safety meltdowns.”

He continued: “ICS systems are relatively immature and easy to exploit in comparison to the mainstream computing world. Many ICS systems lack the mitigations of modern operating systems and applications. The reliance on obscurity or isolation (both increasingly untrue) do not position them well to withstand a heightened focus on them, and we need to address this as an industry. More worrying is that attackers have demonstrated they have the inclination and resources to diversify their attacks, targeting the sensors that are used to provide data to the industrial controllers themselves. The next few years are likely to see some painful lessons being learned as this attack domain grows, since the mitigations are inconsistent and quite embryonic.”

Source: https://www.helpnetsecurity.com/2018/04/23/dangerous-attack-techniques/