Category archive: Privacy

Google needs to apologize for violating the trust of its users once again

Google senior vice president of product Sundar Pichai delivers the keynote address during the 2015 Google I/O conference in San Francisco, California. (Photo: Justin Sullivan/Getty Images)

  • An Associated Press investigation recently discovered that Google still collects its users' location data even if they have their Location History turned off.
  • After the report was published, Google quietly updated its help page to describe how location settings work.
  • Previously, the page said "with Location History off, the places you go are no longer stored."
  • Now, the page says, "This setting does not affect other location services on your device," adding that "some location data may be saved as part of your activity on other services, like Search and Maps."
  • The quiet changing of false information is a major violation of users' trust.
  • Google needs to do better.

Google this week acknowledged that it quietly tracks its users' locations, even if those people turn off their Location History — a clarification that came in the wake of an Associated Press investigation.

It's a major violation of users' trust.

And yet, nothing is going to happen as a result of this episode.

It’s happened before

Google has a history of bending the rules:

  • In 2010, Google’s Street View cars were caught eavesdropping on people’s Wi-Fi connections.
  • In 2011, Google agreed to forfeit $500 million after a criminal investigation by the Justice Department found that Google had illegally allowed online Canadian pharmacies to advertise prescription drugs to US consumers.
  • In 2012, Google circumvented the no-cookies policy on Apple’s Safari web browser and paid a $22.5 million fine to the Federal Trade Commission as a result.

Ultimately, Google came out of all of these incidents just fine. It paid some money here and there, and sat in a few courtrooms, but nothing really happened to the company’s bottom line. People continued using Google’s services.

Other companies have done it too

Remember Cambridge Analytica?

Five months ago, in March, a 28-year-old named Christopher Wylie blew the whistle on his former employer, the data-analytics company Cambridge Analytica, where he had served as director of research.

It was later revealed that Cambridge Analytica had collected the data of over 87 million Facebook users in an attempt to influence the 2016 presidential election in favor of the Republican candidate, Donald Trump.

One month later, Facebook CEO Mark Zuckerberg was summoned before Congress to answer questions related to the Cambridge Analytica scandal over a two-day span.

Facebook CEO Mark Zuckerberg takes a drink of water while testifying before a joint hearing of the Commerce and Judiciary Committees on Capitol Hill in Washington, Tuesday, April 10, 2018, about the use of Facebook data to target American voters in the 2016 election. (Photo: AP/Andrew Harnik)

Many users felt like their trust was violated. A hashtag movement called "#DeleteFacebook" was born.

And yet, nothing has really changed at Facebook since that scandal, which similarly involved the improper collection of user data, and the violation of users' trust.

Facebook seems to be doing just fine. During its Q2 earnings report in late July, Facebook reported over $13 billion in revenue — a 42% jump year-over-year — and an 11% increase in both daily and monthly active users.

In short, Facebook is not going anywhere. And neither is Google.

Too big — and too good — to fail

Just like Facebook has no equal among the hundreds of other social networks out there, the same goes for Google and competing search engines.

According to StatCounter, Google has a whopping 90% share of the global search engine market.

The next biggest search engine in the world is Microsoft’s Bing, which has a paltry 3% market share.

In other words, a cataclysmic event would have to occur for people to switch search engines. Or, another search engine would have to come along and completely unseat Google.

But that’s probably not going to happen.


For almost 20 years now, Google has dominated the search engine game. Its other services have become similarly prevalent: Gmail and Google Docs have become integral parts of people's personal and work lives. Of course, there are similar mail and productivity services out there, but using Google is far more convenient: most people use more than one Google product, and having all of your applications talk to each other and share information is a powerful draw.

This isn’t meant to cry foul: Google is one of the top software makers in the world, but it has earned that status by constantly improving and iterating on its products, and even itself, over the past two decades. But one does wonder what event, if any, could possibly make people quit a service as big and convenient and powerful as Google once and for all.

The fact is: That probably won't happen. People likely won't quit Google's services unless there's some major degradation of quality. But Google, as a leader in Silicon Valley, should strive to do better for its customers. Intentional or not, misleading customers about location data is a bad thing. Google failed its customers: It let users think they had more control than they actually did, and it corrected its language about location data only after a third-party investigation. There was no public acknowledgement of an error, and no mea culpa.

Google owes its users a true apology. Quietly updating an online help page isn’t good enough.

 

http://uk.businessinsider.com/google-location-data-violates-user-trust-nothing-will-happen-2018-8?r=US&IR=T


Microsoft wants regulation of facial recognition technology to limit 'abuse'

Facial recognition put to the test

Microsoft has helped innovate facial recognition software. Now it’s urging the US government to enact regulation to control the use of the technology.

In a blog post, Microsoft (MSFT) President Brad Smith said new laws are necessary given the technology's "broad societal ramifications and potential for abuse."

He urged lawmakers to form "a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission."

Facial recognition — a computer’s ability to identify or verify people’s faces from a photo or through a camera — has been developing rapidly. Apple (AAPL), Google (GOOG), Amazon and Microsoft are among the big tech companies developing and selling such systems. The technology is being used across a range of industries, from private businesses like hotels and casinos, to social media and law enforcement.

Supporters say facial recognition software improves safety for companies and customers and can help police track down criminals or find missing children. Civil rights groups warn it can infringe on privacy and allow for illegal surveillance and monitoring. There is also room for error, they argue, since the still-emerging technology can result in false identifications.

The accuracy of facial recognition technologies varies, with women and people of color being identified with less accuracy, according to MIT research.

"Facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?" Smith wrote on Friday.

Smith’s call for a regulatory framework to control the technology comes as tech companies face criticism over how they’ve handled and shared customer data, as well as their cooperation with government agencies.

Last month, Microsoft was scrutinized for its working relationship with US Immigration and Customs Enforcement. ICE had been enforcing the Trump administration’s „zero tolerance“ immigration policy that separated children from their parents when they crossed the US border illegally. The administration has since abandoned the policy.

Microsoft urges Trump administration to change its policy separating families at border

Microsoft wrote a blog post in January about ICE's use of its cloud technology Azure, saying it could help it "accelerate facial recognition and identification."

After questions arose about whether Microsoft's technology had been used by ICE agents to carry out the controversial border separations, the company released a statement calling the policy "cruel" and "abusive."

In his post, Smith reiterated Microsoft’s opposition to the policy and said he had confirmed its contract with ICE does not include facial recognition technology.

Amazon (AMZN) has also come under fire from its own shareholders and civil rights groups over local police forces using its face-identifying software Rekognition, which can identify up to 100 people in a single photo.

Some Amazon shareholders coauthored a letter pressuring Amazon to stop selling the technology to the government, saying it was aiding in mass surveillance and posed a threat to privacy rights.

Amazon asked to stop selling facial recognition technology to police

And Facebook (FB) is embroiled in a class-action lawsuit that alleges the social media giant used facial recognition on photos without user permission. Its facial recognition tool scans your photos and suggests you tag friends.

Neither Amazon nor Facebook immediately responded to a request for comment about Smith’s call for new regulations on face ID technology.

Smith said companies have a responsibility to police their own innovations, control how they are deployed and ensure that they are used in "a manner consistent with broadly held societal values."

"It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike," he said.

https://money.cnn.com/2018/07/14/technology/microsoft-facial-recognition-letter-government/index.html

Hey Alexa, What Are You Doing to My Kid’s Brain?

“Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Among the more modern anxieties of parents today is how virtual assistants will train their children to act. The fear is that kids who habitually order Amazon’s Alexa to read them a story or command Google’s Assistant to tell them a joke are learning to communicate not as polite, considerate citizens, but as demanding little twerps.

This worry has become so widespread that Amazon and Google both announced this week that their voice assistants can now encourage kids to punctuate their requests with "please." The version of Alexa that inhabits the new Echo Dot Kids Edition will thank children for "asking so nicely." Google Assistant's forthcoming Pretty Please feature will remind kids to "say the magic word" before complying with their wishes.

But many psychologists think kids being polite to virtual assistants is less of an issue than parents think—and may even be a red herring. As virtual assistants become increasingly capable, conversational, and prevalent (assistant-embodied devices are forecast to outnumber humans), psychologists and ethicists are asking deeper, more subtle questions than "will Alexa make my kid bossy?" And they want parents to do the same.

"When I built my first virtual child, I got a lot of pushback and flak," recalls developmental psychologist Justine Cassell, director emeritus of Carnegie Mellon's Human-Computer Interaction Institute and an expert in the development of AI interfaces for children. It was the early aughts, and Cassell, then at MIT, was studying whether a life-sized, animated kid named Sam could help flesh-and-blood children hone their cognitive, social, and behavioral skills. "Critics worried that the kids would lose track of what was real and what was pretend," Cassell says. "That they'd no longer be able to tell the difference between virtual children and actual ones."

But when you asked the kids whether Sam was a real child, they’d roll their eyes. Of course Sam isn’t real, they’d say. There was zero ambiguity.

Nobody knows for sure, and Cassell emphasizes that the question deserves study, but she suspects today's children will grow up similarly attuned to the virtual nature of our device-dwelling digital sidekicks—and, by extension, the context in which they do or do not need to be polite. Kids excel, she says, at dividing the world into categories. As long as they continue to separate humans from machines, she says, there's no need to worry. "Because isn't that actually what we want children to learn—not that everything that has a voice should be thanked, but that people have feelings?"

Point taken. But what about Duplex, I ask, Google’s new human-sounding, phone calling AI? Well, Cassell says, that complicates matters. When you can’t tell if a voice belongs to a human or a machine, she says, perhaps it’s best to assume you’re talking to a person, to avoid hurting a human’s feelings. But the real issue there isn’t politeness, it’s disclosure; artificial intelligences should be designed to identify themselves as such.

What's more, the implications of a kid interacting with an AI extend far deeper than whether she recognizes it as non-human. "Of course parents worry about these devices reinforcing negative behaviors, whether it's being sassy or teasing a virtual assistant," says Jenny Radesky, a developmental behavioral pediatrician at the University of Michigan and co-author of the latest guidelines for media use from the American Academy of Pediatrics. "But I think there are bigger questions surrounding things like kids' cognitive development—the way they consume information and build knowledge."

Consider, for example, that the way kids interact with virtual assistants may not actually help them learn. This advertisement for the Echo Dot Kids Edition ends with a girl asking her smart speaker the distance to the Andromeda Galaxy. As the camera zooms out, we hear Alexa rattle off the answer: "The Andromeda Galaxy is 14 quintillion, 931 quadrillion, 389 trillion, 517 billion, 400 million miles away."

To parents it might register as a neat feature. Alexa knows answers to questions that you don't! But most kids don't learn by simply receiving information. "Learning happens when a child is challenged," Cassell says, "by a parent, by another child, a teacher—and they can argue back and forth."

Virtual assistants can’t do that yet, which highlights the importance of parents using smart devices with their kids. At least for the time being. Our digital butlers could be capable of brain-building banter sooner than you think.

This week, Google announced its smart speakers will remain activated several seconds after you issue a command, allowing you to engage in continuous conversation without repeating "Hey, Google" or "OK, Google." For now, the feature will allow your virtual assistant to keep track of contextually dependent follow-up questions. (If you ask what movies George Clooney has starred in and then ask how tall he is, Google Assistant will recognize that "he" refers to George Clooney.) It's a far cry from a dialectic exchange, but it charts a clear path toward more conversational forms of inquiry and learning.

And, perhaps, something even more. "I think it's reasonable to ask if parenting will become a skill that, like Go or chess, is better performed by a machine," says John Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. "What do we do if a kid starts saying: Look, I appreciate the parents in my house, because they put me on the map, biologically. But dad tells a lot of lame dad jokes. And mom is kind of a helicopter parent. And I really prefer the knowledge, wisdom, and insight given to me by my devices."

Havens jokes that he sounds paranoid, because he’s speculating about what-if scenarios from the future. But what about the more near-term? If you start handing duties over to the machine, how do you take them back the day your kid decides Alexa is a higher authority than you are on, say, trigonometry?

Other experts I spoke with agreed it's not too early for parents to begin thinking deeply about the long-term implications of raising kids in the company of virtual assistants. "I think these tools can be awesome, and provide quick fixes to situations that involve answering questions and telling stories that parents might not always have time for," Radesky says. "But I also want parents to consider how that might come to displace some of the experiences they enjoy sharing with kids."

Other things Radesky, Cassell, and Havens think parents should consider? The extent to which kids understand privacy issues related to internet-connected toys. How their children interact with devices at their friends' houses. And what information other families' devices should be permitted to collect about their kids. In other words: How do children conceptualize the algorithms that serve up facts and entertainment; learn about them; and potentially profit from them?

"The fact is, very few of us sit down and talk with our kids about the social constructs surrounding robots and virtual assistants," Radesky says.

Perhaps that—more than whether their children say "please" and "thank you" to the smart speaker in the living room—is what parents should be thinking about.

Source:
https://www.wired.com/story/hey-alexa-what-are-you-doing-to-my-kids-brain/

Congress and privacy groups question Amazon's Echo Dot for Kids

Lawmakers, child development experts, and privacy advocates are expressing concerns about two new Amazon products targeting children, questioning whether they prod kids to be too dependent on technology and potentially jeopardize their privacy.

In a letter to Amazon CEO Jeff Bezos on Friday, two members of the bipartisan Congressional Privacy Caucus raised concerns about Amazon’s smart speaker Echo Dot Kids and a companion service called FreeTime Unlimited that lets kids access a children’s version of Alexa, Amazon’s voice-controlled digital assistant.

“While these types of artificial intelligence and voice recognition technology offer potentially new educational and entertainment opportunities, Americans’ privacy, particularly children’s privacy, must be paramount,” wrote Senator Ed Markey (D-Massachusetts) and Representative Joe Barton (R-Texas), both cofounders of the privacy caucus.

The letter includes a dozen questions, including requests for details about how audio of children’s interactions is recorded and saved, parental control over deleting recordings, a list of third parties with access to the data, whether data will be used for marketing purposes, and Amazon’s intentions on maintaining a profile on kids who use these products.

In a statement, Amazon said it "takes privacy and security seriously." The company said "Echo Dot Kids Edition uses on-device software to detect the wake word and only the wake word. Only once the wake word is detected does it start streaming to the cloud, and it will present a visual indication (the light ring at the top of the device turns blue) to show that it is streaming to the cloud."

Echo Dot Kids is the latest in a wave of products from dominant tech players targeting children, including Facebook’s communications app Messenger Kids and Google’s YouTube Kids, both of which have been criticized by child health experts concerned about privacy and developmental issues.

Like Amazon, toy manufacturers are also interested in developing smart speakers that would live in a child’s room. In September, Mattel pulled Aristotle, a smart speaker and digital assistant aimed at children, after a similar letter from Markey and Barton, as well as a petition that garnered more than 15,000 signatures.

One of the organizers of the petition, the nonprofit group Campaign for a Commercial Free Childhood, is now spearheading a similar effort against Amazon. In a press release Friday, timed to the letter from Congress, a group of child development and privacy advocates urged parents not to purchase Echo Dot Kids because the device and companion voice service pose a threat to children’s privacy and well-being.

“Amazon wants kids to be dependent on its data-gathering device from the moment they wake up until they go to bed at night,” said the group’s executive director Josh Golin. “The Echo Dot Kids is another unnecessary ‘must-have’ gadget, and it’s also potentially harmful. AI devices raise a host of privacy concerns and interfere with the face-to-face interactions and self-driven play that children need to thrive.”

FreeTime on Alexa includes content targeted at children, like kids’ books and Alexa skills from Disney, Nickelodeon, and National Geographic. It also features parental controls, such as song filtering, bedtime limits, disabled voice purchasing, and positive reinforcement for using the word “please.”

Despite such controls, the child health experts warning against Echo Dot Kids wrote, “Ultimately, though, the device is designed to make kids dependent on Alexa for information and entertainment. Amazon even encourages kids to tell the device ‘Alexa, I’m bored,’ to which Alexa will respond with branded games and content.”

In Amazon’s April press release announcing Echo Dot Kids, the company quoted one representative from a nonprofit group focused on children that supported the product, Stephen Balkam, founder and CEO of the Family Online Safety Institute. Balkam referenced a report from his institute, which found that the majority of parents were comfortable with their child using a smart speaker. Although it was not noted in the press release, Amazon is a member of FOSI and has an executive on the board.

In a statement to WIRED, Amazon said, "We believe one of the core benefits of FreeTime and FreeTime Unlimited is that the services provide parents the tools they need to help manage the interactions between their child and Alexa as they see fit." Amazon said parents can review and listen to their children's voice recordings in the Alexa app, review FreeTime Unlimited activity via the Parent Dashboard, set bedtime limits or pause the device whenever they'd like.

Balkam said his institute disclosed Amazon’s funding of its research on its website and the cover of its report. Amazon did not initiate the study. Balkam said the institute annually proposes a research project, and reaches out to its members, a group that also includes Facebook, Google, and Microsoft, who pay an annual stipend of $30,000. “Amazon stepped up and we worked with them. They gave us editorial control and we obviously gave them recognition for the financial support,” he said.

Balkam says Echo Dot Kids addresses concerns from parents about excessive screen time. “It’s screen-less, it’s very interactive, it’s kid friendly,” he said, pointing out Alexa skills that encourage kids to go outside.

In its review of the product, BuzzFeed wrote, “Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Source:
https://www.wired.com/story/congress-privacy-groups-question-amazons-echo-dot-for-kids/

Let's Get Rid of the "Nothing to Hide, Nothing to Fear" Mentality

With Zuckerberg testifying to the US Congress over Facebook's data privacy and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.

Suffice it to say that neither of the above compares to the indiscriminate collection of ordinary civilians' data on behalf of governments every day.

In 2013, Edward Snowden blew the whistle on the systematic US spy program he helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the details of PRISM, an NSA program that collects internet communications data from US internet companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, for instances where corporate control wasn't achievable, the NSA enticed third-party countries to clandestinely tap internet communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of all web activity carried out by anyone using the internet.

But this is just in the US, right? Surely policies like this wouldn't be implemented in Europe.

Wrong, unfortunately.

GCHQ, the UK's intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at the Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. A lot of these techniques have been made possible by the 2016 Investigatory Powers Act, which Snowden describes as the most "extreme surveillance in the history of western democracy".

But despite all this, the age-old refrain "if you've got nothing to hide, you've got nothing to fear" often rings out in debates over privacy.

Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.

In drafting legislation for the Investigatory Powers Act, Theresa May said that such extremes were necessary to ensure "no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar".

When levelled against the fear of terrorism and death, it's easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted; privacy should and always will be a social norm, despite what Mark Zuckerberg said in 2010. Although I'm sure he may have a different answer now.

So you just read emails and look at cat memes online, why would you care about privacy?

In the same way we're able to close our living room curtains and be alone and unmonitored, we should be able to explore our identities online unimpeded. It's a well-rehearsed idea that nowadays we're more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we're being watched, we alter our behaviour in line with what's expected.

As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Your thinking aligns with the status quo and we lose the boundless ability of the internet to search and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.

This draws obvious comparisons with Bentham's Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: there is a central guard tower surrounded by cells. In the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell's 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.

In reality, there's actually a lot more at stake here.

With the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.

Kafka's The Trial, in which a bureaucracy uses citizens' information to make decisions about them but denies them the ability to participate in how their information is used, therefore seems a more apt comparison. The issue here is that corporations and, even more so, states have been allowed to comb our data and make decisions that affect us without our consent.

Maybe, as a member of a western democracy, you don't think this matters. But what if you're a member of a minority group in an oppressive regime? What if you're arrested because a computer algorithm can't separate humour from intent to harm?

On the other hand, maybe you trust the intentions of your government, but how much faith do you have in them to keep your data private? The recent hack of the SEC shows that even government systems aren't safe from attackers. When a business database is breached, maybe your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you've lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control physical clouds, he who controls the modern cloud will rule the world.

Perhaps you think that even this doesn't matter; if it allows the government to protect us from those that intend to cause harm, then it's worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.

Intelligence is the strategic collection of pertinent facts, so bulk data collection cannot be intelligent. As Bill Binney puts it, "bulk data kills people": technicians are so overwhelmed that they can't isolate what's useful. Data collection as it stands can only focus on retribution rather than reduction.

Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The "nothing to hide, nothing to fear" mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to have in that relationship.

To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.

http://behindthebrowser.space/index.php/2018/04/22/nothing-to-fear-nothing-to-hide/

The most dangerous attack techniques, and what's coming next (2018)

RSA Conference 2018

Experts from SANS presented the five most dangerous new cyber attack techniques in their annual RSA Conference 2018 keynote session in San Francisco, and shared their views on how they work, how they can be stopped or at least slowed, and how businesses and consumers can prepare.


The five threats outlined are:

1. Repositories and cloud storage data leakage
2. Big Data analytics, de-anonymization, and correlation
3. Attackers monetize compromised systems using crypto coin miners
4. Recognition of hardware flaws
5. More malware and attacks disrupting ICS and utilities instead of seeking profit

Repositories and cloud storage data leakage

Ed Skoudis, lead for the SANS Penetration Testing Curriculum, talked about the data leakage threats facing us from the increased use of repositories and cloud storage:

“Software today is built in a very different way than it was 10 or even 5 years ago, with vast online code repositories for collaboration and cloud data storage hosting mission-critical applications. However, attackers are increasingly targeting these kinds of repositories and cloud storage infrastructures, looking for passwords, crypto keys, access tokens, and terabytes of sensitive data.”

He continued: “Defenders need to focus on data inventories, appointing a data curator for their organization and educating system architects and developers about how to secure data assets in the cloud. Additionally, the big cloud companies have each launched an AI service to help classify and defend data in their infrastructures. And finally, a variety of free tools are available that can help prevent and detect leakage of secrets through code repositories.”
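As a small illustration of that last point, here is a minimal sketch of the kind of check such leak-detection tools perform: walking a repository checkout and flagging strings that look like credentials. The patterns and file walk are simplified assumptions for illustration, not the rule set of any specific tool (real scanners such as gitleaks or truffleHog use far larger rule sets plus entropy analysis):

```python
import os
import re

# Illustrative patterns only; real tools ship hundreds of rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic API key": re.compile(
        r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}

def scan_repo(root="."):
    """Walk a checkout and report lines matching any secret pattern."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git internals
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        for label, pattern in SECRET_PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: possible {label}")
            except OSError:
                continue  # unreadable file, skip

if __name__ == "__main__":
    scan_repo()
```

Running a check like this in a pre-commit hook or CI pipeline catches the most obvious leaks before they ever reach a public repository.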

Big Data analytics, de-anonymization, and correlation

Skoudis went on to talk about the threat of Big Data Analytics and how attackers are using data from several sources to de-anonymise users:

“In the past, we battled attackers who were trying to get access to our machines to steal data for criminal use. Now the battle is shifting from hacking machines to hacking data — gathering data from disparate sources and fusing it together to de-anonymise users, find business weaknesses and opportunities, or otherwise undermine an organisation’s mission. We still need to prevent attackers from gaining shell on targets to steal data. However, defenders also need to start analysing risks associated with how their seemingly innocuous data can be combined with data from other sources to introduce business risk, all while carefully considering the privacy implications of their data and its potential to tarnish a brand or invite regulatory scrutiny.”
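To make the data-fusion risk concrete, here is a toy sketch, with entirely invented data and assumed column names, of how two individually innocuous datasets can be joined to re-identify people, in the spirit of classic re-identification results:

```python
import pandas as pd

# An "anonymized" dataset released without names...
health = pd.DataFrame({
    "zip": ["60614", "60614", "94110"],
    "birth_date": ["1985-02-11", "1990-07-30", "1985-02-11"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# ...and a public dataset that does carry names (e.g. a voter roll).
voters = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones"],
    "zip": ["60614", "60614"],
    "birth_date": ["1985-02-11", "1990-07-30"],
    "sex": ["F", "M"],
})

# The quasi-identifiers (zip, birth date, sex) link the two tables,
# re-attaching names to the "anonymous" medical records.
linked = health.merge(voters, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])
```

Neither table is sensitive on its own; the join is what does the damage, which is exactly the risk Skoudis describes.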

Attackers monetize compromised systems using crypto coin miners

Johannes Ullrich is Dean of Research at the SANS Institute and Director of the SANS Internet Storm Center. He has been looking at the increasing use of crypto coin miners by cyber criminals:

“Last year, we talked about how ransomware was used to sell data back to its owner and crypto-currencies were the tool of choice to pay the ransom. More recently, we have found that attackers are no longer bothering with data. Due to the flood of stolen data offered for sale, the value of most commonly stolen data like credit card numbers or PII has dropped significantly. Attackers are instead installing crypto coin miners. These attacks are more stealthy and less likely to be discovered and attackers can earn tens of thousands of dollars a month from crypto coin miners. Defenders therefore need to learn to detect these coin miners and to identify the vulnerabilities that have been exploited in order to install them.”
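One simple starting point for the detection Ullrich calls for is watching for DNS lookups of known mining pools. The sketch below is hypothetical: the pool list and the log format are assumptions for illustration, and a real deployment would pull a maintained threat-intelligence feed instead:

```python
import csv

# Hypothetical blocklist of mining-pool hostnames (illustrative only).
MINING_POOLS = {"pool.minexmr.com", "xmr.nanopool.org", "pool.supportxmr.com"}

def flag_miner_traffic(dns_log_path):
    """Read a DNS query log (assumed CSV columns: timestamp, client_ip,
    query) and flag lookups of known mining pools."""
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["query"].rstrip(".").lower() in MINING_POOLS:
                print(f"{row['timestamp']} {row['client_ip']} resolved "
                      f"{row['query']}: possible coin miner")

# Example usage against an exported resolver log:
# flag_miner_traffic("dns_queries.csv")
```

Sustained high CPU usage on otherwise idle hosts is the other cheap signal worth correlating with hits like these.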

Recognition of hardware flaws

Ullrich then went on to say that software developers often assume that hardware is flawless and that this is a dangerous assumption. He explains why and what needs to be done:

“Hardware is no less complex than software, and mistakes have been made in developing hardware just as they are made by software developers. Patching hardware is a lot more difficult and often not possible without replacing entire systems or suffering significant performance penalties. Developers therefore need to learn to create software without relying on hardware to mitigate any security issues. Similar to the way in which software uses encryption on untrusted networks, software needs to authenticate and encrypt data within the system. Some emerging homomorphic encryption algorithms may allow developers to operate on encrypted data without having to decrypt it first.”
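As a small sketch of what "authenticate and encrypt data within the system" can look like in practice, here is authenticated encryption using the Python cryptography library's AESGCM primitive. Key management is deliberately simplified; a real system would pull the key from a key store rather than generating it inline:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, from a key store
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce; must never repeat under one key
plaintext = b"record passed between components of the same system"
aad = b"record-id=42"  # authenticated but unencrypted context

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)

# Decryption verifies the authentication tag: tampering with the
# ciphertext or the AAD raises InvalidTag instead of returning data.
recovered = aesgcm.decrypt(nonce, ciphertext, aad)
assert recovered == plaintext
```

The point of the authenticated mode is that data crossing an internal boundary cannot be silently modified, even if some hardware or component in between is not trustworthy.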


More malware and attacks disrupting ICS and utilities instead of seeking profit

Finally, James Lyne, Head of R&D at the SANS Institute, discussed the growing trend of malware and attacks that aren't profit-centred, as we have largely seen in the past, but are instead focused on disrupting Industrial Control Systems (ICS) and utilities:

“Day to day the grand majority of malicious code has undeniably been focused on fraud and profit. Yet, with the relentless deployment of technology in our societies, the opportunity for political or even military influence only grows. And rare publicly visible attacks like Triton/TriSYS show the capability and intent of those who seek to compromise some of the highest risk components of industrial environments, i.e. the safety systems which have historically prevented critical security and safety meltdowns.”

He continued: “ICS systems are relatively immature and easy to exploit in comparison to the mainstream computing world. Many ICS systems lack the mitigations of modern operating systems and applications. The reliance on obscurity or isolation (both increasingly untrue) do not position them well to withstand a heightened focus on them, and we need to address this as an industry. More worrying is that attackers have demonstrated they have the inclination and resources to diversify their attacks, targeting the sensors that are used to provide data to the industrial controllers themselves. The next few years are likely to see some painful lessons being learned as this attack domain grows, since the mitigations are inconsistent and quite embryonic.”

Source: https://www.helpnetsecurity.com/2018/04/23/dangerous-attack-techniques/

Android’s trust problem


Published today, a two-year study of Android security updates has revealed a distressing gap between the software patches Android companies claim to have on their devices and the ones they actually have. Your phone’s manufacturer may be lying to you about the security of your Android device. In fact, it appears that almost all of them do.

Coming at the end of a week dominated by Mark Zuckerberg’s congressional hearings and an ongoing Facebook privacy probe, this news might seem of lesser importance, but it goes to the same issue that has drawn lawmakers’ scrutiny to Facebook: the matter of trust. Facebook is the least-trusted big US tech company, and Android might just be the operating system equivalent of it: used by 2 billion people around the world, tolerated more than loved, and susceptible to major lapses in user privacy and security.

The gap between Android and its nemesis, Apple’s iOS, has always boiled down to trust. Unlike Google, Apple doesn’t make its money by tracking the behavior of its users, and unlike the vast and varied Android ecosystem, there are only ever a couple of iPhone models, each of which is updated with regularity and over a long period of time. Owning an iPhone, you can be confident that you’re among Apple’s priority users (even if Apple faces its own cohort of critics accusing it of planned obsolescence), whereas with an Android device, as evidenced today, you can’t even be sure that the security bulletins and updates you’re getting are truthful.

Android is perceived as untrustworthy in large part because it is. Beside the matter of security misrepresentations, here are some of the other major issues and villains plaguing the platform:

Version updates are slow, if they arrive at all. I’ve been covering Android since its earliest Cupcake days, and in the near-decade that’s passed, there’s never been a moment of contentment about the speed of OS updates. Things seemed to be getting even worse late last year when the November batch of new devices came loaded with 2016’s Android Nougat. Android Oreo is now nearly eight months old — meaning we’re closer to the launch of the next version of Android than the present one — and LG is still preparing to roll out that software for its 2017 flagship LG G6.

Promises about Android device updates are as ephemeral as Snapchat messages. Before it became the world’s biggest smartphone vendor, Samsung was notorious for reneging on Android upgrade promises. Sony’s Xperia Z3 infamously fell foul of an incompatibility between its Snapdragon processor and Google’s Android Nougat requirements, leaving it prematurely stuck without major OS updates. Whenever you have so many loud voices involved — carriers and chip suppliers along with Google and device manufacturers — the outcome of their collaboration is prone to becoming exactly as haphazard and unpredictable as Android software upgrades have become.

Google is obviously aware of the situation, and it’s pushing its Android One initiative to give people reassurances when buying an Android phone. Android One guarantees OS updates for at least two years and security updates for at least three years. But, as with most things Android, Android One is only available on a few devices, most of which are of the budget variety. You won’t find the big global names of Samsung, Huawei, and LG supporting it.

Some Android OEMs snoop on you. This is an ecosystem problem rather than something rooted in the operating system itself, but it still discolors Android's public reputation. Android phone manufacturers habitually load their devices with bloatware (stuff you really don't want or need on your phone), and some have even taken to loading up spyware. Blu's devices were yanked from Amazon for doing exactly that: selling phones that were vulnerable to remote takeovers and could be exploited to have the user's text messages and call records clandestinely recorded. OnePlus also got in trouble for having an overly inquisitive user analytics program, which beamed personally identifiable information back to the company's HQ without explicit user consent.

Huawei is perhaps the most famous example of a potentially conflicted Android phone manufacturer, with US spy agencies openly urging their citizens to avoid Huawei phones for their own security. No hard evidence has yet been presented of Huawei doing anything improper; however, the US is not the only country to express concern about the company's relationship with the Chinese government, and mistrust is based as much on smoke as it is on actual fire.

Android remains vulnerable, thanks in part to Google’s permissiveness. It’s noteworthy that, when Facebook’s data breach became public and people started looking into what data Facebook had on them, only their Android calls and messages had been collected. Why not the iPhone? Because Apple’s walled-garden philosophy makes it much harder, practically impossible, for a user to inadvertently give consent to privacy-eroding apps like Facebook’s Messenger to dig into their devices. Your data is simply better protected on iOS, and even though Android has taken significant steps forward in making app permissions more granular and specific, it’s still comparatively easy to mislead users about what data an app is obtaining and for what purposes.

Android hardware development is chaotic and unreliable. For many, the blistering, sometimes chaotic pace of change in Android devices is part of the ecosystem’s charm. It’s entertaining to watch companies try all sorts of zany and unlikely designs, with only the best of them surviving more than a few months. But the downside of all this speed is lack of attention being paid to small details and long-term sustainability.

LG made a huge promotional push two years ago around its modular G5 flagship, which was meant to usher in a new accessory ecosystem and elevate the flexibility of LG Android devices to new heights. Within six months, that modular project was abandoned, leaving anyone that bought modular LG accessories — on the expectation of multigenerational support — high and dry. And speaking of dryness, Sony recently got itself in trouble for overpromising by calling its Xperia phones “waterproof.”

Samsung’s Galaxy Note 7 is the best and starkest example of the dire consequences that can result from a hurried and excessively ambitious hardware development cycle. The Note 7 had a fatal battery flaw that led many people’s shiny new Samsung smartphones to spontaneously catch fire. Compare that to the iPhone’s pace of usually incremental changes, implemented at predictable intervals and with excruciating fastidiousness.


Besides pledging to deliver OS updates that never come, claiming to have delivered security updates that never arrived, and taking liberties with your personal data, Android OEMs also have a tendency to exaggerate what their phones can actually do. They don’t collaborate on much, so in spite of pouring great efforts into developing their Android software experience, they also just feed the old steadfast complaint of a fragmented ecosystem.

The problem of trust with Android, much like the problem of trust in Facebook, is grounded in reality. It doesn't matter that not all Android device makers engage in shady privacy invasion or overreaching marketing claims. The perception, like the Android brand, is collective.

https://www.theverge.com/2018/4/13/17233122/android-software-patch-trust-problem

World celebrates, cyber-snoops cry as TLS 1.3 internet crypto approved

 


Forward-secrecy protocol comes with the 28th draft

A much-needed update to internet security has finally passed at the Internet Engineering Task Force (IETF), after four years and 28 drafts.

Internet engineers meeting in London, England, approved the updated TLS 1.3 protocol despite a wave of last-minute concerns that it could cause networking nightmares.

TLS 1.3 won unanimous approval (well, one "no objection" amid the yeses), paving the way for its widespread implementation and use in software and products from Oracle's Java to Google's Chrome browser.

The new protocol aims to comprehensively thwart any attempts by the NSA and other eavesdroppers to decrypt intercepted HTTPS connections and other encrypted network packets. TLS 1.3 should also speed up secure communications thanks to its streamlined approach.

The critical nature of the protocol, however, has meant that progress has been slow and, on occasion, controversial. This time last year, Google paused its plan to support the new protocol in Chrome when an IT schools administrator in Maryland reported that a third of the 50,000 Chromebooks he managed bricked themselves after being updated to use the tech.

Most recently, banks and businesses complained that, thanks to the way the new protocol does security, they will be cut off from being able to inspect and analyze TLS 1.3 encrypted traffic flowing through their networks, and so potentially be at greater risk from attack.

Unfortunately, that self-same ability to decrypt secure traffic on your own network can also be potentially used by third parties to grab and decrypt communications.

An effort to effectively insert a backdoor into the protocol was met with disdain and some anger by internet engineers, many of whom pointed out that it will still be possible to introduce middleware to monitor and analyze internal network traffic.

Nope

The backdoor proposal did not move forward, meaning the internet as a whole will become more secure and faster, while banks and similar outfits will have to do a little extra work to accommodate and inspect TLS 1.3 connections as required.

At the heart of the change – and the complaints – are two key elements: forward secrecy, and ephemeral encryption keys.

TLS – standing for Transport Layer Security – basically works by creating a secure connection between a client and a server – your laptop, for example, and a company’s website. All this is done before any real information is shared – like credit card details or personal information.

Under TLS 1.2 this is a fairly lengthy process that can take as much as half-a-second:

  • The client says hi to the server and offers a range of strong encryption systems it can work with
  • The server says hi back, explains which encryption system it will use and sends an encryption key
  • The client takes that key and uses it to encrypt and send back a random series of letters
  • Together they use this exchange to create two new keys: a master key and a session key – the master key being stronger; the session key weaker.
  • The client then says which encryption system it plans to use for the weaker session key – which allows data to be sent much faster because it doesn’t have to be processed as much
  • The server acknowledges that system will be used, and then the two start sharing the actual information that the whole exchange is about

TLS 1.3 speeds that whole process up by bundling several steps together:

  • The client says hi, here’s the systems I plan to use
  • The server gets back saying hi, ok let’s use them, here’s my key, we should be good to go
  • The client responds saying, yep that all looks good, here are the session keys
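
For the curious, here is a quick way to see which protocol version a given server negotiates, using only Python's standard library. It requires Python 3.7+ linked against an OpenSSL build with TLS 1.3 support, and the hostname is an arbitrary example:

```python
import socket
import ssl

def negotiated_version(host, port=443):
    """Complete a TLS handshake and report the negotiated protocol version."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.3' or 'TLSv1.2'

print(negotiated_version("example.com"))
```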

As well as being faster, TLS 1.3 is much more secure because it ditches many of the older encryption algorithms that TLS 1.2 supports, in which people have managed to find holes over the years. Effectively, the older crypto-systems potentially allowed miscreants to figure out what previous keys had been used (called "non-forward secrecy") and so decrypt previous conversations.

A little less conversation

For example, snoopers could, under TLS 1.2, force the exchange to use older and weaker encryption algorithms that they knew how to crack.

People using TLS 1.3 will only be able to use more recent systems that are much harder to crack – at least for now. Any effort to force the conversation to use a weaker 1.2 system will be detected and flagged as a problem.

Another very important advantage to TLS 1.3 – but also one that some security experts are concerned about – is called "0-RTT Resumption", which effectively allows the client and server to remember if they have spoken before, and so forego all the checks, using previous keys to start talking immediately.

That will make connections much faster, but the concern of course is that someone malicious could get hold of the "0-RTT Resumption" information and pose as one of the parties. Internet engineers are, however, less concerned about this security risk – which would require getting access to a machine – than they were about the TLS 1.2 weaknesses that allowed people to hijack and listen in on a conversation.
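Python's ssl module does not expose 0-RTT early data, but the session-resumption machinery that 0-RTT builds on can be sketched with the standard library. Note that with TLS 1.3 servers may deliver session tickets only after the handshake, so a sketch like this is most reliable against TLS 1.2 endpoints; the hostname is an arbitrary example:

```python
import socket
import ssl

HOST = "example.com"
ctx = ssl.create_default_context()

# First connection: a full handshake. Keep the session object around.
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        session = tls.session

# Second connection: offer the saved session to shortcut the handshake.
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST, session=session) as tls:
        print("session reused:", tls.session_reused)
```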

In short, it’s a win-win but will require people to put in some effort to make it all work properly.

The big losers will be criminals and security services who will be shut out of secure communications – at least until they figure out a way to crack this new protocol. At which point the IETF will start on TLS 1.4. ®

Source: theregister.co.uk

 

 

An Overview of TLS 1.3 – Faster and More Secure

Updated on March 25, 2018

It has been over eight years since the last encryption protocol update, but the new TLS 1.3 has now been finalized as of March 21st, 2018. The exciting part for the WordPress community and customers here at Kinsta is that TLS 1.3 includes a lot of security and performance improvements. With the HTTP/2 protocol update in late 2015, and now TLS 1.3 in 2018, encrypted connections are now more secure and faster than ever. Read more below about the changes coming with TLS 1.3 and how it can benefit you as a WordPress site owner.

'TLS 1.3: Faster, Safer, Better, Everything.' 👍 — Filippo Valsorda

What is TLS?

TLS stands for Transport Layer Security and is the successor to SSL (Secure Sockets Layer). However, both these terms are commonly thrown around a lot online, and you might see them both referred to as simply SSL. TLS provides secure communication between web browsers and servers. The connection itself is secure because symmetric cryptography is used to encrypt the data transmitted. The keys are uniquely generated for each connection and are based on a shared secret negotiated at the beginning of the session, also known as a TLS handshake. Many IP-based protocols, such as HTTPS, SMTP, POP3 and FTP, support TLS to encrypt data.
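That "shared secret negotiated at the beginning of the session" is typically produced by an ephemeral key exchange. Here is a minimal sketch using the Python cryptography library's X25519 primitives; the HKDF info label is an arbitrary example:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair and exchanges public keys.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Both sides compute the same shared secret from their own private key
# and the peer's public key.
client_secret = client_priv.exchange(server_priv.public_key())
server_secret = server_priv.exchange(client_priv.public_key())
assert client_secret == server_secret

# The raw secret is run through a KDF to derive the symmetric session key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo session key",
).derive(client_secret)
print(session_key.hex())
```

Because the key pairs are thrown away after the session, a recorded conversation cannot be decrypted later even if a long-term key leaks, which is the forward secrecy TLS 1.3 makes mandatory.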

Web browsers utilize an SSL certificate, which allows them to verify that it is digitally signed by a trusted certificate authority. Technically these are also known as TLS certificates, but most SSL providers stick with the term “SSL certificates” as this is generally more well known. SSL/TLS certificates provide the magic behind what many people simply know as the HTTPS that they see in their browser’s address bar.


TLS 1.3 vs TLS 1.2

The Internet Engineering Task Force (IETF) is the group that has been in charge of defining the TLS protocol, which has gone through many various iterations. The previous version of TLS, TLS 1.2, was defined in RFC 5246 and has been in use for the past eight years by the majority of all web browsers. As of March 21st, 2018, TLS 1.3 has now been finalized, after going through 28 drafts.

Companies such as Cloudflare are already making TLS 1.3 available to their customers. Filippo Valsorda gave a great talk (see presentation below) on the differences between TLS 1.2 and TLS 1.3. In short, the major benefits of TLS 1.3 over TLS 1.2 are faster speeds and improved security.

Speed Benefits of TLS 1.3

TLS and encrypted connections have always added a slight overhead when it comes to web performance. HTTP/2 definitely helped with this problem, but TLS 1.3 helps speed up encrypted connections even more with features such as TLS false start and Zero Round Trip Time (0-RTT).

To put it simply, with TLS 1.2, two round-trips were needed to complete the TLS handshake. With 1.3, only one round-trip is required, which in turn cuts the encryption latency in half. This helps those encrypted connections feel just a little bit snappier than before.

TLS 1.3 handshake performance

Another advantage of TLS 1.3 is that, in a sense, it remembers! On sites you have previously visited, you can now send data in the first message to the server. This is called a "zero round trip" (0-RTT). And yes, this also results in improved load times.

Improved Security With TLS 1.3

A big problem with TLS 1.2 is that it's often not configured properly, which leaves websites vulnerable to attacks. TLS 1.3 now removes obsolete and insecure features from TLS 1.2, including the following:

  • SHA-1
  • RC4
  • DES
  • 3DES
  • AES-CBC
  • MD5
  • Arbitrary Diffie-Hellman groups — CVE-2016-0701
  • EXPORT-strength ciphers – Responsible for FREAK and LogJam

Because the protocol is in a sense simplified, it is less likely that administrators and developers will misconfigure it. Jessie Victors, a security consultant specializing in privacy-enhancing systems and applied cryptography, stated:

I am excited for the upcoming standard. I think we will see far fewer vulnerabilities and we will be able to trust TLS far more than we have in the past.
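That simplification shows up on the client side in how little code it takes to refuse the legacy protocols outright. A minimal sketch with Python's standard library (Python 3.7+; pinning the floor at TLS 1.3 rejects any server that tries to negotiate an older protocol, and the hostname is an arbitrary example):

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and older

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake raises ssl.SSLError if the server cannot speak
        # TLS 1.3; otherwise this prints the protocol and cipher in use.
        print(tls.version(), tls.cipher())
```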

Google is also raising the bar, as it has started warning users in Search Console that it is moving to TLS version 1.2, as TLS 1.0 is no longer safe. It is giving a final deadline of March 2018.

TLS 1.3 Browser Support

With Chrome 63, TLS 1.3 is enabled for outgoing connections. Support for TLS 1.3 was added back in Chrome 56 and is also supported by Chrome for Android.

TLS 1.3 is enabled by default in Firefox 52 and above (including Quantum). They are retaining an insecure fallback to TLS 1.2 until they know more about server tolerance and the 1.3 handshake.

TLS 1.3 browser support

With that being said, some SSL test services on the Internet don't yet support TLS 1.3, and neither do other browsers such as IE, Microsoft Edge, Opera, or Safari. It will be a couple more months for the remaining browsers to catch up; most are in development at the moment.

Cloudflare has an excellent article on why TLS 1.3 isn’t in browsers yet.

Summary

Just like with HTTP/2, TLS 1.3 is another exciting protocol update that we can expect to benefit from for years to come. Not only will encrypted (HTTPS) connections become faster, but they will also be more secure. Here’s to moving the web forward!

Source: https://kinsta.com/blog/tls-1-3/