The Evolution of AI


Source: https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

Artificial Intelligence — The Revolution Hasn’t Happened Yet

Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.

There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
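The arithmetic behind that episode is worth making explicit. The sketch below applies Bayes' rule with purely illustrative numbers (the prevalence, sensitivity, and false-positive rates are assumptions for demonstration, not figures from the UK study): when a higher-resolution machine turns imaging noise into extra "white spot" detections, the false-positive rate rises and the true risk behind a positive marker falls far below the quoted figure.

```python
def posterior_risk(prevalence, sensitivity, false_positive_rate):
    """P(condition | marker observed), via Bayes' rule."""
    true_positive = prevalence * sensitivity
    false_positive = (1 - prevalence) * false_positive_rate
    return true_positive / (true_positive + false_positive)

# Illustrative assumptions only -- not values from the UK study in the text.
prevalence = 1 / 700   # baseline rate of the condition in the population
sensitivity = 0.6      # chance the marker shows up in a true case

# Older, lower-resolution machine: spurious "white spots" are rare.
old_machine = posterior_risk(prevalence, sensitivity, false_positive_rate=0.01)

# Newer, higher-resolution machine: imaging noise registers as spots far
# more often, so most detections are false positives and the posterior drops.
new_machine = posterior_risk(prevalence, sensitivity, false_positive_rate=0.10)

print(f"old machine: posterior risk roughly 1 in {1 / old_machine:.0f}")
print(f"new machine: posterior risk roughly 1 in {1 / new_machine:.0f}")
```

The point is not the specific numbers but the structure: a quoted risk is only valid for the machine, population, and era in which the original analysis was done, which is precisely the provenance question.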

I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.

Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.

Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design.

The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems.

This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny.

Historically, the phrase “AI” was coined in the late 1950s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.

Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon.

One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.

The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.

Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with making these “things” capable of analyzing those data streams to discover facts about the world, and of interacting with humans and other “things” at a far higher level of abstraction than mere bits.

For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc. — these problems should properly be viewed as challenges, not show-stoppers.

We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately, the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that are not present in other areas of engineering.

Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.

As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory?

A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda.

It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.

Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.

IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean).

It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)

But we need to move beyond the particular historical perspectives of McCarthy and Wiener.

We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II.

This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.

While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities.

On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.

Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.

In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.

I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.

Michael I. Jordan


Hey Alexa, What Are You Doing to My Kid’s Brain?

“Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Among the more modern anxieties of parents today is how virtual assistants will train their children to act. The fear is that kids who habitually order Amazon’s Alexa to read them a story or command Google’s Assistant to tell them a joke are learning to communicate not as polite, considerate citizens, but as demanding little twerps.

This worry has become so widespread that Amazon and Google both announced this week that their voice assistants can now encourage kids to punctuate their requests with “please.” The version of Alexa that inhabits the new Echo Dot Kids Edition will thank children for “asking so nicely.” Google Assistant’s forthcoming Pretty Please feature will remind kids to “say the magic word” before complying with their wishes.

But many psychologists think kids being polite to virtual assistants is less of an issue than parents think — and may even be a red herring. As virtual assistants become increasingly capable, conversational, and prevalent (assistant-embodied devices are forecast to outnumber humans), psychologists and ethicists are asking deeper, more subtle questions than “will Alexa make my kid bossy?” And they want parents to do the same.

“When I built my first virtual child, I got a lot of pushback and flak,” recalls developmental psychologist Justine Cassell, director emeritus of Carnegie Mellon’s Human-Computer Interaction Institute and an expert in the development of AI interfaces for children. It was the early aughts, and Cassell, then at MIT, was studying whether a life-sized, animated kid named Sam could help flesh-and-blood children hone their cognitive, social, and behavioral skills. “Critics worried that the kids would lose track of what was real and what was pretend,” Cassell says. “That they’d no longer be able to tell the difference between virtual children and actual ones.”

But when you asked the kids whether Sam was a real child, they’d roll their eyes. Of course Sam isn’t real, they’d say. There was zero ambiguity.

Nobody knows for sure, and Cassell emphasizes that the question deserves study, but she suspects today’s children will grow up similarly attuned to the virtual nature of our device-dwelling digital sidekicks — and, by extension, the context in which they do or do not need to be polite. Kids excel, she says, at dividing the world into categories. As long as they continue to separate humans from machines, there’s no need to worry. “Because isn’t that actually what we want children to learn — not that everything that has a voice should be thanked, but that people have feelings?”

Point taken. But what about Duplex, I ask, Google’s new human-sounding, phone-calling AI? Well, Cassell says, that complicates matters. When you can’t tell if a voice belongs to a human or a machine, she says, perhaps it’s best to assume you’re talking to a person, to avoid hurting a human’s feelings. But the real issue there isn’t politeness, it’s disclosure; artificial intelligences should be designed to identify themselves as such.

What’s more, the implications of a kid interacting with an AI extend far deeper than whether she recognizes it as non-human. „Of course parents worry about these devices reinforcing negative behaviors, whether it’s being sassy or teasing a virtual assistant,” says Jenny Radesky, a developmental behavioral pediatrician at the University of Michigan and co-author of the latest guidelines for media use from the American Academy of Pediatrics. “But I think there are bigger questions surrounding things like kids’ cognitive development—the way they consume information and build knowledge.”

Consider, for example, that the way kids interact with virtual assistants may not actually help them learn. This advertisement for the Echo Dot Kids Edition ends with a girl asking her smart speaker the distance to the Andromeda Galaxy. As the camera zooms out, we hear Alexa rattle off the answer: “The Andromeda Galaxy is 14 quintillion, 931 quadrillion, 389 trillion, 517 billion, 400 million miles away.”

To parents it might register as a neat feature. Alexa knows answers to questions that you don’t! But most kids don’t learn by simply receiving information. “Learning happens when a child is challenged,” Cassell says, “by a parent, by another child, a teacher — and they can argue back and forth.”

Virtual assistants can’t do that yet, which highlights the importance of parents using smart devices with their kids. At least for the time being. Our digital butlers could be capable of brain-building banter sooner than you think.

This week, Google announced its smart speakers will remain activated several seconds after you issue a command, allowing you to engage in continuous conversation without repeating “Hey, Google,” or “OK, Google.” For now, the feature will allow your virtual assistant to keep track of contextually dependent follow-up questions. (If you ask what movies George Clooney has starred in and then ask how tall he is, Google Assistant will recognize that “he” refers to George Clooney.) It’s a far cry from a dialectic exchange, but it charts a clear path toward more conversational forms of inquiry and learning.

And, perhaps, something even more. “I think it’s reasonable to ask if parenting will become a skill that, like Go or chess, is better performed by a machine,” says John Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “What do we do if a kid starts saying: Look, I appreciate the parents in my house, because they put me on the map, biologically. But dad tells a lot of lame dad jokes. And mom is kind of a helicopter parent. And I really prefer the knowledge, wisdom, and insight given to me by my devices.”

Havens jokes that he sounds paranoid, because he’s speculating about what-if scenarios from the future. But what about the more near-term? If you start handing duties over to the machine, how do you take them back the day your kid decides Alexa is a higher authority than you are on, say, trigonometry?

Other experts I spoke with agreed it’s not too early for parents to begin thinking deeply about the long-term implications of raising kids in the company of virtual assistants. “I think these tools can be awesome, and provide quick fixes to situations that involve answering questions and telling stories that parents might not always have time for,” Radesky says. “But I also want parents to consider how that might come to displace some of the experiences they enjoy sharing with kids.”

Other things Radesky, Cassell, and Havens think parents should consider? The extent to which kids understand privacy issues related to internet-connected toys. How their children interact with devices at their friends’ houses. And what information other families’ devices should be permitted to collect about their kids. In other words: How do children conceptualize the algorithms that serve up facts and entertainment; learn about them; and potentially profit from them?

“The fact is, very few of us sit down and talk with our kids about the social constructs surrounding robots and virtual assistants,” Radesky says.

Perhaps that — more than whether their children say “please” and “thank you” to the smart speaker in the living room — is what parents should be thinking about.

Source: https://www.wired.com/story/hey-alexa-what-are-you-doing-to-my-kids-brain/

Lawmakers, child development experts, and privacy advocates are expressing concerns about two new Amazon products targeting children, questioning whether they prod kids to be too dependent on technology and potentially jeopardize their privacy.

In a letter to Amazon CEO Jeff Bezos on Friday, two members of the bipartisan Congressional Privacy Caucus raised concerns about Amazon’s smart speaker Echo Dot Kids and a companion service called FreeTime Unlimited that lets kids access a children’s version of Alexa, Amazon’s voice-controlled digital assistant.

“While these types of artificial intelligence and voice recognition technology offer potentially new educational and entertainment opportunities, Americans’ privacy, particularly children’s privacy, must be paramount,” wrote Senator Ed Markey (D-Massachusetts) and Representative Joe Barton (R-Texas), both cofounders of the privacy caucus.

The letter includes a dozen questions, including requests for details about how audio of children’s interactions is recorded and saved, parental control over deleting recordings, a list of third parties with access to the data, whether data will be used for marketing purposes, and Amazon’s intentions on maintaining a profile on kids who use these products.

In a statement, Amazon said it “takes privacy and security seriously.” The company said “Echo Dot Kids Edition uses on-device software to detect the wake word and only the wake word. Only once the wake word is detected does it start streaming to the cloud, and it will present a visual indication (the light ring at the top of the device turns blue) to show that it is streaming to the cloud.”

Echo Dot Kids is the latest in a wave of products from dominant tech players targeting children, including Facebook’s communications app Messenger Kids and Google’s YouTube Kids, both of which have been criticized by child health experts concerned about privacy and developmental issues.

Like Amazon, toy manufacturers are also interested in developing smart speakers that would live in a child’s room. In September, Mattel pulled Aristotle, a smart speaker and digital assistant aimed at children, after a similar letter from Markey and Barton, as well as a petition that garnered more than 15,000 signatures.

One of the organizers of the petition, the nonprofit group Campaign for a Commercial Free Childhood, is now spearheading a similar effort against Amazon. In a press release Friday, timed to the letter from Congress, a group of child development and privacy advocates urged parents not to purchase Echo Dot Kids because the device and companion voice service pose a threat to children’s privacy and well-being.

“Amazon wants kids to be dependent on its data-gathering device from the moment they wake up until they go to bed at night,” said the group’s executive director Josh Golin. “The Echo Dot Kids is another unnecessary ‘must-have’ gadget, and it’s also potentially harmful. AI devices raise a host of privacy concerns and interfere with the face-to-face interactions and self-driven play that children need to thrive.”

FreeTime on Alexa includes content targeted at children, like kids’ books and Alexa skills from Disney, Nickelodeon, and National Geographic. It also features parental controls, such as song filtering, bedtime limits, disabled voice purchasing, and positive reinforcement for using the word “please.”

Despite such controls, the child health experts warning against Echo Dot Kids wrote, “Ultimately, though, the device is designed to make kids dependent on Alexa for information and entertainment. Amazon even encourages kids to tell the device ‘Alexa, I’m bored,’ to which Alexa will respond with branded games and content.”

In Amazon’s April press release announcing Echo Dot Kids, the company quoted one representative from a nonprofit group focused on children that supported the product, Stephen Balkam, founder and CEO of the Family Online Safety Institute. Balkam referenced a report from his institute, which found that the majority of parents were comfortable with their child using a smart speaker. Although it was not noted in the press release, Amazon is a member of FOSI and has an executive on the board.

In a statement to WIRED, Amazon said, “We believe one of the core benefits of FreeTime and FreeTime Unlimited is that the services provide parents the tools they need to help manage the interactions between their child and Alexa as they see fit.” Amazon said parents can review and listen to their children’s voice recordings in the Alexa app, review FreeTime Unlimited activity via the Parent Dashboard, set bedtime limits, or pause the device whenever they’d like.

Balkam said his institute disclosed Amazon’s funding of its research on its website and the cover of its report. Amazon did not initiate the study. Balkam said the institute annually proposes a research project, and reaches out to its members, a group that also includes Facebook, Google, and Microsoft, who pay an annual stipend of $30,000. “Amazon stepped up and we worked with them. They gave us editorial control and we obviously gave them recognition for the financial support,” he said.

Balkam says Echo Dot Kids addresses concerns from parents about excessive screen time. “It’s screen-less, it’s very interactive, it’s kid friendly,” he said, pointing out Alexa skills that encourage kids to go outside.

In its review of the product, BuzzFeed wrote, “Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Sources:
https://www.wired.com/story/congress-privacy-groups-question-amazons-echo-dot-for-kids/

Let’s Get Rid of the “Nothing to Hide, Nothing to Fear” Mentality

With Zuckerberg testifying to the US Congress over Facebook’s data privacy practices and the implementation of GDPR fast approaching, the debate around data ownership has suddenly burst into the public psyche. Collecting user data to serve targeted advertising on a free platform is one thing; harvesting the social graphs of people interacting with apps and using them to sway an election is somewhat worse.

Suffice to say that neither of the above compare to the indiscriminate collection of ordinary civilians’ data on behalf of governments every day.

In 2013, Edward Snowden blew the whistle on the systematic US spy program he helped to architect. Perhaps the largest revelation to come out of the trove of documents he released was the detail of PRISM, an NSA program that collects internet communications data from US technology companies like Microsoft, Yahoo, Google, Facebook and Apple. The data collected included audio and video chat logs, photographs, emails, documents and connection logs of anyone using the services of nine leading US internet companies. PRISM benefited from changes to FISA that allowed warrantless domestic surveillance of any target without the need for probable cause. Bill Binney, a former US intelligence official, explains how, where corporate control wasn’t achievable, the NSA enticed third-party countries to clandestinely tap internet communication lines on the internet backbone via the RAMPART-A program. What this means is that the NSA was able to assemble near-complete dossiers of all web activity carried out by anyone using the internet.

But this is just in the US, right? Policies like this wouldn’t be implemented in Europe.

Unfortunately, wrong.

GCHQ, the UK’s intelligence agency, allegedly collects considerably more metadata than the NSA. Under Tempora, GCHQ can intercept all internet communications from submarine fibre-optic cables and store the information for 30 days at the Bude facility in Cornwall. This includes complete web histories and the contents of all emails and Facebook entries, and given that more than 25% of all internet communications flow through these cables, the implications are astronomical. Elsewhere, JTRIG, a unit of GCHQ, has intercepted private Facebook pictures, changed the results of online polls and spoofed websites in real time. A lot of these techniques have been made possible by the 2016 Investigatory Powers Act, which Snowden describes as the “most extreme surveillance in the history of western democracy”.

But despite all this, the age-old refrain “if you’ve got nothing to hide, you’ve got nothing to fear” often rings out in debates over privacy.

Indeed, the idea is so pervasive that politicians often lean on the phrase to justify ever more draconian methods of surveillance. Yes, they draw upon the selfsame rhetoric of Joseph Goebbels, propaganda minister for the Nazi regime.

In drafting legislation for the Investigatory Powers Act, Theresa May said that such extremes were necessary to ensure “no area of cyberspace becomes a haven for those who seek to harm us, to plot, poison minds and peddle hatred under the radar”.

When levelled against the fear of terrorism and death, it’s easy to see how people passively accept ever greater levels of surveillance. Indeed, Naomi Klein writes extensively in The Shock Doctrine about how the fear of external threats can be used as a smokescreen to implement ever more invasive policy. But indiscriminate mass surveillance should never be blindly accepted; privacy should be, and always will be, a social norm, despite what Mark Zuckerberg said in 2010. Although I’m sure he may have a different answer now.

You just read emails and look at cat memes online, so why would you care about privacy?

In the same way we’re able to close our living room curtains and be alone and unmonitored, we should be able to explore our identities online unimpinged. It’s a well-rehearsed idea that nowadays we’re more honest with our web browsers than we are with each other, but what happens when you become cognisant that everything you do online is intercepted and catalogued? As with CCTV, when we know we’re being watched, we alter our behaviour in line with what’s expected.

As soon as this happens online, the liberating quality provided by the anonymity of the internet is lost. Your thinking aligns with the status quo and we lose the boundless ability of the internet to search and develop our identities. No progress can be made when everyone thinks the same way. Difference of opinion fuels innovation.

This draws obvious comparisons with Bentham’s Panopticon, a prison blueprint for enforcing control from within. The basic setup is as follows: there is a central guard tower surrounded by cells. In the cells are prisoners. The tower shines bright light so that the watchman can see each inmate silhouetted in their cell, but the prisoners cannot see the watchman. The prisoners must assume they could be observed at any point and therefore act accordingly. In literature, the common comparison is Orwell’s 1984, where omnipresent government surveillance enforces control and distorts reality. With revelations about surveillance states, the relevance of these metaphors is plain to see.

In reality, there’s actually a lot more at stake here.

In the Panopticon, certain individuals are watched; in 1984, everyone is watched. On the modern internet, every person, irrespective of the threat they pose, is not only watched but has their information stored and archived for analysis.

Kafka’s The Trial, in which a bureaucracy uses citizens’ information to make decisions about them but denies them the ability to participate in how their information is used, therefore seems a more apt comparison. The issue here is that corporations and, even more so, states have been allowed to comb our data and make decisions that affect us without our consent.

Maybe, as a member of a western democracy, you don’t think this matters. But what if you’re a member of a minority group in an oppressive regime? What if you’re arrested because a computer algorithm can’t separate humour from intent to harm?

On the other hand, maybe you trust the intentions of your government, but how much faith do you have in them to keep your data private? The recent hack of the SEC shows that even government systems aren’t safe from attackers. When a business database is breached, maybe your credit card details become public; when a government database that has aggregated millions of data points on every aspect of your online life is hacked, you’ve lost all control of your ability to selectively reveal yourself to the world. Just as Lyndon Johnson sought to control physical clouds, he who controls the modern cloud will rule the world.

Perhaps you think that even this doesn’t matter; if it allows the government to protect us from those that intend to cause harm, then it’s worth the loss of privacy. The trouble with indiscriminate surveillance is that with so much data you see everything but, paradoxically, still know nothing.

Intelligence is the strategic collection of pertinent facts; bulk data collection cannot, therefore, be intelligent. As Bill Binney puts it, “bulk data kills people”, because technicians are so overwhelmed that they can’t isolate what’s useful. Data collection as it stands can only focus on retribution rather than reduction.

Granted, GDPR is a big step forward for individual consent, but will it stop corporations handing over your data to the government? Depending on how cynical you are, you might think that GDPR is just a tool to clean up and create more reliable, deterministic data anyway. The nothing-to-hide, nothing-to-fear mentality renders us passive supplicants in the removal of our civil liberties. We should be thinking about how we relate to one another and to our governments, and how much power we want to have in that relationship.

To paraphrase Edward Snowden, saying you don’t care about privacy because you’ve got nothing to hide is analogous to saying you don’t care about freedom of speech because you have nothing to say.

http://behindthebrowser.space/index.php/2018/04/22/nothing-to-fear-nothing-to-hide/

The most dangerous attack techniques, and what’s coming next (2018)

RSA Conference 2018

Experts from SANS presented the five most dangerous new cyber attack techniques in their annual RSA Conference 2018 keynote session in San Francisco, and shared their views on how they work, how they can be stopped or at least slowed, and how businesses and consumers can prepare.


The five threats outlined are:

1. Repositories and cloud storage data leakage
2. Big Data analytics, de-anonymization, and correlation
3. Attackers monetize compromised systems using crypto coin miners
4. Recognition of hardware flaws
5. More malware and attacks disrupting ICS and utilities instead of seeking profit.

Repositories and cloud storage data leakage

Ed Skoudis, lead for the SANS Penetration Testing Curriculum, talked about the data leakage threats facing us from the increased use of repositories and cloud storage:

“Software today is built in a very different way than it was 10 or even 5 years ago, with vast online code repositories for collaboration and cloud data storage hosting mission-critical applications. However, attackers are increasingly targeting these kinds of repositories and cloud storage infrastructures, looking for passwords, crypto keys, access tokens, and terabytes of sensitive data.”

He continued: “Defenders need to focus on data inventories, appointing a data curator for their organization and educating system architects and developers about how to secure data assets in the cloud. Additionally, the big cloud companies have each launched an AI service to help classify and defend data in their infrastructures. And finally, a variety of free tools are available that can help prevent and detect leakage of secrets through code repositories.”
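The free leak-detection tools Skoudis mentions generally work by pattern-matching known credential formats in code and commit history. As a rough illustration of the idea (the patterns below are simplified examples, not the rule set of any particular tool):

```python
import re

# Illustrative credential patterns; real scanners ship far larger rule
# sets and add entropy checks to cut down on false positives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs for likely secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Run over every file in a repository (and, crucially, over past commits), this catches the most common accidental leaks before an attacker does.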

Big Data analytics, de-anonymization, and correlation

Skoudis went on to talk about the threat of Big Data Analytics and how attackers are using data from several sources to de-anonymise users:

“In the past, we battled attackers who were trying to get access to our machines to steal data for criminal use. Now the battle is shifting from hacking machines to hacking data — gathering data from disparate sources and fusing it together to de-anonymise users, find business weaknesses and opportunities, or otherwise undermine an organisation’s mission. We still need to prevent attackers from gaining shell on targets to steal data. However, defenders also need to start analysing risks associated with how their seemingly innocuous data can be combined with data from other sources to introduce business risk, all while carefully considering the privacy implications of their data and its potential to tarnish a brand or invite regulatory scrutiny.”

Attackers monetize compromised systems using crypto coin miners

Johannes Ullrich is Dean of Research at the SANS Institute and Director of the SANS Internet Storm Center. He has been looking at the increasing use of crypto coin miners by cyber criminals:

“Last year, we talked about how ransomware was used to sell data back to its owner and crypto-currencies were the tool of choice to pay the ransom. More recently, we have found that attackers are no longer bothering with data. Due to the flood of stolen data offered for sale, the value of most commonly stolen data like credit card numbers or PII has dropped significantly. Attackers are instead installing crypto coin miners. These attacks are more stealthy and less likely to be discovered, and attackers can earn tens of thousands of dollars a month from crypto coin miners. Defenders therefore need to learn to detect these coin miners and to identify the vulnerabilities that have been exploited in order to install them.”
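One common detection approach is to flag outbound connections that match known mining-pool indicators. A minimal sketch of the idea (the domains below are invented for illustration; a real deployment would pull indicators from threat-intelligence feeds, alongside the well-known Stratum protocol ports):

```python
# Hypothetical indicator lists -- real defenders source mining-pool
# domains from threat-intel feeds; the ports are common Stratum ports.
MINER_DOMAINS = {"pool.example-xmr.net", "stratum.example-pool.io"}
STRATUM_PORTS = {3333, 4444, 5555, 7777, 14444}

def flag_miner_traffic(conn_log):
    """conn_log: iterable of (src_host, dest_domain, dest_port) tuples.
    Returns the set of source hosts whose traffic matches an indicator."""
    return {
        src for src, domain, port in conn_log
        if domain in MINER_DOMAINS or port in STRATUM_PORTS
    }
```

Any host this flags is worth investigating not just for the miner itself but for the vulnerability that let the attacker install it.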

Recognition of hardware flaws

Ullrich then went on to say that software developers often assume that hardware is flawless and that this is a dangerous assumption. He explains why and what needs to be done:

“Hardware is no less complex than software and mistakes have been made in developing hardware just as they are made by software developers. Patching hardware is a lot more difficult and often not possible without replacing entire systems or suffering significant performance penalties. Developers therefore need to learn to create software without relying on hardware to mitigate any security issues. Similar to the way in which software uses encryption on untrusted networks, software needs to authenticate and encrypt data within the system. Some emerging homomorphic encryption algorithms may allow developers to operate on encrypted data without having to decrypt it first.”


More malware and attacks disrupting ICS and utilities instead of seeking profit

Finally, James Lyne, Head of R&D at the SANS Institute, discussed the growing trend of malware and attacks that aren’t profit-centred, as we have largely seen in the past, but are instead focused on disrupting Industrial Control Systems (ICS) and utilities:

“Day to day the grand majority of malicious code has undeniably been focused on fraud and profit. Yet, with the relentless deployment of technology in our societies, the opportunity for political or even military influence only grows. And rare publicly visible attacks like Triton/TriSYS show the capability and intent of those who seek to compromise some of the highest risk components of industrial environments, i.e. the safety systems which have historically prevented critical security and safety meltdowns.”

He continued: “ICS systems are relatively immature and easy to exploit in comparison to the mainstream computing world. Many ICS systems lack the mitigations of modern operating systems and applications. The reliance on obscurity or isolation (both increasingly untrue) do not position them well to withstand a heightened focus on them, and we need to address this as an industry. More worrying is that attackers have demonstrated they have the inclination and resources to diversify their attacks, targeting the sensors that are used to provide data to the industrial controllers themselves. The next few years are likely to see some painful lessons being learned as this attack domain grows, since the mitigations are inconsistent and quite embryonic.”

Source: https://www.helpnetsecurity.com/2018/04/23/dangerous-attack-techniques/

Android’s trust problem

Illustration by William Joel / The Verge

Published today, a two-year study of Android security updates has revealed a distressing gap between the software patches Android companies claim to have on their devices and the ones they actually have. Your phone’s manufacturer may be lying to you about the security of your Android device. In fact, it appears that almost all of them do.

Coming at the end of a week dominated by Mark Zuckerberg’s congressional hearings and an ongoing Facebook privacy probe, this news might seem of lesser importance, but it goes to the same issue that has drawn lawmakers’ scrutiny to Facebook: the matter of trust. Facebook is the least-trusted big US tech company, and Android might just be the operating system equivalent of it: used by 2 billion people around the world, tolerated more than loved, and susceptible to major lapses in user privacy and security.

The gap between Android and its nemesis, Apple’s iOS, has always boiled down to trust. Unlike Google, Apple doesn’t make its money by tracking the behavior of its users, and unlike the vast and varied Android ecosystem, there are only ever a couple of iPhone models, each of which is updated with regularity and over a long period of time. Owning an iPhone, you can be confident that you’re among Apple’s priority users (even if Apple faces its own cohort of critics accusing it of planned obsolescence), whereas with an Android device, as evidenced today, you can’t even be sure that the security bulletins and updates you’re getting are truthful.

Android is perceived as untrustworthy in large part because it is. Beside the matter of security misrepresentations, here are some of the other major issues and villains plaguing the platform:

Version updates are slow, if they arrive at all. I’ve been covering Android since its earliest Cupcake days, and in the near-decade that’s passed, there’s never been a moment of contentment about the speed of OS updates. Things seemed to be getting even worse late last year when the November batch of new devices came loaded with 2016’s Android Nougat. Android Oreo is now nearly eight months old — meaning we’re closer to the launch of the next version of Android than the present one — and LG is still preparing to roll out that software for its 2017 flagship LG G6.

Promises about Android device updates are as ephemeral as Snapchat messages. Before it became the world’s biggest smartphone vendor, Samsung was notorious for reneging on Android upgrade promises. Sony’s Xperia Z3 infamously fell foul of an incompatibility between its Snapdragon processor and Google’s Android Nougat requirements, leaving it prematurely stuck without major OS updates. Whenever you have so many loud voices involved — carriers and chip suppliers along with Google and device manufacturers — the outcome of their collaboration is prone to becoming exactly as haphazard and unpredictable as Android software upgrades have become.

Google is obviously aware of the situation, and it’s pushing its Android One initiative to give people reassurances when buying an Android phone. Android One guarantees OS updates for at least two years and security updates for at least three years. But, as with most things Android, Android One is only available on a few devices, most of which are of the budget variety. You won’t find the big global names of Samsung, Huawei, and LG supporting it.

Some Android OEMs snoop on you. This is an ecosystem problem rather than something rooted in the operating system itself, but it still discolors Android’s public reputation. Android phone manufacturers habitually load their devices with bloatware (stuff you really don’t want or need on your phone), and some have even taken to loading up spyware. Blu’s devices were yanked from Amazon for doing exactly that: selling phones that were vulnerable to remote takeovers and could be exploited to have the user’s text messages and call records clandestinely recorded. OnePlus also got in trouble for having an overly inquisitive user analytics program, which beamed personally identifiable information back to the company’s HQ without explicit user consent.

Huawei is perhaps the most famous example of a potentially conflicted Android phone manufacturer, with US spy agencies openly urging citizens to avoid Huawei phones for their own security. No hard evidence has yet been presented of Huawei doing anything improper; however, the US is not the only country to express concern about the company’s relationship with the Chinese government — and mistrust is based as much on smoke as it is on the actual fire.

Android remains vulnerable, thanks in part to Google’s permissiveness. It’s noteworthy that, when Facebook’s data breach became public and people started looking into what data Facebook had on them, only their Android calls and messages had been collected. Why not the iPhone? Because Apple’s walled-garden philosophy makes it much harder, practically impossible, for a user to inadvertently give consent to privacy-eroding apps like Facebook’s Messenger to dig into their devices. Your data is simply better protected on iOS, and even though Android has taken significant steps forward in making app permissions more granular and specific, it’s still comparatively easy to mislead users about what data an app is obtaining and for what purposes.

Android hardware development is chaotic and unreliable. For many, the blistering, sometimes chaotic pace of change in Android devices is part of the ecosystem’s charm. It’s entertaining to watch companies try all sorts of zany and unlikely designs, with only the best of them surviving more than a few months. But the downside of all this speed is lack of attention being paid to small details and long-term sustainability.

LG made a huge promotional push two years ago around its modular G5 flagship, which was meant to usher in a new accessory ecosystem and elevate the flexibility of LG Android devices to new heights. Within six months, that modular project was abandoned, leaving anyone that bought modular LG accessories — on the expectation of multigenerational support — high and dry. And speaking of dryness, Sony recently got itself in trouble for overpromising by calling its Xperia phones “waterproof.”

Samsung’s Galaxy Note 7 is the best and starkest example of the dire consequences that can result from a hurried and excessively ambitious hardware development cycle. The Note 7 had a fatal battery flaw that led many people’s shiny new Samsung smartphones to spontaneously catch fire. Compare that to the iPhone’s pace of usually incremental changes, implemented at predictable intervals and with excruciating fastidiousness.


Besides pledging to deliver OS updates that never come, claiming to have delivered security updates that never arrived, and taking liberties with your personal data, Android OEMs also have a tendency to exaggerate what their phones can actually do. They don’t collaborate on much, so in spite of pouring great effort into developing their Android software experiences, they also feed the old steadfast complaint of a fragmented ecosystem.

The problem of trust with Android, much like the problem of trust in Facebook, is grounded in reality. It doesn’t matter that not all Android device makers engage in shady privacy invasion or overreaching marketing claims. The perception, like the Android brand, is collective.

https://www.theverge.com/2018/4/13/17233122/android-software-patch-trust-problem

World celebrates, cyber-snoops cry as TLS 1.3 internet crypto approved

 

Image Credits: kinsta.com

Forward-secrecy protocol comes with the 28th draft

A much-needed update to internet security has finally passed at the Internet Engineering Task Force (IETF), after four years and 28 drafts.

Internet engineers meeting in London, England, approved the updated TLS 1.3 protocol despite a wave of last-minute concerns that it could cause networking nightmares.

TLS 1.3 won unanimous approval (well, one “no objection” amid the yeses), paving the way for its widespread implementation and use in software and products from Oracle’s Java to Google’s Chrome browser.

The new protocol aims to comprehensively thwart any attempts by the NSA and other eavesdroppers to decrypt intercepted HTTPS connections and other encrypted network packets. TLS 1.3 should also speed up secure communications thanks to its streamlined approach.

The critical nature of the protocol, however, has meant that progress has been slow and, on occasion, controversial. This time last year, Google paused its plan to support the new protocol in Chrome when an IT schools administrator in Maryland reported that a third of the 50,000 Chromebooks he managed bricked themselves after being updated to use the tech.

Most recently, banks and businesses complained that, thanks to the way the new protocol does security, they will be cut off from being able to inspect and analyze TLS 1.3 encrypted traffic flowing through their networks, and so potentially be at greater risk from attack.

Unfortunately, that self-same ability to decrypt secure traffic on your own network can also be potentially used by third parties to grab and decrypt communications.

An effort to effectively insert a backdoor into the protocol was met with disdain and some anger by internet engineers, many of whom pointed out that it will still be possible to introduce middleware to monitor and analyze internal network traffic.

Nope

The backdoor proposal did not move forward, meaning the internet as a whole will become more secure and faster, while banks and similar outfits will have to do a little extra work to accommodate and inspect TLS 1.3 connections as required.

At the heart of the change – and the complaints – are two key elements: forward secrecy, and ephemeral encryption keys.

TLS – standing for Transport Layer Security – basically works by creating a secure connection between a client and a server – your laptop, for example, and a company’s website. All this is done before any real information is shared – like credit card details or personal information.

Under TLS 1.2 this is a fairly lengthy process that can take as much as half-a-second:

  • The client says hi to the server and offers a range of strong encryption systems it can work with
  • The server says hi back, explains which encryption system it will use and sends an encryption key
  • The client takes that key and uses it to encrypt and send back a random series of letters
  • Together they use this exchange to create two new keys: a master key and a session key – the master key being stronger; the session key weaker.
  • The client then says which encryption system it plans to use for the weaker, session key – which allows data to be sent much faster because it doesn’t have to be processed as much
  • The server acknowledges that system will be used, and then the two start sharing the actual information that the whole exchange is about

TLS 1.3 speeds that whole process up by bundling several steps together:

  • The client says hi, here’s the systems I plan to use
  • The server gets back saying hi, ok let’s use them, here’s my key, we should be good to go
  • The client responds saying, yep that all looks good, here are the session keys
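In practice, insisting on the newer protocol is a one-line configuration in most TLS libraries. A sketch using Python’s standard `ssl` module (this assumes Python 3.7+ linked against OpenSSL 1.1.1 or later, which is what ships TLS 1.3 support):

```python
import ssl

def tls13_only_context() -> ssl.SSLContext:
    """Build a client context that refuses to negotiate below TLS 1.3."""
    ctx = ssl.create_default_context()
    # Reject TLS 1.2 and older outright; a server that cannot speak
    # TLS 1.3 will simply fail the handshake.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

Wrapping a socket with `ctx.wrap_socket(...)` then guarantees the faster 1.3 handshake described above, or no connection at all.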

As well as being faster, TLS 1.3 is much more secure because it ditches many of the older encryption algorithms that TLS 1.2 supports that over the years people have managed to find holes in. Effectively the older crypto-systems potentially allowed miscreants to figure out what previous keys had been used (called “non-forward secrecy”) and so decrypt previous conversations.

A little less conversation

For example, snoopers could, under TLS 1.2, force the exchange to use older and weaker encryption algorithms that they knew how to crack.

People using TLS 1.3 will only be able to use more recent systems that are much harder to crack – at least for now. Any effort to force the conversation to use a weaker 1.2 system will be detected and flagged as a problem.

Another very important advantage to TLS 1.3 – but also one that some security experts are concerned about – is called “0-RTT Resumption”, which effectively allows the client and server to remember if they have spoken before, and so forego all the checks, using previous keys to start talking immediately.

That will make connections much faster, but the concern, of course, is that someone malicious could get hold of the “0-RTT Resumption” information and pose as one of the parties. Internet engineers are less concerned about this risk – which would require getting access to a machine – than about the TLS 1.2 weaknesses that allowed people to hijack and listen in on a conversation.
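Conceptually, 0-RTT resumption is a cache of material from a previous handshake, keyed by peer. A toy model of the caching idea (this is deliberately not the real TLS ticket mechanism, which uses encrypted session tickets with lifetimes and anti-replay mitigations):

```python
# Toy model of session resumption: remember a key agreed in a prior
# handshake and reuse it to skip the expensive negotiation entirely.
session_cache = {}

def connect(host, full_handshake):
    """full_handshake: a callable performing the expensive negotiation."""
    if host in session_cache:
        return session_cache[host]   # "0-RTT": reuse the prior key
    key = full_handshake()           # first contact: full handshake
    session_cache[host] = key
    return key
```

The security worry maps directly onto the cache: anyone who can read `session_cache` can impersonate a party, which is why real implementations encrypt and expire this state.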

In short, it’s a win-win but will require people to put in some effort to make it all work properly.

The big losers will be criminals and security services who will be shut out of secure communications – at least until they figure out a way to crack this new protocol. At which point the IETF will start on TLS 1.4. ®

Source: theregister.co.uk


An Overview of TLS 1.3 – Faster and More Secure

Updated on March 25, 2018

It has been over eight years since the last encryption protocol update, but the new TLS 1.3 has now been finalized as of March 21st, 2018. The exciting part for the WordPress community and customers here at Kinsta is that TLS 1.3 includes a lot of security and performance improvements. With the HTTP/2 protocol update in late 2015, and now TLS 1.3 in 2018, encrypted connections are now more secure and faster than ever. Read more below about the changes coming with TLS 1.3 and how it can benefit you as a WordPress site owner.

'TLS 1.3: Faster, Safer, Better, Everything.' 👍 — Filippo Valsorda

What is TLS?

TLS stands for Transport Layer Security and is the successor to SSL (Secure Sockets Layer). However, both these terms are commonly thrown around a lot online and you might see them both referred to as simply SSL. TLS provides secure communication between web browsers and servers. The connection itself is secure because symmetric cryptography is used to encrypt the data transmitted. The keys are uniquely generated for each connection and are based on a shared secret negotiated at the beginning of the session, also known as a TLS handshake. Many IP-based protocols, such as HTTPS, SMTP, POP3 and FTP, support TLS to encrypt data.

Web browsers utilize an SSL certificate, which allows them to verify that a site's identity has been digitally signed by a trusted certificate authority. Technically these are also known as TLS certificates, but most SSL providers stick with the term “SSL certificates” as this is generally more well known. SSL/TLS certificates provide the magic behind what many people simply know as the HTTPS that they see in their browser’s address bar.


TLS 1.3 vs TLS 1.2

The Internet Engineering Task Force (IETF) is the group that has been in charge of defining the TLS protocol, which has gone through many iterations. The previous version of TLS, TLS 1.2, was defined in RFC 5246 and has been in use for the past eight years by the majority of web browsers. As of March 21st, 2018, TLS 1.3 has now been finalized, after going through 28 drafts.

Companies such as Cloudflare are already making TLS 1.3 available to their customers. Filippo Valsorda had a great talk (see presentation below) on the differences between TLS 1.2 and TLS 1.3. In short, the major benefits of TLS 1.3 vs that of TLS 1.2 is faster speeds and improved security.

Speed Benefits of TLS 1.3

TLS and encrypted connections have always added a slight overhead when it comes to web performance. HTTP/2 definitely helped with this problem, but TLS 1.3 helps speed up encrypted connections even more with features such as TLS false start and Zero Round Trip Time (0-RTT).

To put it simply, with TLS 1.2, two round-trips have been needed to complete the TLS handshake. With 1.3, it requires only one round-trip, which in turn cuts the encryption latency in half. This helps those encrypted connections feel just a little bit snappier than before.
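The arithmetic behind that claim is simple; the 50 ms round-trip time below is an assumption purely for illustration:

```python
# Back-of-the-envelope numbers: halving handshake round trips
# halves handshake latency.
rtt_ms = 50  # assumed network round-trip time

tls12_handshake = 2 * rtt_ms   # two round trips under TLS 1.2
tls13_handshake = 1 * rtt_ms   # one round trip under TLS 1.3

assert tls13_handshake == tls12_handshake / 2
print(f"TLS 1.2: {tls12_handshake} ms, TLS 1.3: {tls13_handshake} ms")
```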


TLS 1.3 handshake performance

Another advantage of TLS 1.3 is that, in a sense, it remembers! On sites you have previously visited, you can now send data on the first message to the server. This is called “Zero Round Trip Time” (0-RTT). And yes, this also results in improved load times.

Improved Security With TLS 1.3

A big problem with TLS 1.2 is that it’s often not configured properly, which leaves websites vulnerable to attacks. TLS 1.3 now removes obsolete and insecure features from TLS 1.2, including the following:

  • SHA-1
  • RC4
  • DES
  • 3DES
  • AES-CBC
  • MD5
  • Arbitrary Diffie-Hellman groups — CVE-2016-0701
  • EXPORT-strength ciphers – Responsible for FREAK and LogJam
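One practical way to keep those obsolete algorithms off the table today is to pin a connection to TLS 1.3 outright. In Python 3.7+ (built against a sufficiently recent OpenSSL), that looks like the sketch below; server-side support is of course still required:

```python
import ssl

# A client context that refuses anything older than TLS 1.3,
# so none of the removed suites above can ever be negotiated.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```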

Because the protocol is, in a sense, simplified, administrators and developers are less likely to misconfigure it. Jessie Victors, a security consultant specializing in privacy-enhancing systems and applied cryptography, stated:

I am excited for the upcoming standard. I think we will see far fewer vulnerabilities and we will be able to trust TLS far more than we have in the past.

Google is also raising the bar: it has started warning users in Search Console that it is moving to TLS 1.2, as TLS 1.0 is no longer safe, with a final deadline of March 2018.

TLS 1.3 Browser Support

With Chrome 63, TLS 1.3 is enabled for outgoing connections. Support for TLS 1.3 was added back in Chrome 56 and is also supported by Chrome for Android.

TLS 1.3 is enabled by default in Firefox 52 and above (including Quantum). They are retaining an insecure fallback to TLS 1.2 until they know more about server tolerance and the 1.3 handshake.


TLS 1.3 browser support

With that being said, some SSL test services on the Internet don’t support TLS 1.3 yet, and neither do other browsers such as IE, Microsoft Edge, Opera, or Safari. Now that the protocol has been finalized, it will take a couple more months for the remaining browsers to catch up; most have support in development at the moment.

Cloudflare has an excellent article on why TLS 1.3 isn’t in browsers yet.

Summary

Just like with HTTP/2, TLS 1.3 is another exciting protocol update that we can expect to benefit from for years to come. Not only will encrypted (HTTPS) connections become faster, but they will also be more secure. Here’s to moving the web forward!

Source: https://kinsta.com/blog/tls-1-3/

Forget Facebook


Photo Credits: oe24.at – Copyrights of oe24.at reserved

Source: Techcrunch.com

Cambridge Analytica may have used Facebook’s data to influence your political opinions. But why does least-liked tech company Facebook have all this data about its users in the first place?

Let’s put aside Instagram, WhatsApp and other Facebook products for a minute. Facebook has built the world’s biggest social network. But that’s not what they sell. You’ve probably heard the internet saying “if a product is free, it means that you are the product.”

And it’s particularly true in this case because Facebook is the world’s second biggest advertising company, behind Google. During the last quarter of 2017, Facebook reported $12.97 billion in revenue, including $12.78 billion from ads.

That’s 98.5 percent of Facebook’s revenue coming from ads.

Ads aren’t necessarily a bad thing. But Facebook has reached ad saturation in the newsfeed. So the company has two options — creating new products and ad formats, or optimizing those sponsored posts.


This isn’t a zero-sum game — Facebook has been doing both at the same time. That’s why you’re seeing more ads on Instagram and Messenger. And that’s also why ads on Facebook seem more relevant than ever.

If Facebook can show you relevant ads and you end up clicking more often on those ads, then advertisers will pay Facebook more money.

So Facebook has been collecting as much personal data about you as possible — it’s all about showing you the best ad. The company knows your interests, what you buy, where you go and who you’re sleeping with.

You can’t hide from Facebook

Facebook’s terms and conditions are a giant lie. They are purposely misleading, too long and too broad. So you can’t just read the company’s terms of service and understand what it knows about you.

That’s why some people have been downloading their Facebook data. You can do it too, it’s quite easy. Just head over to your Facebook settings and click the tiny link that says “Download a copy of your Facebook data.”

In that archive file, you’ll find your photos, your posts, your events, etc. But if you keep digging, you’ll also find your private messages on Messenger (by default, nothing is encrypted).

And if you keep digging a bit more, chances are you’ll also find your entire address book and even metadata about your SMS messages and phone calls.

All of this is by design and you agreed to it. Facebook has unified its terms of service and shares user data across all its apps and services (except WhatsApp data in Europe, for now). So if you follow a clothing brand on Instagram, you could see an ad from this brand on Facebook.com.

Messaging apps are privacy traps

But Facebook has also been using this trick quite a lot with Messenger. You might not remember, but the on-boarding experience on Messenger is really aggressive.

On iOS, the app shows you a fake permission popup to access your address book that says “Ok” or “Learn More”. The company is using a fake popup because you can’t ask for permission twice.

There’s a blinking arrow below the OK button.

If you click on “Learn More”, you get a giant blue button that says “Turn On”. Everything about this screen is misleading and Messenger tries to manipulate your emotions.

“Messenger only works when you have people to talk to,” it says. Nobody wants to be lonely, that’s why Facebook implies that turning on this option will give you friends.

Even worse, it says “if you skip this step, you’ll need to add each contact one-by-one to message them.” This is simply a lie as you can automatically talk to your Facebook friends using Messenger without adding them one-by-one.


If you tap on “Not Now”, Messenger will show you a fake notification every now and then to push you to enable contact syncing. If you tap on yes and disable it later, Facebook still keeps all your contacts on its servers.

On Android, you can let Messenger manage your SMS messages. Of course, you guessed it, Facebook uploads all your metadata. Facebook knows who you’re texting, when, how often.

Even if you disable it later, Facebook will keep this data for later reference.

But Facebook doesn’t stop there. The company knows a lot more about you than what you can find in your downloaded archive. The company asks you to share your location with your friends. The company tracks your web history on nearly every website on earth using embedded JavaScript.

But my favorite thing is probably peer-to-peer payments. In some countries, you can pay back your friends using Messenger. It’s free! You just have to add your card to the app.

It turns out that Facebook also buys data about your offline purchases. The next time you pay for a burrito with your credit card, Facebook will learn about this transaction and match this credit card number with the one you added in Messenger.

In other words, Messenger is a great Trojan horse designed to learn everything about you.

And the next time an app asks you to share your address book, there’s a 99-percent chance that this app is going to mine your address book to get new users, spam your friends, improve ad targeting and sell email addresses to marketing companies.

I could say the same thing about all the other permission popups on your phone. Be careful when you install an app from the Play Store or open an app for the first time on iOS. It’s easier to enable something if a feature doesn’t work without it than to find out that Facebook knows everything about you.

GDPR to the rescue

There’s one last hope. And that hope is GDPR. I encourage you to read TechCrunch’s Natasha Lomas’ excellent explanation of GDPR to understand what the European regulation is all about.

Many of the misleading things that are currently happening at Facebook will have to change. You can’t force people to opt in like in Messenger. Data collection should be minimized to essential features. And Facebook will have to explain why it needs all this data to its users.

If Facebook doesn’t comply, the company will have to pay up to 4 percent of its global annual turnover. But that doesn’t stop you from actively reclaiming your online privacy right now.

You can’t be invisible on the internet, but you have to be conscious about what’s happening behind your back. Every time a company asks you to tap OK, think about what’s behind this popup. You can’t say that nobody told you.

Source: Techcrunch.com

What is GDPR – General Data Protection Regulation

Source Techcrunch.com

European Union lawmakers proposed a comprehensive update to the bloc’s data protection and privacy rules in 2012.

Their aim: To take account of seismic shifts in the handling of information wrought by the rise of the digital economy in the years since the prior regime was penned — all the way back in 1995 when Yahoo was the cutting edge of online cool and cookies were still just tasty biscuits.

Here’s the EU’s executive body, the Commission, summing up the goal:

The objective of this new set of rules is to give citizens back control over their personal data, and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market which the Commission has prioritised. The reform will allow European citizens and businesses to fully benefit from the digital economy.

For an even shorter version: the EC’s theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

The General Data Protection Regulation (aka GDPR) was agreed after more than three years of negotiations between the EU’s various institutions.

It’s set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK’s new Data Protection Bill). Yes, despite the fact the country is currently in the process of (br)exiting the EU, the UK government has nonetheless committed to implementing the regulation, because it needs to keep EU-UK data flowing freely in the post-brexit future. Which gives an early indication of the pulling power of GDPR.

Meanwhile businesses operating in the EU are being bombarded with ads from a freshly energized cottage industry of ‘privacy consultants’ offering to help them get ready for the new regs — in exchange for a service fee. It’s definitely a good time to be a law firm specializing in data protection.

GDPR is a significant piece of legislation whose full impact will clearly take some time to shake out. In the meanwhile, here’s our guide to the major changes incoming and some potential impacts.

Data protection + teeth

A major point of note right off the bat is that GDPR does not merely apply to EU businesses; any entities processing the personal data of EU citizens need to comply. Facebook, for example — a US company that handles massive amounts of Europeans’ personal data — is going to have to rework multiple business processes to comply with the new rules. Indeed, it’s been working on this for a long time already.

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support GDPR compliance — specifying this included “senior executives from all product teams, designers and user experience/testing executives, policy executives, legal executives and executives from each of the Facebook family of companies”.

“Dozens of people at Facebook Ireland are working full time on this effort,” it said, noting too that the data protection team at its European HQ (in Dublin, Ireland) would be growing by 250% in 2017. It also said it was in the process of hiring a “top quality data protection officer” — a position the company appears to still be taking applications for.

The new EU rules require organizations to appoint a data protection officer if they process sensitive data on a large scale (which Facebook very clearly does). Or are collecting info on many consumers — such as by performing online behavioral tracking. But, really, which online businesses aren’t doing that these days?

The extra-territorial scope of GDPR casts the European Union as a global pioneer in data protection — and some legal experts suggest the regulation will force privacy standards to rise outside the EU too.

Sure, some US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes, and treating personal data obtained from different geographies differently, i.e. rather than streamlining everything under a GDPR compliant process. But doing so means managing multiple data regimes. And at very least runs the risk of bad PR if you’re outed as deliberately offering a lower privacy standard to your home users vs customers abroad.

Ultimately, it may be easier (and less risky) for businesses to treat GDPR as the new ‘gold standard’ for how they handle all personal data, regardless of where it comes from.

And while not every company harvests Facebook levels of personal data, almost every company harvests some personal data. So for those with customers in the EU GDPR cannot be ignored. At very least businesses will need to carry out a data audit to understand their risks and liabilities.

Privacy experts suggest that the really big change here is around enforcement. Because while the EU has had long established data protection standards and rules — and treats privacy as a fundamental right — its regulators have lacked the teeth to command compliance.

But now, under GDPR, financial penalties for data protection violations step up massively.

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).
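The tiered system reduces to simple arithmetic. A sketch, with made-up turnover figures for illustration:

```python
def gdpr_max_fine(global_annual_turnover, serious=True):
    """Maximum GDPR fine in euros: the greater of a percentage of global
    annual turnover or a fixed floor, per the two tiers described above."""
    pct, floor = (4, 20_000_000) if serious else (2, 10_000_000)
    return max(global_annual_turnover * pct // 100, floor)

# A company turning over 40 billion euros faces up to 1.6 billion for a
# serious infringement...
assert gdpr_max_fine(40_000_000_000) == 1_600_000_000
# ...while a small firm is still exposed to the 20M-euro floor.
assert gdpr_max_fine(5_000_000) == 20_000_000
```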

This really is a massive change. Because while data protection agencies (DPAs) in different EU Member States can impose financial penalties for breaches of existing data laws these fines are relatively small — especially set against the revenues of the private sector entities that are getting sanctioned.

In the UK, for example, the Information Commissioner’s Office (ICO) can currently impose a maximum fine of just £500,000. Compare that to the annual revenue of tech giant Google (~$90BN) and you can see why a much larger stick is needed to police data processors.

It’s not necessarily the case that individual EU Member States are getting stronger privacy laws as a consequence of GDPR (in some instances countries have arguably had higher standards in their domestic law). But the beefing up of enforcement that’s baked into the new regime means there’s a better opportunity for DPAs to start to bark and bite like proper watchdogs.

GDPR inflating the financial risks around handling personal data should naturally drive up standards — because privacy laws are suddenly a whole lot more costly to ignore.

More types of personal data that are hot to handle

So what is personal data under GDPR? It’s any information relating to an identified or identifiable person (in regulator-speak, people are known as ‘data subjects’).

While ‘processing’ can mean any operation performed on personal data — from storing it to structuring it to feeding it to your AI models. (GDPR also includes some provisions specifically related to decisions generated as a result of automated data processing but more on that below).

A new provision concerns children’s personal data — with the regulation setting a 16-year-old age limit on kids’ ability to consent to their data being processed. However individual Member States can choose (and some have) to derogate from this by writing a lower age limit into their laws.

GDPR sets a hard floor of 13 years old (Member States cannot go any lower), making 13 the de facto minimum age for children to be able to sign up to digital services. So the impact on teens’ social media habits seems likely to be relatively limited.
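The age rule can be expressed as a tiny function: the GDPR default is 16, a Member State may write a lower limit into its law, but never below 13. (The function name is ours, for illustration.)

```python
def digital_consent_age(member_state_limit=None):
    """Age at which a child can consent to data processing under GDPR."""
    if member_state_limit is None:
        return 16  # the GDPR default
    return max(13, member_state_limit)  # 13 is the hard floor

assert digital_consent_age() == 16      # no derogation: default applies
assert digital_consent_age(14) == 14    # a valid lower national limit
assert digital_consent_age(12) == 13    # cannot derogate below 13
```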

The new rules generally expand the definition of personal data — so it can include information such as location data, online identifiers (such as IP addresses) and other metadata. So again, this means businesses really need to conduct an audit to identify all the types of personal data they hold. Ignorance is not compliance.

GDPR also encourages the use of pseudonymization — such as, for example, encrypting personal data and storing the encryption key separately and securely — as a pro-privacy, pro-security technique that can help minimize the risks of processing personal data. Although pseudonymized data is likely to still be considered personal data; certainly where a risk of reidentification remains. So it does not get a general pass from requirements under the regulation.
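A minimal pseudonymization sketch in the spirit described here: replace a direct identifier with a keyed hash, with the key stored separately from the data. The key and identifier below are illustrative; if the key leaks, re-identification becomes possible, which is exactly why pseudonymized data usually still counts as personal data.

```python
import hashlib
import hmac

# Illustrative only -- in practice this key lives in a separate,
# secured store, away from the pseudonymized records.
SECRET_KEY = b"stored-separately-from-the-data"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("jane.doe@example.com")
assert token != "jane.doe@example.com"                # no raw identifier stored
assert token == pseudonymize("jane.doe@example.com")  # still linkable for processing
```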

Data has to be rendered truly anonymous to be outside the scope of the regulation. (And given how often ‘anonymized’ data-sets have been shown to be re-identifiable, relying on any anonymizing process to be robust enough to have zero risk of re-identification seems, well, risky.)

To be clear, given GDPR’s running emphasis on data protection via data security, it is implicitly encouraging the use of encryption as more than a risk-reduction technique — i.e. as a way for data controllers to fulfill the regulation’s wider requirement to use “appropriate technical and organisational measures” against the risk posed by the personal data they are processing.

The incoming data protection rules apply to both data controllers (i.e. entities that determine the purpose and means of processing personal data) and data processors (entities that are responsible for processing data on behalf of a data controller — aka subcontractors).

Indeed, data processors have some direct compliance obligations under GDPR, and can also be held equally responsible for data violations, with individuals able to bring compensation claims directly against them, and DPAs able to hand them fines or other sanctions.

So the intent of the regulation is that there be no diminution of responsibility down the chain of data-handling subcontractors. GDPR aims to have every link in the processing chain be a robust one.

For companies that rely on a lot of subcontractors to handle data operations on their behalf there’s clearly a lot of risk assessment work to be done.

As noted above, there is a degree of leeway for EU Member States in how they implement some parts of the regulation (such as with the age of data consent for kids).

Consumer protection groups are calling for the UK government to include an optional GDPR provision on collective data redress to its DP bill, for example — a call the government has so far rebuffed.

But the wider aim is for the regulation to harmonize as much as possible data protection rules across all Member States to reduce the regulatory burden on digital businesses trading around the bloc.

On data redress, European privacy campaigner Max Schrems — most famous for his legal challenge to US government mass surveillance practices that resulted in a 15-year-old data transfer arrangement between the EU and US being struck down in 2015 — is currently running a crowdfunding campaign to set up a not-for-profit privacy enforcement organization to take advantage of the new rules and pursue strategic litigation on commercial privacy issues.

Schrems argues it’s simply not viable for individuals to take big tech giants to court to try to enforce their privacy rights, so he thinks there’s a gap in the regulatory landscape for an expert organization to work on EU citizens’ behalf. Not just pursuing strategic litigation in the public interest, but also promoting industry best practice.

The proposed data redress body — called noyb; short for: ‘none of your business’ — is being made possible because GDPR allows for collective enforcement of individuals’ data rights. And that provision could be crucial in spinning up a centre of enforcement gravity around the law. Because despite the position and role of DPAs being strengthened by GDPR, these bodies will still inevitably have limited resources vs the scope of the oversight task at hand.

Some may also lack the appetite to take on a fully fanged watchdog role. So campaigning consumer and privacy groups could certainly help pick up any slack.

Privacy by design and privacy by default

Another major change incoming via GDPR is ‘privacy by design’ no longer being just a nice idea; privacy by design and privacy by default become firm legal requirements.

This means there’s a requirement on data controllers to minimize processing of personal data — limiting activity to only what’s necessary for a specific purpose, carrying out privacy impact assessments and maintaining up-to-date records to prove out their compliance.

Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable. (And we’ve sure seen a whole lot of those hellish things in tech.) The core idea is that consent should be an ongoing, actively managed process; not a one-off rights grab.

As the UK’s ICO tells it, consent under GDPR for processing personal data means offering individuals “genuine choice and control” (for sensitive personal data the law requires a higher standard still — of explicit consent).

There are other legal bases for processing personal data under GDPR — such as contractual necessity; or compliance with a legal obligation under EU or Member State law; or for tasks carried out in the public interest — so it is not necessary to obtain consent in order to process someone’s personal data. But there must always be an appropriate legal basis for each processing.

Transparency is another major obligation under GDPR, which expands the notion that personal data must be lawfully and fairly processed to include a third principle of accountability. Hence the emphasis on data controllers needing to clearly communicate with data subjects — such as by informing them of the specific purpose of the data processing.

The obligation on data handlers to maintain scrupulous records of what information they hold, what they are doing with it, and how they are legally processing it, is also about being able to demonstrate compliance with GDPR’s data processing principles.

But — on the plus side for data controllers — GDPR removes the requirement to submit notifications to local DPAs about data processing activities. Instead, organizations must maintain detailed internal records — which a supervisory authority can always ask to see.

It’s also worth noting that companies processing data across borders in the EU may face scrutiny from DPAs in different Member States if they have users there (and are processing their personal data).

Although the GDPR sets out a so-called ‘one-stop-shop’ principle — that there should be a “lead” DPA to co-ordinate supervision between any “concerned” DPAs — this does not mean that, once it applies, a cross-EU-border operator like Facebook is only going to be answerable to the concerns of the Irish DPA.

Indeed, Facebook’s tactic of only claiming to be under the jurisdiction of a single EU DPA looks to be on borrowed time. And the one-stop-shop provision in the GDPR seems more about creating a co-operation mechanism to allow multiple DPAs to work together in instances where they have joint concerns, rather than offering a way for multinationals to go ‘forum shopping’ — which the regulation does not permit (per WP29 guidance).

Another change: Privacy policies that contain vague phrases like ‘We may use your personal data to develop new services’ or ‘We may use your personal data for research purposes’ will not pass muster under the new regime. So a wholesale rewriting of vague and/or confusingly worded T&Cs is something Europeans can look forward to this year.

Add to that, any changes to privacy policies must be clearly communicated to the user on an ongoing basis. Which means no more stale references in the privacy statement telling users to ‘regularly check for changes or updates’ — that just won’t be workable.

The onus is firmly on the data controller to keep the data subject fully informed of what is being done with their information. (Which almost implies that good data protection practice could end up tasting a bit like spam, from a user PoV.)

The overall intent behind GDPR is to inculcate an industry-wide shift in perspective regarding who ‘owns’ user data — disabusing companies of the notion that other people’s personal information belongs to them just because it happens to be sitting on their servers.

“Organizations should acknowledge they don’t exist to process personal data but they process personal data to do business,” is how analyst Gartner research director Bart Willemsen sums this up. “Where there is a reason to process the data, there is no problem. Where the reason ends, the processing should, too.”

The data protection officer (DPO) role that GDPR brings in as a requirement for many data handlers is intended to help them ensure compliance.

This officer, who must report to the highest level of management, is intended to operate independently within the organization, with warnings to avoid an internal appointment that could generate a conflict of interests.

Which types of organizations face the greatest liability risks under GDPR? “Those who deliberately seem to think privacy protection rights is inferior to business interest,” says Willemsen, adding: “A recent example would be Uber, regulated by the FTC and sanctioned to undergo 20 years of auditing. That may hurt perhaps similar, or even more, than a one-time financial sanction.”

“Eventually, the GDPR is like a speed limit: There not to make money off of those who speed, but to prevent people from speeding excessively as that prevents (privacy) accidents from happening,” he adds.

Another right to be forgotten

Under GDPR, people who have consented to their personal data being processed also have a suite of associated rights — including the right to access data held about them (a copy of the data must be provided to them free of charge, typically within a month of a request); the right to request rectification of incomplete or inaccurate personal data; the right to have their data deleted (another so-called ‘right to be forgotten’ — with some exemptions, such as for exercising freedom of expression and freedom of information); the right to restrict processing; the right to data portability (where relevant, a data subject’s personal data must be provided free of charge and in a structured, commonly used and machine readable form).

All these rights make it essential for organizations that process personal data to have systems in place which enable them to identify, access, edit and delete individual user data — and be able to perform these operations quickly, with a general 30 day time-limit for responding to individual rights requests.

GDPR also gives people who have consented to their data being processed the right to withdraw consent at any time. Let that one sink in.

Data controllers are also required to inform users about this right — and offer easy ways for them to withdraw consent. So no, you can’t bury a ‘revoke consent’ option in tiny lettering, five sub-menus deep. Nor can WhatsApp offer any more time-limited opt-outs for sharing user data with its parent multinational, Facebook. Users will have the right to change their mind whenever they like.

The EU lawmakers’ hope is that this suite of rights for consenting consumers will encourage respectful use of their data — given that, well, if you annoy consumers they can just tell you to sling yer hook and ask for a copy of their data to plug into your rival service to boot. So we’re back to that fostering trust idea.

Add in the ability for third party organizations to use GDPR’s provision for collective enforcement of individual data rights and there’s potential for bad actors and bad practice to become the target for some creative PR stunts that harness the power of collective action — like, say, a sudden flood of requests for a company to delete user data.

Data rights and privacy issues are certainly going to be in the news a whole lot more.

Getting serious about data breaches

But wait, there’s more! Another major change under GDPR relates to security incidents — aka data breaches (something else we’ve seen an awful, awful lot of in recent years) — with the regulation doing what the US still hasn’t been able to: Bringing in a universal standard for data breach disclosures.

GDPR requires that data controllers report any security incidents where personal data has been lost, stolen or otherwise accessed by unauthorized third parties to their DPA within 72 hours of them becoming aware of it. Yes, 72 hours. Not the best part of a year, like, er, Uber.

If a data breach is likely to result in a “high risk of adversely affecting individuals’ rights and freedoms” the regulation also implies you should ‘fess up even sooner than that — without “undue delay”.

Only in instances where a data controller assesses that a breach is unlikely to result in a risk to the rights and freedoms of “natural persons” are they exempt from the breach disclosure requirement (though they still need to document the incident internally, and record their reason for not informing a DPA in a document that DPAs can always ask to see).
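The disclosure rules above can be sketched as a simple decision routine. This is a hedged illustration only: the function name and inputs are invented, and the hard part in practice — the risk assessment itself — is assumed to have been done already and passed in as flags.

```python
from datetime import datetime, timedelta

DPA_DEADLINE = timedelta(hours=72)  # report to the DPA within 72 hours

def breach_actions(detected_at: datetime,
                   risky_to_individuals: bool,
                   high_risk: bool) -> list:
    """Return the notification steps GDPR implies for a breach.
    The two risk flags come from the controller's own assessment;
    this routine only encodes the reporting rules."""
    actions = ["document internally"]  # required in every case
    if not risky_to_individuals:
        # Exempt from disclosure, but must record the reasoning.
        actions.append("record reason for not notifying DPA")
        return actions
    actions.append(
        f"notify DPA by {detected_at + DPA_DEADLINE:%Y-%m-%d %H:%M}")
    if high_risk:
        # 'High risk' breaches imply telling affected people even sooner.
        actions.append("notify affected individuals without undue delay")
    return actions
```

In other words: document everything, and the only path that avoids notifying the DPA still leaves a paper trail the DPA can ask to see.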

“You should ensure you have robust breach detection, investigation and internal reporting procedures in place,” is the ICO’s guidance on this. “This will facilitate decision-making about whether or not you need to notify the relevant supervisory authority and the affected individuals.”

The new rules generally put strong emphasis on data security and on the need for data controllers to ensure that personal data is only processed in a manner that ensures it is safeguarded.

Here again, GDPR’s requirements are backed up by the risk of supersized fines. So suddenly sloppy security could cost your business big — not only in reputation terms, as now, but on the bottom line too. So it really must be a C-suite concern going forward.

Nor is subcontracting a way to shirk your data security obligations. Quite the opposite. Having a written contract in place between a data controller and a data processor was a requirement before GDPR but contract requirements are wider now and there are some specific terms that must be included in the contract, as a minimum.

Breach reporting requirements must also be set out in the contract between processor and controller. If a data controller is using a data processor and it’s the processor that suffers a breach, they’re required to inform the controller as soon as they become aware. The controller then has the same disclosure obligations as per usual.

Essentially, data controllers remain liable for their own compliance with GDPR. And the ICO warns they must only appoint processors who can provide “sufficient guarantees” that the regulatory requirements will be met and the rights of data subjects protected.

tl;dr, be careful who and how you subcontract.

Right to human review for some AI decisions

Article 22 of GDPR places certain restrictions on entirely automated decisions based on profiling individuals — but only in instances where these human-less acts have a legal or similarly significant effect on the people involved.

There are also some exemptions to the restrictions — where automated processing is necessary for entering into (or performance of) a contract between an organization and the individual; or where it’s authorized by law (e.g. for the purposes of detecting fraud or tax evasion); or where an individual has explicitly consented to the processing.

In its guidance, the ICO specifies that the restriction only applies where the decision has a “serious negative impact on an individual”.

Suggested examples of the types of AI-only decisions that will face restrictions are automatic refusal of an online credit application or e-recruiting practices without human intervention.

The provision on automated decisions is not a new right, having been carried over from the 1995 data protection directive. But it has attracted fresh attention — given the rampant rise of machine learning technology — as a potential route for GDPR to place a check on the power of AI blackboxes to determine the trajectory of humankind.

The real-world impact will probably be rather more prosaic, though. And experts suggest it does not seem likely that the regulation, as drafted, equates to a right for people to be given detailed explanations of how algorithms work.

Though as AI proliferates and touches more and more decisions, and as its impacts on people and society become ever more evident, pressure may well grow for proper regulatory oversight of algorithmic blackboxes.

In the meanwhile, what GDPR does in instances where restrictions apply to automated decisions is require data controllers to provide some information to individuals about the logic of an automated decision.

They are also obliged to take steps to prevent errors, bias and discrimination. So there’s a whiff of algorithmic accountability. Though it may well take court and regulatory judgements to determine how stiff those steps need to be in practice.

Individuals do also have a right to challenge and request a (human) review of an automated decision in the restricted class.

Here again the intention is to help people understand how their data is being used. And to offer a degree of protection (in the form of a manual review) if a person feels unfairly and harmfully judged by an AI process.
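As an illustration only — a toy scoring example, not how any real lender or regulator works — an Article 22-aware decision pipeline might bundle each automated decision with the "logic" information the controller must be able to provide, plus a flag routing refusals to a human-review queue:

```python
def automated_decision(score: float, threshold: float = 0.5) -> dict:
    """Toy credit-style decision (hypothetical score and threshold).
    Under Article 22, a solely automated refusal with significant
    effect must be explainable and open to human review."""
    decision = "approve" if score >= threshold else "refuse"
    return {
        "decision": decision,
        # The 'logic of the decision' the controller must be able to share.
        "logic": f"score {score:.2f} compared against threshold {threshold:.2f}",
        # Refusals are eligible for manual review if the person requests it.
        "human_review_available": decision == "refuse",
    }
```

The design point is that explainability and reviewability are recorded at decision time, rather than reconstructed after a complaint arrives.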

The regulation also places some restrictions on the practice of using data to profile individuals if the data itself is sensitive data — e.g. health data, political beliefs, religious affiliation, etc. — requiring explicit consent for doing so. Or else that the processing is necessary for substantial public interest reasons (and lies within EU or Member State law).

While profiling based on other types of personal data does not require obtaining consent from the individuals concerned, it still needs a legal basis and there is still a transparency requirement — which means service providers will need to inform users they are being profiled, and explain what it means for them.

And people also always have the right to object to profiling activity based on their personal data.

 

Source: https://techcrunch.com/2018/01/20/wtf-is-gdpr/

Iridium – The Making of the Largest Satellite Constellation in History


How Iridium rose from its ashes to launch the era of satellite megaprojects.

An artist’s rendering of an Iridium Next satellite in orbit. Image: Iridium

Matt Desch didn’t set out to change the world, but he just might do it anyway. As the CEO of Iridium, the only company that provides satellite communications to every inch of the globe, he is at the helm of Next, a fleet of telecommunications satellites that is arguably one of the most ambitious space projects ever undertaken.

By this time next year, Iridium will have sent 75 Next satellites into orbit. Each one will be replacing a first-generation Iridium satellite that has been in orbit for almost two decades. Once these new satellites are in place, they will establish radio contact with one another over thousands of miles of empty space to create the largest and most complex mesh network ever placed in orbit.

Like the first-generation network, these Iridium Next satellites will provide critical phone and data services to everyone from scientists in Antarctica and military contractors in the Middle East, to drug mules in Central America and climbers on the summit of Mount Everest.

A payload capsule filled with 10 Iridium Next satellites on board a Falcon 9 rocket ahead of the Iridium-3 launch in October 2017. Image: Daniel Oberhaus/Motherboard

But the Next constellation also comes with a suite of new features. Not only will it provide an orbital backbone for the expanding Internet of Things, the satellite network will also be tracking planes and ships in regions they’ve never been tracked before. Almost 70 percent of the Earth isn’t covered by radar, which is why the ill-fated Malaysia Airlines flight was able to disappear into the ocean in 2014 without a trace. Iridium hopes to make these sorts of tragedies obsolete.

That’s if everything goes according to plan, of course, and Iridium isn’t exactly known for its successes. A little under 20 years ago, it was the subject of one of the largest corporate bankruptcies in history. Indeed, the original fleet of satellites that Iridium is replacing with its Next constellation came within hours of a fiery demise, after the company decided to cut its losses after filing for bankruptcy and deorbit them.

In this sense, each Iridium Next launch not only represents the culmination of several years’ worth of intensive design, research, and testing by an international team of scientists and engineers, but also the dogged pursuit of a (quite literally) lofty goal in the face of overwhelmingly bad odds. In order to get a better understanding of the stakes, I followed the next generation of Iridium satellites from their birth in a warehouse in the Arizona desert to their delivery to orbit nearly 500 miles overhead.

*

Last October, I met Desch, who has been Iridium’s CEO for nearly 12 years, for breakfast at a restaurant in Solvang, California. We had just finished watching the third batch of ten Iridium satellites get delivered to orbit aboard a SpaceX Falcon 9 rocket that launched from nearby Vandenberg Air Force Base. It was well before noon, but both Desch and I had been awake for hours.

A SpaceX rocket carrying the third batch of Iridium satellites launches from Vandenberg Air Force base on October 9, 2017. Image: Daniel Oberhaus/Motherboard

As we devoured our avocado toast, Iridium engineers on the East Coast were busy maneuvering the satellites into their orbital planes while Desch explained why SpaceX and Iridium are ideal partners.

“In many ways, the Falcon 9 was built around the Iridium payload because we were the first ones to work with SpaceX,” Desch told me. “Launching is about a third of our costs and if SpaceX wasn’t around, I just couldn’t afford it.”

On the other hand, Iridium, which was SpaceX’s first customer and remains its largest, provides SpaceX with a major source of revenue that it won’t find anywhere else in the commercial sector. As Desch bluntly put it, “Nobody launches 75 satellites.”

Desch’s faith in Elon Musk’s rockets hasn’t wavered over the years, but watching the Falcon 9 rocket explode on the launch pad in September 2016 made an impression on him. The first ten Iridium satellites had been scheduled to head to orbit a few weeks later, but the explosion delayed deployment until last January.

SpaceX touts its “flight proven” rockets as a cost-saving measure, but the extra risk that comes with flying on a used rocket is part of the reason why Iridium’s original contract with the private spaceflight company specified that its cargo would never be flown on a Falcon 9 booster that had previously been to space. On the other hand, the rocket that claimed a Facebook satellite that September had been brand new, a forceful reminder that when it comes to space travel there are no guarantees.

Flying “used” would save Iridium some money, but it’s a gamble: Each of Iridium’s payloads is worth a quarter billion dollars and the loss of even a single one would be disastrous. After lengthy talks with SpaceX and his insurance providers, however, Desch recently opted to send the fourth and fifth batches of Iridium satellites on flight-proven rockets. Given the high stakes of each launch, this speaks volumes about his confidence in Musk’s company.

On December 22, SpaceX carried a batch of Iridium satellites to orbit on a previously flown rocket for the first time. Whatever Desch’s anxieties were about flying used, they proved to be unfounded—the launch went flawlessly.

How to Build a Satellite

Iridium’s journey to space begins in a nondescript building in pastoral Gilbert, Arizona, a Phoenix suburb. This building is home to Orbital ATK, the aerospace company overseeing the manufacturing process of the Iridium Next constellation, and sits just across the street from a farm where a handful of cows spend their days idly chewing cud. The sterile hallways of the Orbital ATK building are cavernous and lined with doors plastered with warnings that these rooms house strictly controlled materials.

Technicians prepare an Iridium satellite at the Orbital factory. Image: Iridium

Under the International Traffic in Arms Regulations (ITAR), many satellite parts are subject to the same stringent oversight as weapons like tanks and hand grenades. These technologies are usually off limits to civilian eyes and can’t be shared with foreign governments. This regulation has been a pain in the side of the space industry for years, but for a visitor to Orbital ATK’s facilities like myself, it mostly meant I wouldn’t be bringing a camera inside.

The facility consists of five massive clean rooms for storing and assembling satellites. Orbital ATK has registered each of these as a Foreign Trade Zone, a legal distinction that is the industrial equivalent to a duty free shop at an airport and saves the company from paying steep taxes on imported parts. The largest FTZ is reserved for the manufacture of Iridium satellites, consists of 18 workstations, and has technicians in cleanroom suits working on site 24/7.

Most satellites are unique and their production is a painstaking process that can last years. This wasn’t going to work for Iridium: The company needed to manufacture 81 satellites (75 to be placed in orbit with six remaining on the ground as spares) in the amount of time it usually takes to make one. In short, Iridium tasked Orbital ATK with doing for aerospace what Henry Ford had done for the automobile.

This was a challenge for a relatively small company, but it had some experience in the area. In the late 90s, Orbital also oversaw the construction of the first constellation of Iridium satellites, when the idea of mass producing satellites was totally unprecedented and deemed by many to be impossible.

Many of the engineers that worked on that first constellation are still at the company today, but this time the design of the Next satellites was overseen by the French aerospace company Thales.

Thales had a legacy satellite design that was adapted for the Iridium payloads, but according to the Orbital ATK and Thales engineers I spoke with, the design collaboration process could still be painstakingly slow due to ITAR restrictions. Often, Thales would send its preliminary designs to Orbital ATK, only to find a number of adjustments needed to be made, even though the exact nature of these adjustments was unclear due to restrictions on the sharing of component designs.

After a drawn-out revision process, the satellite designs were passed off to Orbital ATK, which began to manufacture seven of the planned 81 satellites. These seven satellites are the only ones that run the gamut of testing, which includes subjecting them to intense vibrations, electromagnetic interference and acoustic tests that Michael Pickett, Orbital ATK’s director of program management, described to me as “blasting the satellites with the biggest, baddest rock concert speakers you can imagine.”

The point of these tests was to validate the design process—if the satellites still worked after this mechanical torture, it meant the other 74 satellites that would pass through the assembly line should work fine, too.

After the design validation tests, Orbital ATK kicked its 18-station assembly line into high gear. Until the last one is finished, sometime in early 2018, the assembly line will see five to six satellites from start to completion each month. The process starts with assembling the different parts of the satellite body—called the “bus”—which is about the size of a Mini Cooper. Once the bus is completed, Orbital ATK technicians begin testing the satellite’s electronic components and communication modules, antenna alignment, and insulation.

Toward the end of the assembly line, each satellite is placed in a thermal chamber for 12 days and is subjected to extremely high and low temperatures to see how it will hold up in the hostile space environment. If the satellite survives, it progresses to station 15, where the star tracker that will be used to track the satellite’s position in orbit is installed. This station also holds sentimental value for the Orbital ATK engineers—it’s where a plate dedicating the craft to a specific employee or investor is installed.

10 Iridium satellites are loaded into a specially designed payload capsule to be placed atop a SpaceX Falcon 9 rocket. Image: Iridium

Next, the satellites’ fuel tanks are filled with helium and the craft is placed in an airtight tent that is pumped full of nitrogen. Because helium atoms are much smaller than nitrogen molecules, the engineers are able to detect any leaks that might’ve cropped up during the assembly process.

Finally, the solar panels that will serve as a power supply for the satellite are installed and the craft is basically ready for orbit. Iridium then has to decide whether or not the satellite will be one of the ten going up on the next SpaceX rocket. If not, it will join a few dozen other satellites stacked two-high and covered with black tarps inside a large storage room, until they’re ready for launch.

If a finished satellite is destined to go up on the next launch, its batteries will be removed from cold storage and installed on the craft at the last station. Here, the satellite’s software is uploaded to the craft and Iridium technicians at its Satellite Network Operations Center in Virginia establish contact with the satellite to make sure they will be able to communicate with the craft while it’s in orbit. The satellite is then loaded into a custom shipping container for its journey to Vandenberg Air Force Base in California, its final stop on Earth.

After liftoff, Iridium’s global network of technicians takes over to make sure the satellites are communicating with one another and the gateways on Earth as they are maneuvered into orbit—a process that can take up to a month. Once they are properly in orbit, the satellites will not only begin talking to one another and Iridium devices on the ground, but also one of the three gateways that link the satellite constellation to the telecommunications infrastructure that most of us use on a daily basis.

*

When I visited Iridium’s main gateway in Tempe, Arizona, it was a hive of activity. (The other two gateways are exclusively for the US Department of Defense and Russia, respectively.) Iridium technicians sat in a darkened control room watching satellites tear across a projection of the Earth as they monitored calls being routed through the network for any signs of an error.

Iridium’s main satellite network operations center in Virginia. Image: Iridium

According to Stuart Fankhauser, Iridium’s vice president of network operations, it wasn’t so long ago that the operating room was a dead zone as the company teetered on the brink of bankruptcy.

“I was one of the last guys here,” Fankhauser told me. “It was just me and four others, and we ran everything. It was very quiet, very eerie.”

Iridium was spun off as a subsidiary of Motorola in the 1990s. Its first-generation constellation was originally expected to cost around $3.5 billion, but by the time the satellites were functional, Motorola had sunk almost twice that much into the revolutionary project.

To make matters worse, in the early 90s there were hardly any customers for Iridium’s satellite phone service, the constellation’s raison d’être. This was partly due to the unwieldy size of the company’s satellite phone (affectionately known as “the brick”), but mostly a consequence of its cost: thousands of dollars for the phone and then around $7/minute to make a call. Before it filed for bankruptcy, Iridium had roughly 60,000 customers, but increasing cell phone coverage in the United States and Europe meant that the very populations that could afford the device had increasingly little need for it.

In August 1999, Motorola pulled the plug on its ambitious satellite project and Iridium filed for bankruptcy, just nine months after the satellite constellation went live. Unless another company bought it out—and by 1999, no investor in their right mind would touch this space-age Icarus—Iridium would be done for.

Enter Dan Colussy, the former president of Pan American World Airways who had been monitoring Iridium’s troubles with great interest. Just hours before Motorola was scheduled to begin the deorbiting process, Colussy was busy negotiating terms with the Department of Defense, Saudi financiers, and Motorola executives that would allow him to formally purchase the company for around $35 million, a fraction of the billions of dollars the tech giant had sunk into it.

Iridium VP of satellite operations Walt Everetts with one of the company’s grounded test satellites in Tempe, Arizona. Image: Daniel Oberhaus/Motherboard

As detailed in Eccentric Orbits, journalist John Bloom’s tell-all account of Iridium’s unlikely rise and spectacular failure, Colussy, who didn’t respond to my requests for comment, knew the satellite constellation would be good for something, he just wasn’t sure what. Iridium was an unprecedented technological feat that simply seemed too valuable to allow it to burn up in the atmosphere. Sixteen years later, Colussy’s intuition is finally being proven correct, but Iridium’s employees told me it was an uphill battle to make it here.

“We had to be pretty scrappy back then,” Fankhauser told me as we walked through Iridium’s Tempe gateway facility. “We were burning our investors’ money so we bought supplies off eBay and liquidators from failing dotcoms. It was humbling.”

The gateway is one half of Iridium’s backend operations. The other half, a test facility just down the road, is constantly probing the satellite network for weaknesses or seeking ways to improve Iridium’s service. Inside the test facility are two large Faraday cages filled with every type of Iridium phone and IoT device that operates on the Iridium network.

These devices are used to call two partially disassembled satellites in another room that have been programmed to “think” they’re in orbit in order to test the load-bearing capacity of the operational network and the compatibility of various devices. On the day I visited, Iridium engineers had successfully managed to place 1,700 simultaneous calls through a single satellite, a company record.

Walt Everetts is Iridium’s vice president of satellite operations and ground development and oversees the day to day activities at the test center. He showed me a large interactive map in the test center’s lobby that depicted calls on the network in real time. Small colored dots indicated different types of services: someone in Dubai calling someone in China or a sailor in the Atlantic checking their email. But the vast majority of the dots indicated machines talking to other machines.

Iridium satellites waiting to be loaded onto a Falcon 9 rocket. The white box at the top is the Aireon hosted payload, which will track planes in regions of the world that are outside of ground-based RADAR range. Image: Iridium

As Everetts and Desch both pointed out, the growth of the Internet of Things is a key reason why they believe Iridium will be a viable company this time around. Although facilitating calls between humans is still a core component of the company’s business, most of the network is used to route information between computers, whether these are tsunami monitoring buoys in the middle of the ocean, or chips implanted to track endangered animals. In fact, Desch said over half of Iridium’s almost 1 million subscriptions are now IoT companies looking to connect their devices.

End of an Era

Iridium is one of the few companies in the aerospace sector, alongside SpaceX, that can claim to have anything close to a fan club. Aside from the customers actually using its products, there are a number of astroheads around the world cataloging a phenomenon known as Iridium flares. The original satellites were designed with a large reflective surface, meaning at certain angles the sunlight produces a bright flare that can be seen with the naked eye at night, even in light polluted cities.

This passion was also expressed by Iridium technicians I spoke with, who had an uncanny habit of referring to the satellites in the same manner a parent might speak about their child. Indeed, each one has a name—Everetts has named two after his sons—and this adds a degree of gravitas to the deorbiting process, which can take anywhere from a few days to several months, depending on how much fuel is left in a satellite. At the time of writing, six of the original Iridium satellites launched in the late 90s have been successfully deorbited. Several others will meet their demise later this year.

Yet the end of the original Iridium network is also the beginning of something new—not just for the company, but for space exploration as a whole.

Today, companies like OneWeb, Boeing, and SpaceX are in a race to create massive broadband networks consisting of hundreds of satellites, but Iridium doesn’t see them as much of a threat. In the first place, most of these planned constellations are for consumer broadband, whereas Iridium provides specialized services. More importantly, however, none of these companies has even come close to getting those projects off the ground.

This race to take the internet to space calls to mind other ambitious and ill-fated satellite telecommunications companies of the 90s, like Teledesic and Globalstar. Both these companies collapsed after Iridium filed for bankruptcy and demonstrated that there wasn’t a real market for satellite phones. It’s highly likely this new crop is carefully watching Iridium, just like the space industry did back in the 90s, to see whether its ambitious program will work. So far, everything has gone off without a hitch.

But 35 more satellites still need to be launched. And there is plenty of opportunity for error. The gamble is a big one, and so is the reward—being the first, and only, company to provide communications coverage to every inch of the globe.

“We’re the only company that’s ever launched this many satellites and we’re the only one still doing it today,” Desch told me. “But because of our success, we’ve inspired a whole new industry: This is the era of Low Earth Orbit satellite megaconstellations.”

Source: https://motherboard.vice.com/en_us/article/d34bmk/iridium-next-satellite-constellation-spacex-desch

Secure your Privacy – HERE’S WHY YOU SHOULD USE SIGNAL

Source: https://www.wired.com/story/ditch-all-those-other-messaging-apps-heres-why-you-should-use-signal/

STOP ME IF you’ve heard this before. You text a friend to finalize plans, anxiously awaiting their reply, only to get a message from them on Snapchat to say your latest story was hilarious. So, you move the conversation over to Snapchat, decide to meet up at 10:30, but then you close the app and can’t remember if you agreed on meeting at Hannegan’s or that poppin’ new brewery downtown. You can’t go back and look at the message since Snapchat messages have a short shelf life, so you send a text, but your friend has already proven to be an unreliable texter. You’d be lucky if they got back to you by midnight.

All of this illustrates a plain truth. There are just too many messaging apps. As conversations can bounce between Snapchat, iMessage, Skype, Instagram, Twitter, and Hangouts/Allo or whatever Google’s latest attempt at messaging is, they’re rendered confusing and unsearchable. We could stick to SMS, but it’s pretty limited compared to other options, and it has some security holes. Rather than just chugging along with a dozen chat apps, letting your notifications pile up, it’s time to pick one messaging app and get all of your friends on board. That way, everyone can just pick up their phones and shoot a message to anyone without hesitation.

Here comes the easy part. There’s one messaging app we should all be using: Signal. It has strong encryption, it’s free, it works on every mobile platform, and the developers are committed to keeping it simple and fast by not mucking up the experience with ads, web-tracking, stickers, or animated poop emoji.

Tales From the Crypto

Signal looks and works a lot like other basic messaging apps, so it’s easy to get started. It’s especially convenient if you have friends and family overseas because, like iMessage and WhatsApp, Signal lets you sidestep expensive international SMS fees. It also supports voice and video calls, so you can cut out Skype and FaceTime. Sure, you don’t get fancy stickers or games like some of the competition, but you can still send pictures, videos, and documents. It’s available on iOS, Android, and desktop.

But plenty of apps have all that stuff. The thing that actually makes Signal superior is that it’s easy to ensure that the contents of every chat remain private and unable to be read by anyone else. As long as both parties are using the app to message each other, every single message sent with Signal is encrypted. Also, the encryption Signal uses is available under an open-source license, so experts have had the chance to test and poke the app to make sure it stays as secure as intended.

If you’re super concerned about messages being read by the wrong eyes, Signal lets you force individual conversations to delete themselves after a designated amount of time. Signal’s security doesn’t stop at texts. All of your calls are encrypted, so nobody can listen in. Even if you have nothing to hide, it’s nice to know that your private life is kept, you know, private.

What About WhatsApp

Yes, this list of features sounds a lot like WhatsApp. It’s true, the Facebook-owned messaging app has over a billion users, offers most of the same features, and even employs Signal’s encryption to keep chats private. But WhatsApp raises a few concerns that Signal doesn’t. First, it’s owned by Facebook, a company whose primary interest is in collecting information about you to sell you ads. That alone may steer away those who feel Facebook already knows too much about us. Even though the contents of your WhatsApp messages are encrypted, Facebook can still extract metadata from your habits, like who you’re talking to and how frequently.

Still, if you use WhatsApp, chances are you already know a lot of other people who are using it. Getting all of them to switch to Signal is highly unlikely. And you know, that’s OK—WhatsApp really is the next-best option to Signal. The encryption is just as strong, and while it isn’t as cleanly stripped of extraneous features as Signal, that massive user base makes it easy to reach almost anyone in your contact list.

Chat Heads

While we’re talking about Facebook, it’s worth noting that the company’s Messenger app isn’t the safest place to keep your conversations. Aside from all the clutter inside the app, the two biggest issues with Facebook Messenger are that you have to encrypt conversations individually by flipping on the “Secret Conversations” option (good luck remembering to do that), and that anyone with a Facebook profile can just search for your name and send you a message. (Yikes!) There are too many variables in the app, and a lot of the security is out of your hands. iMessage may seem like a solid remedy to all of these woes, but it’s tucked behind Apple’s walled iOS garden, so you’re bound to leave out your closest friends who use Android devices. And if you ever switch platforms, say bye-bye to your chat history.

Signal isn’t going to win a lot of fans among those who’ve grown used to the more novel features inside their chat apps. There are no stickers, and no animoji. Still, as privacy issues come to the fore in the minds of users, and as mobile messaging options proliferate, and as notifications pile up, everyone will be searching for a path to sanity. It’s easy to invite people to Signal. Once you’re using it, just tap the “invite” button inside the chat window, and your friend will be sent a link to download the app. Even stubborn people who only send texts can get into it—Signal can be set as your phone’s default SMS client, so the pain involved in the switch is minimal.

So let’s make a pact right now. Let’s all switch to Signal, keep our messages private, and finally put an end to the untenable multi-app shuffle that’s gone on far too long.