Android’s trust problem

Illustration by William Joel / The Verge

Published today, a two-year study of Android security updates has revealed a distressing gap between the software patches Android companies claim to have on their devices and the ones they actually have. Your phone’s manufacturer may be lying to you about the security of your Android device. In fact, it appears that almost all of them do.

Coming at the end of a week dominated by Mark Zuckerberg’s congressional hearings and an ongoing Facebook privacy probe, this news might seem of lesser importance, but it goes to the same issue that has drawn lawmakers’ scrutiny to Facebook: the matter of trust. Facebook is the least-trusted big US tech company, and Android might just be the operating system equivalent of it: used by 2 billion people around the world, tolerated more than loved, and susceptible to major lapses in user privacy and security.

The gap between Android and its nemesis, Apple’s iOS, has always boiled down to trust. Unlike Google, Apple doesn’t make its money by tracking the behavior of its users, and unlike the vast and varied Android ecosystem, there are only ever a couple of iPhone models, each of which is updated with regularity and over a long period of time. Owning an iPhone, you can be confident that you’re among Apple’s priority users (even if Apple faces its own cohort of critics accusing it of planned obsolescence), whereas with an Android device, as evidenced today, you can’t even be sure that the security bulletins and updates you’re getting are truthful.

Android is perceived as untrustworthy in large part because it is. Besides the matter of security misrepresentations, here are some of the other major issues and villains plaguing the platform:

Version updates are slow, if they arrive at all. I’ve been covering Android since its earliest Cupcake days, and in the near-decade that’s passed, there’s never been a moment of contentment about the speed of OS updates. Things seemed to be getting even worse late last year when the November batch of new devices came loaded with 2016’s Android Nougat. Android Oreo is now nearly eight months old — meaning we’re closer to the launch of the next version of Android than the present one — and LG is still preparing to roll out that software for its 2017 flagship LG G6.

Promises about Android device updates are as ephemeral as Snapchat messages. Before it became the world’s biggest smartphone vendor, Samsung was notorious for reneging on Android upgrade promises. Sony’s Xperia Z3 infamously fell foul of an incompatibility between its Snapdragon processor and Google’s Android Nougat requirements, leaving it prematurely stuck without major OS updates. Whenever you have so many loud voices involved — carriers and chip suppliers along with Google and device manufacturers — the outcome of their collaboration is prone to becoming exactly as haphazard and unpredictable as Android software upgrades have become.

Google is obviously aware of the situation, and it’s pushing its Android One initiative to give people reassurances when buying an Android phone. Android One guarantees OS updates for at least two years and security updates for at least three years. But, as with most things Android, Android One is only available on a few devices, most of which are of the budget variety. You won’t find the big global names of Samsung, Huawei, and LG supporting it.

Some Android OEMs snoop on you. This is an ecosystem problem rather than something rooted in the operating system itself, but it still tarnishes Android’s public reputation. Android phone manufacturers habitually load their devices with bloatware (stuff you really don’t want or need on your phone), and some have even taken to loading up spyware. Blu’s devices were yanked from Amazon for doing exactly that: selling phones that were vulnerable to remote takeovers and could be exploited to have the user’s text messages and call records clandestinely recorded. OnePlus also got in trouble for having an overly inquisitive user analytics program, which beamed personally identifiable information back to the company’s HQ without explicit user consent.

Huawei is perhaps the most famous example of a potentially conflicted Android phone manufacturer, with US intelligence agencies openly urging Americans to avoid Huawei phones for their own security. No hard evidence of Huawei doing anything improper has yet been presented; however, the US is not the only country to express concern about the company’s relationship with the Chinese government — and mistrust is based as much on smoke as it is on the actual fire.

Android remains vulnerable, thanks in part to Google’s permissiveness. It’s noteworthy that, when Facebook’s data breach became public and people started looking into what data Facebook had on them, only their Android calls and messages had been collected. Why not the iPhone? Because Apple’s walled-garden philosophy makes it much harder, practically impossible, for a user to inadvertently give consent to privacy-eroding apps like Facebook’s Messenger to dig into their devices. Your data is simply better protected on iOS, and even though Android has taken significant steps forward in making app permissions more granular and specific, it’s still comparatively easy to mislead users about what data an app is obtaining and for what purposes.

Android hardware development is chaotic and unreliable. For many, the blistering, sometimes chaotic pace of change in Android devices is part of the ecosystem’s charm. It’s entertaining to watch companies try all sorts of zany and unlikely designs, with only the best of them surviving more than a few months. But the downside of all this speed is lack of attention being paid to small details and long-term sustainability.

LG made a huge promotional push two years ago around its modular G5 flagship, which was meant to usher in a new accessory ecosystem and elevate the flexibility of LG Android devices to new heights. Within six months, that modular project was abandoned, leaving anyone who bought modular LG accessories — on the expectation of multigenerational support — high and dry. And speaking of dryness, Sony recently got itself in trouble for overpromising by calling its Xperia phones “waterproof.”

Samsung’s Galaxy Note 7 is the best and starkest example of the dire consequences that can result from a hurried and excessively ambitious hardware development cycle. The Note 7 had a fatal battery flaw that led many people’s shiny new Samsung smartphones to spontaneously catch fire. Compare that to the iPhone’s pace of usually incremental changes, implemented at predictable intervals and with excruciating fastidiousness.


Besides pledging to deliver OS updates that never come, claiming to have delivered security updates that never arrived, and taking liberties with your personal data, Android OEMs also have a tendency to exaggerate what their phones can actually do. They don’t collaborate on much, so despite pouring great effort into their individual Android software experiences, they also feed the old, long-standing complaint of a fragmented ecosystem.

The problem of trust with Android, much like the problem of trust in Facebook, is grounded in reality. It doesn’t matter that not all Android device makers engage in shady privacy invasion or overreaching marketing claims. The perception, like the Android brand, is collective.

https://www.theverge.com/2018/4/13/17233122/android-software-patch-trust-problem


World celebrates, cyber-snoops cry as TLS 1.3 internet crypto approved

 


Forward-secrecy protocol comes with the 28th draft

A much-needed update to internet security has finally passed at the Internet Engineering Task Force (IETF), after four years and 28 drafts.

Internet engineers meeting in London, England, approved the updated TLS 1.3 protocol despite a wave of last-minute concerns that it could cause networking nightmares.

TLS 1.3 won unanimous approval (well, one “no objection” amid the yeses), paving the way for its widespread implementation and use in software and products from Oracle’s Java to Google’s Chrome browser.

The new protocol aims to comprehensively thwart any attempts by the NSA and other eavesdroppers to decrypt intercepted HTTPS connections and other encrypted network packets. TLS 1.3 should also speed up secure communications thanks to its streamlined approach.

The critical nature of the protocol, however, has meant that progress has been slow and, on occasion, controversial. This time last year, Google paused its plan to support the new protocol in Chrome when an IT schools administrator in Maryland reported that a third of the 50,000 Chromebooks he managed bricked themselves after being updated to use the tech.

Most recently, banks and businesses complained that, thanks to the way the new protocol does security, they will be cut off from being able to inspect and analyze TLS 1.3 encrypted traffic flowing through their networks, and so potentially be at greater risk from attack.

Unfortunately, that self-same ability to decrypt secure traffic on your own network can also be potentially used by third parties to grab and decrypt communications.

An effort to effectively insert a backdoor into the protocol was met with disdain and some anger by internet engineers, many of whom pointed out that it will still be possible to introduce middleware to monitor and analyze internal network traffic.

Nope

The backdoor proposal did not move forward, meaning the internet as a whole will become more secure and faster, while banks and similar outfits will have to do a little extra work to accommodate and inspect TLS 1.3 connections as required.

At the heart of the change – and the complaints – are two key elements: forward secrecy, and ephemeral encryption keys.

TLS – standing for Transport Layer Security – basically works by creating a secure connection between a client and a server – your laptop, for example, and a company’s website. All this is done before any real information is shared – like credit card details or personal information.

Under TLS 1.2 this is a fairly lengthy process that can take as much as half a second:

  • The client says hi to the server and offers a range of strong encryption systems it can work with
  • The server says hi back, explains which encryption system it will use and sends an encryption key
  • The client takes that key and uses it to encrypt and send back a random series of letters
  • Together they use this exchange to create two new keys: a master key and a session key – the master key being stronger; the session key weaker.
  • The client then says which encryption system it plans to use for the weaker, session key – which allows data to be sent much faster because it doesn’t have to be processed as much
  • The server acknowledges that system will be used, and then the two start sharing the actual information that the whole exchange is about

TLS 1.3 speeds that whole process up by bundling several steps together:

  • The client says hi, here’s the systems I plan to use
  • The server gets back saying hi, ok let’s use them, here’s my key, we should be good to go
  • The client responds saying, yep that all looks good, here are the session keys
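The outcome of this exchange is easy to inspect from code. Here is a minimal sketch using Python’s standard `ssl` module (the host name is just an example, the call needs network access, and whether “TLSv1.3” comes back depends on both ends supporting it):

```python
import socket
import ssl

def negotiated_tls(host: str, port: int = 443):
    """Run a TLS handshake against host and report what was agreed."""
    ctx = ssl.create_default_context()  # modern defaults, certificate checks on
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # version() is e.g. "TLSv1.3"; cipher() is (name, protocol, bits)
            return tls.version(), tls.cipher()

if __name__ == "__main__":
    version, cipher = negotiated_tls("example.com")
    print(version, cipher[0])
```

Against a server that has rolled out the new protocol, `version()` reports "TLSv1.3" and the cipher will be one of the new suites, such as TLS_AES_256_GCM_SHA384.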

As well as being faster, TLS 1.3 is much more secure because it ditches many of the older encryption algorithms that TLS 1.2 supports, in which people have managed to find holes over the years. Effectively, the older crypto-systems potentially allowed miscreants to figure out what previous keys had been used (a failure of “forward secrecy”) and so decrypt previous conversations.

A little less conversation

For example, snoopers could, under TLS 1.2, force the exchange to use older and weaker encryption algorithms that they knew how to crack.

People using TLS 1.3 will only be able to use more recent systems that are much harder to crack – at least for now. Any effort to force the conversation to use a weaker 1.2 system will be detected and flagged as a problem.
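That downgrade protection can also be enforced explicitly at the client. A sketch with Python’s `ssl` module, pinning the minimum acceptable version so a snooper cannot force the conversation down to an older protocol (raising the floor to `TLSv1_3` would likewise refuse any 1.2-only server):

```python
import ssl

ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2: a forced downgrade below this
# floor makes the handshake fail instead of silently using weak crypto.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print("floor:", ctx.minimum_version.name)
print("TLS 1.3 available in this OpenSSL build:", ssl.HAS_TLSv1_3)
```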

Another very important advantage of TLS 1.3 – but also one that some security experts are concerned about – is called “0-RTT Resumption”, which effectively allows the client and server to remember if they have spoken before, and so forgo all the checks, using previous keys to start talking immediately.

That will make connections much faster, but the concern, of course, is that someone malicious could get hold of the “0-RTT Resumption” information and pose as one of the parties. Internet engineers are less worried about this risk – which would require getting access to a machine – than about the TLS 1.2 weaknesses that allowed people to hijack and listen in on a conversation.

In short, it’s a win-win but will require people to put in some effort to make it all work properly.

The big losers will be criminals and security services who will be shut out of secure communications – at least until they figure out a way to crack this new protocol. At which point the IETF will start on TLS 1.4. ®

Source: theregister.co.uk

 

 

An Overview of TLS 1.3 – Faster and More Secure

Updated on March 25, 2018

It has been over eight years since the last encryption protocol update, but the new TLS 1.3 has now been finalized as of March 21st, 2018. The exciting part for the WordPress community and customers here at Kinsta is that TLS 1.3 includes a lot of security and performance improvements. With the HTTP/2 protocol update in late 2015, and now TLS 1.3 in 2018, encrypted connections are now more secure and faster than ever. Read more below about the changes coming with TLS 1.3 and how it can benefit you as a WordPress site owner.

‘TLS 1.3: Faster, Safer, Better, Everything.’ 👍 — Filippo Valsorda

What is TLS?

TLS stands for Transport Layer Security and is the successor to SSL (Secure Sockets Layer). However, both these terms are commonly thrown around a lot online and you might see them both referred to as simply SSL. TLS provides secure communication between web browsers and servers. The connection itself is secure because symmetric cryptography is used to encrypt the data transmitted. The keys are uniquely generated for each connection and are based on a shared secret negotiated at the beginning of the session, also known as a TLS handshake. Many IP-based protocols, such as HTTPS, SMTP, POP3, and FTP, support TLS to encrypt data.
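That “shared secret negotiated at the beginning of the session” rests on the same principle as Diffie-Hellman key agreement. A toy sketch (demo-sized numbers, nothing like the 2048-bit-plus groups real TLS uses): both sides derive the same secret without it ever crossing the wire.

```python
import secrets

# Public parameters both sides agree on in the open.
p = 4294967291          # a small demo prime; real groups are far larger
g = 5

a = secrets.randbelow(p - 2) + 1   # client's private value, never sent
b = secrets.randbelow(p - 2) + 1   # server's private value, never sent

A = pow(g, a, p)        # client -> server
B = pow(g, b, p)        # server -> client

# Each side combines its own private value with the other's public value.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret   # identical, yet never transmitted
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the secret from those is the discrete-logarithm problem, which is what makes the exchange safe at real-world key sizes.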

Web browsers rely on an SSL certificate, digitally signed by a trusted certificate authority, to verify that a server is who it claims to be. Technically these are also known as TLS certificates, but most SSL providers stick with the term “SSL certificates” as this is generally more well known. SSL/TLS certificates provide the magic behind what many people simply know as the HTTPS that they see in their browser’s address bar.


TLS 1.3 vs TLS 1.2

The Internet Engineering Task Force (IETF) is the group that has been in charge of defining the TLS protocol, which has gone through many iterations. The previous version of TLS, TLS 1.2, was defined in RFC 5246 and has been in use for the past eight years by the majority of web browsers. As of March 21st, 2018, TLS 1.3 has now been finalized, after going through 28 drafts.

Companies such as Cloudflare are already making TLS 1.3 available to their customers. Filippo Valsorda gave a great talk (see presentation below) on the differences between TLS 1.2 and TLS 1.3. In short, the major benefits of TLS 1.3 over TLS 1.2 are faster speeds and improved security.

Speed Benefits of TLS 1.3

TLS and encrypted connections have always added a slight overhead when it comes to web performance. HTTP/2 definitely helped with this problem, but TLS 1.3 helps speed up encrypted connections even more with features such as TLS false start and Zero Round Trip Time (0-RTT).

To put it simply, with TLS 1.2, two round-trips were needed to complete the TLS handshake. With 1.3, only one round-trip is required, which in turn cuts the encryption latency in half. This helps those encrypted connections feel just a little bit snappier than before.
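The saving is easy to put in numbers. Assuming, for illustration, a 50 ms round-trip time to the server, the handshake cost before any application data can flow:

```python
rtt_ms = 50  # assumed client-server round-trip time

tls12_handshake = 2 * rtt_ms   # two round-trips before data can flow
tls13_handshake = 1 * rtt_ms   # one round-trip
tls13_resumed   = 0 * rtt_ms   # 0-RTT resumption: data rides the first message

print(tls12_handshake, tls13_handshake, tls13_resumed)  # 100 50 0
```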


TLS 1.3 handshake performance

Another advantage of TLS 1.3 is that, in a sense, it remembers! On sites you have previously visited, you can now send data in the first message to the server. This is called a “zero round trip” (0-RTT). And yes, this also results in improved load times.

Improved Security With TLS 1.3

A big problem with TLS 1.2 is that, when it’s not configured properly, it leaves websites vulnerable to attacks. TLS 1.3 now removes obsolete and insecure features from TLS 1.2, including the following:

  • SHA-1
  • RC4
  • DES
  • 3DES
  • AES-CBC
  • MD5
  • Arbitrary Diffie-Hellman groups — CVE-2016-0701
  • EXPORT-strength ciphers – Responsible for FREAK and LogJam
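One way to see this cleanup in practice is to audit which cipher suites a TLS stack will actually offer. A sketch with Python’s `ssl` module; on a reasonably current build, none of the dropped algorithms above should appear in the default cipher list (the TLS 1.3 suites show up under names such as TLS_AES_256_GCM_SHA384):

```python
import ssl

ctx = ssl.create_default_context()
names = [suite["name"] for suite in ctx.get_ciphers()]

# Substring check: "DES" also catches 3DES suites like DES-CBC3-SHA.
obsolete = ("RC4", "DES", "MD5", "EXPORT")
survivors = [n for n in names if any(bad in n for bad in obsolete)]
print(f"{len(names)} suites offered, {len(survivors)} obsolete ones remain")
```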

Because the protocol is simpler, administrators and developers are less likely to misconfigure it. Jessie Victors, a security consultant specializing in privacy-enhancing systems and applied cryptography, stated:

I am excited for the upcoming standard. I think we will see far fewer vulnerabilities and we will be able to trust TLS far more than we have in the past.

Google is also raising the bar: it has started warning users in Search Console that it is moving to TLS version 1.2, as TLS 1.0 is no longer safe enough, with a final deadline of March 2018.

TLS 1.3 Browser Support

With Chrome 63, TLS 1.3 is enabled for outgoing connections. Support for TLS 1.3 was added back in Chrome 56 and is also supported by Chrome for Android.

TLS 1.3 is enabled by default in Firefox 52 and above (including Quantum). They are retaining an insecure fallback to TLS 1.2 until they know more about server tolerance and the 1.3 handshake.


TLS 1.3 browser support

That being said, some SSL test services on the internet don’t yet support TLS 1.3, and neither do other browsers such as IE, Microsoft Edge, Opera, or Safari. Most of the remaining implementations are in development at the moment, so it will be a couple more months before they catch up.

Cloudflare has an excellent article on why TLS 1.3 isn’t in browsers yet.

Summary

Just like with HTTP/2, TLS 1.3 is another exciting protocol update that we can expect to benefit from for years to come. Not only will encrypted (HTTPS) connections become faster, but they will also be more secure. Here’s to moving the web forward!

Source: https://kinsta.com/blog/tls-1-3/

Forget Facebook



Source: Techcrunch.com

Cambridge Analytica may have used Facebook’s data to influence your political opinions. But why does least-liked tech company Facebook have all this data about its users in the first place?

Let’s put aside Instagram, WhatsApp and other Facebook products for a minute. Facebook has built the world’s biggest social network. But that’s not what they sell. You’ve probably heard the internet saying “if a product is free, it means that you are the product.”

And it’s particularly true in this case because Facebook is the second biggest advertising company in the world, behind Google. During the last quarter of 2017, Facebook reported $12.97 billion in revenue, including $12.78 billion from ads.

That’s 98.5 percent of Facebook’s revenue coming from ads.

Ads aren’t necessarily a bad thing. But Facebook has reached ad saturation in the newsfeed. So the company has two options — creating new products and ad formats, or optimizing those sponsored posts.


This isn’t a zero-sum game — Facebook has been doing both at the same time. That’s why you’re seeing more ads on Instagram and Messenger. And that’s also why ads on Facebook seem more relevant than ever.

If Facebook can show you relevant ads and you end up clicking more often on those ads, then advertisers will pay Facebook more money.

So Facebook has been collecting as much personal data about you as possible — it’s all about showing you the best ad. The company knows your interests, what you buy, where you go and who you’re sleeping with.

You can’t hide from Facebook

Facebook’s terms and conditions are a giant lie. They are purposely misleading, too long and too broad. So you can’t just read the company’s terms of service and understand what it knows about you.

That’s why some people have been downloading their Facebook data. You can do it too, it’s quite easy. Just head over to your Facebook settings and click the tiny link that says “Download a copy of your Facebook data.”

In that archive file, you’ll find your photos, your posts, your events, etc. But if you keep digging, you’ll also find your private messages on Messenger (by default, nothing is encrypted).

And if you keep digging a bit more, chances are you’ll also find your entire address book and even metadata about your SMS messages and phone calls.

All of this is by design and you agreed to it. Facebook has unified terms of service and shares user data across all its apps and services (except WhatsApp data in Europe for now). So if you follow a clothing brand on Instagram, you could see an ad from this brand on Facebook.com.

Messaging apps are privacy traps

But Facebook has also been using this trick quite a lot with Messenger. You might not remember, but the on-boarding experience on Messenger is really aggressive.

On iOS, the app shows you a fake permission popup to access your address book that says “Ok” or “Learn More”. The company is using a fake popup because you can’t ask for permission twice.

There’s a blinking arrow below the OK button.

If you click on “Learn More”, you get a giant blue button that says “Turn On”. Everything about this screen is misleading and Messenger tries to manipulate your emotions.

“Messenger only works when you have people to talk to,” it says. Nobody wants to be lonely, that’s why Facebook implies that turning on this option will give you friends.

Even worse, it says “if you skip this step, you’ll need to add each contact one-by-one to message them.” This is simply a lie as you can automatically talk to your Facebook friends using Messenger without adding them one-by-one.


If you tap on “Not Now”, Messenger will show you a fake notification every now and then to push you to enable contact syncing. If you tap on yes and disable it later, Facebook still keeps all your contacts on its servers.

On Android, you can let Messenger manage your SMS messages. Of course, you guessed it, Facebook uploads all your metadata. Facebook knows who you’re texting, when, how often.

Even if you disable it later, Facebook will keep this data for later reference.

But Facebook doesn’t stop there. The company knows a lot more about you than what you can find in your downloaded archive. The company asks you to share your location with your friends. The company tracks your web history on nearly every website on earth using embedded JavaScript.

But my favorite thing is probably peer-to-peer payments. In some countries, you can pay back your friends using Messenger. It’s free! You just have to add your card to the app.

It turns out that Facebook also buys data about your offline purchases. The next time you pay for a burrito with your credit card, Facebook will learn about this transaction and match this credit card number with the one you added in Messenger.

In other words, Messenger is a great Trojan horse designed to learn everything about you.

And the next time an app asks you to share your address book, there’s a 99-percent chance that this app is going to mine your address book to get new users, spam your friends, improve ad targeting and sell email addresses to marketing companies.

I could say the same thing about all the other permission popups on your phone. Be careful when you install an app from the Play Store or open an app for the first time on iOS. It’s always easier to enable something when a feature doesn’t work without it, but that convenience is exactly how Facebook ends up knowing everything about you.

GDPR to the rescue

There’s one last hope. And that hope is GDPR. I encourage you to read TechCrunch writer Natasha Lomas’s excellent explanation of GDPR to understand what the European regulation is all about.

Many of the misleading things that are currently happening at Facebook will have to change. You can’t force people to opt in the way Messenger does. Data collection should be minimized to essential features. And Facebook will have to explain to its users why it needs all this data.

If Facebook doesn’t comply, the company will have to pay up to 4 percent of its global annual turnover. But that doesn’t stop you from actively reclaiming your online privacy right now.

You can’t be invisible on the internet, but you have to be conscious about what’s happening behind your back. Every time a company asks you to tap OK, think about what’s behind this popup. You can’t say that nobody told you.


What is GDPR – General Data Protection Regulation

Source: Techcrunch.com

European Union lawmakers proposed a comprehensive update to the bloc’s data protection and privacy rules in 2012.

Their aim: To take account of seismic shifts in the handling of information wrought by the rise of the digital economy in the years since the prior regime was penned — all the way back in 1995 when Yahoo was the cutting edge of online cool and cookies were still just tasty biscuits.

Here’s the EU’s executive body, the Commission, summing up the goal:

The objective of this new set of rules is to give citizens back control over their personal data, and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market which the Commission has prioritised. The reform will allow European citizens and businesses to fully benefit from the digital economy.

In even shorter form: the EC’s theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

The General Data Protection Regulation (aka GDPR) was agreed after more than three years of negotiations between the EU’s various institutions.

It’s set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates, such as the UK’s new Data Protection Bill. (Yes, despite the fact that the country is currently in the process of (br)exiting the EU, the government has nonetheless committed to implementing the regulation, because it needs to keep EU-UK data flowing freely in the post-Brexit future. That gives an early indication of the pulling power of GDPR.)

Meanwhile businesses operating in the EU are being bombarded with ads from a freshly energized cottage industry of ‘privacy consultants’ offering to help them get ready for the new regs — in exchange for a service fee. It’s definitely a good time to be a law firm specializing in data protection.

GDPR is a significant piece of legislation whose full impact will clearly take some time to shake out. In the meanwhile, here’s our guide to the major changes incoming and some potential impacts.

Data protection + teeth

A major point of note right off the bat is that GDPR does not merely apply to EU businesses; any entities processing the personal data of EU citizens need to comply. Facebook, for example — a US company that handles massive amounts of Europeans’ personal data — is going to have to rework multiple business processes to comply with the new rules. Indeed, it’s been working on this for a long time already.

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support GDPR compliance — specifying this included “senior executives from all product teams, designers and user experience/testing executives, policy executives, legal executives and executives from each of the Facebook family of companies”.

“Dozens of people at Facebook Ireland are working full time on this effort,” it said, noting too that the data protection team at its European HQ (in Dublin, Ireland) would be growing by 250% in 2017. It also said it was in the process of hiring a “top quality data protection officer” — a position the company appears to still be taking applications for.

The new EU rules require organizations to appoint a data protection officer if they process sensitive data on a large scale (which Facebook very clearly does) or collect info on many consumers, such as by performing online behavioral tracking. But, really, which online businesses aren’t doing that these days?

The extra-territorial scope of GDPR casts the European Union as a global pioneer in data protection — and some legal experts suggest the regulation will force privacy standards to rise outside the EU too.

Sure, some US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes and treating personal data obtained from different geographies differently, rather than streamlining everything under a GDPR-compliant process. But doing so means managing multiple data regimes. And at the very least it runs the risk of bad PR if you’re outed as deliberately offering a lower privacy standard to your home users vs customers abroad.

Ultimately, it may be easier (and less risky) for businesses to treat GDPR as the new ‘gold standard’ for how they handle all personal data, regardless of where it comes from.

And while not every company harvests Facebook levels of personal data, almost every company harvests some personal data. So for those with customers in the EU, GDPR cannot be ignored. At the very least, businesses will need to carry out a data audit to understand their risks and liabilities.

Privacy experts suggest that the really big change here is around enforcement. Because while the EU has had long established data protection standards and rules — and treats privacy as a fundamental right — its regulators have lacked the teeth to command compliance.

But now, under GDPR, financial penalties for data protection violations step up massively.

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).
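The tiering is simple enough to sketch in code (illustrative only; actual fines are set case by case by regulators, up to these caps):

```python
def max_gdpr_fine(global_annual_turnover_eur: float, tier: str = "upper") -> float:
    """Cap on a GDPR fine: the greater of a percentage of global annual
    turnover or a fixed floor, depending on the infringement tier."""
    # Upper tier: 4% of turnover or EUR 20M, whichever is greater.
    # Lower tier: 2% of turnover or EUR 10M, whichever is greater.
    pct, floor = (0.04, 20e6) if tier == "upper" else (0.02, 10e6)
    return max(pct * global_annual_turnover_eur, floor)

print(max_gdpr_fine(90e9))    # Google-scale turnover: the 4% term dominates
print(max_gdpr_fine(100e6))   # mid-size firm: the EUR 20M floor applies
```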

This really is a massive change. Because while data protection agencies (DPAs) in different EU Member States can impose financial penalties for breaches of existing data laws, those fines are relatively small — especially set against the revenues of the private sector entities that are getting sanctioned.

In the UK, for example, the Information Commissioner’s Office (ICO) can currently impose a maximum fine of just £500,000. Compare that to the annual revenue of tech giant Google (~$90BN) and you can see why a much larger stick is needed to police data processors.

It’s not necessarily the case that individual EU Member States are getting stronger privacy laws as a consequence of GDPR (in some instances countries have arguably had higher standards in their domestic law). But the beefing up of enforcement that’s baked into the new regime means there’s a better opportunity for DPAs to start to bark and bite like proper watchdogs.

By inflating the financial risks around handling personal data, GDPR should naturally drive up standards, because privacy laws are suddenly a whole lot more costly to ignore.

More types of personal data that are hot to handle

So what is personal data under GDPR? It’s any information relating to an identified or identifiable person (in regulator-speak, people are known as ‘data subjects’).

‘Processing’, meanwhile, can mean any operation performed on personal data — from storing it to structuring it to feeding it to your AI models. (GDPR also includes some provisions specifically related to decisions generated as a result of automated data processing, but more on that below.)

A new provision concerns children’s personal data, with the regulation setting a default age limit of 16 on kids’ ability to consent to their data being processed. However, individual Member States can choose (and some have) to derogate from this by writing a lower age limit into their laws.

GDPR sets a hard floor of 13 years old, though (Member States cannot set the limit any lower), making 13 the de facto minimum age for children to be able to sign up to digital services. So the impact on teens’ social media habits seems likely to be relatively limited.

The new rules generally expand the definition of personal data — so it can include information such as location data, online identifiers (such as IP addresses) and other metadata. So again, this means businesses really need to conduct an audit to identify all the types of personal data they hold. Ignorance is not compliance.

GDPR also encourages the use of pseudonymization — such as, for example, encrypting personal data and storing the encryption key separately and securely — as a pro-privacy, pro-security technique that can help minimize the risks of processing personal data. Although pseudonymized data is likely to still be considered personal data, certainly where a risk of re-identification remains. So it does not get a general pass from requirements under the regulation.
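One common pseudonymization technique, keyed hashing of direct identifiers with the key held in a separate secure store, can be sketched in a few lines. This is a hypothetical illustration, not a compliance recipe; the `SECRET_KEY` placeholder stands in for a properly managed secret:

```python
import hmac
import hashlib

# The key must live in a separate, access-controlled store (e.g. a secrets
# manager or HSM): anyone holding it can re-link pseudonyms to identities.
SECRET_KEY = b"load-me-from-a-separate-secure-store"  # illustrative placeholder

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (email, user ID) with a stable pseudonym.

    The same input always yields the same pseudonym, so records can still be
    joined and analyzed, but without the key the mapping cannot be reversed
    or rebuilt. Note: under GDPR this is likely still personal data while a
    re-identification risk remains.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Records keep their analytical value without carrying the raw identifier.
record = {"user": pseudonymize("alice@example.com"), "purchases": 3}
```

The design choice here is determinism: unlike random tokens, a keyed hash lets separately collected records about the same person be linked without a lookup table, while the separation of the key from the data is what makes it pseudonymization rather than plain hashing.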

Data has to be rendered truly anonymous to be outside the scope of the regulation. (And given how often ‘anonymized’ data-sets have been shown to be re-identifiable, relying on any anonymizing process to be robust enough to have zero risk of re-identification seems, well, risky.)

To be clear, given GDPR’s running emphasis on data protection via data security, the regulation implicitly encourages the use of encryption as more than a risk-reduction technique: as a way for data controllers to fulfill its wider requirement to use “appropriate technical and organisational measures” proportionate to the risk of the personal data they are processing.

The incoming data protection rules apply to both data controllers (i.e. entities that determine the purpose and means of processing personal data) and data processors (entities that are responsible for processing data on behalf of a data controller — aka subcontractors).

Indeed, data processors have some direct compliance obligations under GDPR, and can also be held equally responsible for data violations, with individuals able to bring compensation claims directly against them, and DPAs able to hand them fines or other sanctions.

So the intent of the regulation is that there be no diminishing of responsibility down the chain of data-handling subcontractors. GDPR aims to have every link in the processing chain be a robust one.

For companies that rely on a lot of subcontractors to handle data operations on their behalf there’s clearly a lot of risk assessment work to be done.

As noted above, there is a degree of leeway for EU Member States in how they implement some parts of the regulation (such as with the age of data consent for kids).

Consumer protection groups are calling for the UK government to include an optional GDPR provision on collective data redress in its DP bill, for example — a call the government has so far rebuffed.

But the wider aim is for the regulation to harmonize data protection rules as much as possible across all Member States, reducing the regulatory burden on digital businesses trading around the bloc.

On data redress, European privacy campaigner Max Schrems — most famous for his legal challenge to US government mass surveillance practices that resulted in a 15-year-old data transfer arrangement between the EU and US being struck down in 2015 — is currently running a crowdfunding campaign to set up a not-for-profit privacy enforcement organization to take advantage of the new rules and pursue strategic litigation on commercial privacy issues.

Schrems argues it’s simply not viable for individuals to take big tech giants to court to try to enforce their privacy rights, so thinks there’s a gap in the regulatory landscape for an expert organization to work on EU citizens’ behalf. Not just pursuing strategic litigation in the public interest but also promoting industry best practice.

The proposed data redress body — called noyb; short for: ‘none of your business’ — is being made possible because GDPR allows for collective enforcement of individuals’ data rights. And that provision could be crucial in spinning up a centre of enforcement gravity around the law. Because despite the position and role of DPAs being strengthened by GDPR, these bodies will still inevitably have limited resources vs the scope of the oversight task at hand.

Some may also lack the appetite to take on a fully fanged watchdog role. So campaigning consumer and privacy groups could certainly help pick up any slack.

Privacy by design and privacy by default

Another major change incoming via GDPR is ‘privacy by design’ no longer being just a nice idea; privacy by design and privacy by default become firm legal requirements.

This means there’s a requirement on data controllers to minimize processing of personal data — limiting activity to only what’s necessary for a specific purpose, carrying out privacy impact assessments and maintaining up-to-date records to prove out their compliance.

Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable. (And we’ve sure seen a whole lot of those hellish things in tech.) The core idea is that consent should be an ongoing, actively managed process; not a one-off rights grab.

As the UK’s ICO tells it, consent under GDPR for processing personal data means offering individuals “genuine choice and control” (for sensitive personal data the law requires a higher standard still — of explicit consent).

There are other legal bases for processing personal data under GDPR — such as contractual necessity; or compliance with a legal obligation under EU or Member State law; or for tasks carried out in the public interest — so consent is not always necessary in order to process someone’s personal data. But there must always be an appropriate legal basis for each processing activity.

Transparency is another major obligation under GDPR, which expands the notion that personal data must be lawfully and fairly processed to include a third principle of accountability. Hence the emphasis on data controllers needing to clearly communicate with data subjects — such as by informing them of the specific purpose of the data processing.

The obligation on data handlers to maintain scrupulous records of what information they hold, what they are doing with it, and how they are legally processing it, is also about being able to demonstrate compliance with GDPR’s data processing principles.

But — on the plus side for data controllers — GDPR removes the requirement to submit notifications to local DPAs about data processing activities. Instead, organizations must maintain detailed internal records — which a supervisory authority can always ask to see.

It’s also worth noting that companies processing data across borders in the EU may face scrutiny from DPAs in different Member States if they have users there (and are processing their personal data).

Although the GDPR sets out a so-called ‘one-stop-shop’ principle — that there should be a “lead” DPA to co-ordinate supervision between any “concerned” DPAs — this does not mean that, once it applies, a cross-EU-border operator like Facebook is only going to be answerable to the concerns of the Irish DPA.

Indeed, Facebook’s tactic of only claiming to be under the jurisdiction of a single EU DPA looks to be on borrowed time. And the one-stop-shop provision in the GDPR seems more about creating a co-operation mechanism to allow multiple DPAs to work together in instances where they have joint concerns, rather than offering a way for multinationals to go ‘forum shopping’ — which the regulation does not permit (per WP29 guidance).

Another change: Privacy policies that contain vague phrases like ‘We may use your personal data to develop new services’ or ‘We may use your personal data for research purposes’ will not pass muster under the new regime. So a wholesale rewriting of vague and/or confusingly worded T&Cs is something Europeans can look forward to this year.

Add to that, any changes to privacy policies must be clearly communicated to the user on an ongoing basis. Which means no more stale references in the privacy statement telling users to ‘regularly check for changes or updates’ — that just won’t be workable.

The onus is firmly on the data controller to keep the data subject fully informed of what is being done with their information. (Which almost implies that good data protection practice could end up tasting a bit like spam, from a user PoV.)

The overall intent behind GDPR is to inculcate an industry-wide shift in perspective regarding who ‘owns’ user data — disabusing companies of the notion that other people’s personal information belongs to them just because it happens to be sitting on their servers.

“Organizations should acknowledge they don’t exist to process personal data but they process personal data to do business,” is how Gartner research director Bart Willemsen sums this up. “Where there is a reason to process the data, there is no problem. Where the reason ends, the processing should, too.”

The data protection officer (DPO) role that GDPR brings in as a requirement for many data handlers is intended to help them ensure compliance.

This officer, who must report to the highest level of management, is intended to operate independently within the organization, with warnings to avoid an internal appointment that could generate a conflict of interests.

Which types of organizations face the greatest liability risks under GDPR? “Those who deliberately seem to think privacy protection rights is inferior to business interest,” says Willemsen, adding: “A recent example would be Uber, regulated by the FTC and sanctioned to undergo 20 years of auditing. That may hurt perhaps similar, or even more, than a one-time financial sanction.”

“Eventually, the GDPR is like a speed limit: There not to make money off of those who speed, but to prevent people from speeding excessively as that prevents (privacy) accidents from happening,” he adds.

Another right to be forgotten

Under GDPR, people who have consented to their personal data being processed also have a suite of associated rights — including the right to access data held about them (a copy of the data must be provided to them free of charge, typically within a month of a request); the right to request rectification of incomplete or inaccurate personal data; the right to have their data deleted (another so-called ‘right to be forgotten’ — with some exemptions, such as for exercising freedom of expression and freedom of information); the right to restrict processing; the right to data portability (where relevant, a data subject’s personal data must be provided free of charge and in a structured, commonly used and machine readable form).

All these rights make it essential for organizations that process personal data to have systems in place which enable them to identify, access, edit and delete individual user data — and be able to perform these operations quickly, with a general 30-day time limit for responding to individual rights requests.
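As a rough sketch of what such a system implies, here is a hypothetical minimal registry supporting access, rectification and erasure requests, with the one-month response deadline tracked per request. All names here are illustrative; a real system would also need identity verification, audit logging, and handling for the regulation's exemptions:

```python
from datetime import datetime, timedelta

class UserDataRegistry:
    """Hypothetical sketch of a per-user personal data store with GDPR-style
    operations: access (export a copy), rectification, and erasure."""

    RESPONSE_WINDOW = timedelta(days=30)  # general time limit for rights requests

    def __init__(self):
        self._records = {}   # user_id -> dict of personal data
        self._requests = []  # (user_id, kind, due_date)

    def store(self, user_id, data):
        self._records.setdefault(user_id, {}).update(data)

    def log_request(self, user_id, kind, received=None):
        # Track the deadline so no rights request silently goes stale.
        received = received or datetime.now()
        self._requests.append((user_id, kind, received + self.RESPONSE_WINDOW))

    def export(self, user_id):
        # Right of access: return a copy in a structured, machine-readable form.
        return dict(self._records.get(user_id, {}))

    def rectify(self, user_id, field, value):
        # Right to rectification of incomplete or inaccurate data.
        self._records.setdefault(user_id, {})[field] = value

    def erase(self, user_id):
        # Right to erasure (subject to the regulation's exemptions).
        self._records.pop(user_id, None)
```

The operational point is that identify/access/edit/delete must be first-class operations keyed by individual, which is exactly what many legacy data warehouses, built for aggregate analytics rather than per-person lookup, struggle with.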

GDPR also gives people who have consented to their data being processed the right to withdraw consent at any time. Let that one sink in.

Data controllers are also required to inform users about this right — and offer easy ways for them to withdraw consent. So no, you can’t bury a ‘revoke consent’ option in tiny lettering, five sub-menus deep. Nor can WhatsApp offer any more time-limited opt-outs for sharing user data with its parent multinational, Facebook. Users will have the right to change their mind whenever they like.

The EU lawmakers’ hope is that this suite of rights for consenting consumers will encourage respectful use of their data — given that, well, if you annoy consumers they can just tell you to sling yer hook and ask for a copy of their data to plug into your rival service to boot. So we’re back to that fostering trust idea.

Add in the ability for third party organizations to use GDPR’s provision for collective enforcement of individual data rights and there’s potential for bad actors and bad practice to become the target for some creative PR stunts that harness the power of collective action — like, say, a sudden flood of requests for a company to delete user data.

Data rights and privacy issues are certainly going to be in the news a whole lot more.

Getting serious about data breaches

But wait, there’s more! Another major change under GDPR relates to security incidents — aka data breaches (something else we’ve seen an awful, awful lot of in recent years) — with the regulation doing what the US still hasn’t been able to: Bringing in a universal standard for data breach disclosures.

GDPR requires that data controllers report any security incidents where personal data has been lost, stolen or otherwise accessed by unauthorized third parties to their DPA within 72 hours of them becoming aware of it. Yes, 72 hours. Not the best part of a year, like, er, Uber.

If a data breach is likely to result in a “high risk of adversely affecting individuals’ rights and freedoms” the regulation also implies you should ‘fess up even sooner than that — without “undue delay”.

Only in instances where a data controller assesses that a breach is unlikely to result in a risk to the rights and freedoms of “natural persons” are they exempt from the breach disclosure requirement (though they still need to document the incident internally, and record their reason for not informing a DPA in a document that DPAs can always ask to see).

“You should ensure you have robust breach detection, investigation and internal reporting procedures in place,” is the ICO’s guidance on this. “This will facilitate decision-making about whether or not you need to notify the relevant supervisory authority and the affected individuals.”

The new rules generally put strong emphasis on data security and on the need for data controllers to ensure that personal data is only processed in a manner that ensures it is safeguarded.

Here again, GDPR’s requirements are backed up by the risk of supersized fines. So suddenly sloppy security could cost your business big — not only in reputation terms, as now, but on the bottom line too. So it really must be a C-suite concern going forward.

Nor is subcontracting a way to shirk your data security obligations. Quite the opposite. Having a written contract in place between a data controller and a data processor was a requirement before GDPR, but contract requirements are wider now, and there are some specific terms that must be included in the contract as a minimum.

Breach reporting requirements must also be set out in the contract between processor and controller. If a data controller is using a data processor and it’s the processor that suffers a breach, they’re required to inform the controller as soon as they become aware. The controller then has the same disclosure obligations as per usual.

Essentially, data controllers remain liable for their own compliance with GDPR. And the ICO warns they must only appoint processors who can provide “sufficient guarantees” that the regulatory requirements will be met and the rights of data subjects protected.

tl;dr, be careful who you subcontract to, and how.

Right to human review for some AI decisions

Article 22 of GDPR places certain restrictions on entirely automated decisions based on profiling individuals — but only in instances where these human-less acts have a legal or similarly significant effect on the people involved.

There are also some exemptions to the restrictions — where automated processing is necessary for entering into (or performance of) a contract between an organization and the individual; or where it’s authorized by law (e.g. for the purposes of detecting fraud or tax evasion); or where an individual has explicitly consented to the processing.

In its guidance, the ICO specifies that the restriction only applies where the decision has a “serious negative impact on an individual”.

Suggested examples of the types of AI-only decisions that will face restrictions are the automatic refusal of an online credit application or e-recruiting practices without human intervention.

The provision on automated decisions is not a new right, having been brought over from the 1995 data protection directive. But it has attracted fresh attention — given the rampant rise of machine learning technology — as a potential route for GDPR to place a check on the power of AI blackboxes to determine the trajectory of humankind.

The real-world impact will probably be rather more prosaic, though. And experts suggest it does not seem likely that the regulation, as drafted, equates to a right for people to be given detailed explanations of how algorithms work.

Though as AI proliferates and touches more and more decisions, and as its impacts on people and society become ever more evident, pressure may well grow for proper regulatory oversight of algorithmic blackboxes.

In the meanwhile, what GDPR does in instances where restrictions apply to automated decisions is require data controllers to provide some information to individuals about the logic of an automated decision.

They are also obliged to take steps to prevent errors, bias and discrimination. So there’s a whiff of algorithmic accountability. Though it may well take court and regulatory judgements to determine how stiff those steps need to be in practice.

Individuals do also have a right to challenge and request a (human) review of an automated decision in the restricted class.

Here again the intention is to help people understand how their data is being used. And to offer a degree of protection (in the form of a manual review) if a person feels unfairly and harmfully judged by an AI process.

The regulation also places some restrictions on the practice of using data to profile individuals if the data itself is sensitive data — e.g. health data, political belief, religious affiliation etc — requiring explicit consent for doing so. Or else that the processing is necessary for substantial public interest reasons (and lies within EU or Member State law).

While profiling based on other types of personal data does not require obtaining consent from the individuals concerned, it still needs a legal basis and there is still a transparency requirement — which means service providers will need to inform users they are being profiled, and explain what it means for them.

And people also always have the right to object to profiling activity based on their personal data.

 

Source: https://techcrunch.com/2018/01/20/wtf-is-gdpr/

Iridium – The Making of the Largest Satellite Constellation in History

How Iridium rose from its ashes to launch the era of satellite megaprojects.

An artist’s rendering of an Iridium Next satellite in orbit. Image: Iridium

Matt Desch didn’t set out to change the world, but he just might do it anyway. As the CEO of Iridium, the only company that provides satellite communications to every inch of the globe, he is at the helm of Next, a fleet of telecommunications satellites that is arguably one of the most ambitious space projects ever undertaken.

By this time next year, Iridium will have sent 75 Next satellites into orbit. Each one will be replacing a first-generation Iridium satellite that has been in orbit for almost two decades. Once these new satellites are in place, they will establish radio contact with one another over thousands of miles of empty space to create the largest and most complex mesh network ever placed in orbit.

Like the first-generation network, these Iridium Next satellites will provide critical phone and data services to everyone from scientists in Antarctica and military contractors in the Middle East, to drug mules in Central America and climbers on the summit of Mount Everest.

A payload capsule filled with 10 Iridium Next satellites on board a Falcon 9 rocket ahead of the Iridium-3 launch in October 2017. Image: Daniel Oberhaus/Motherboard

But the Next constellation also comes with a suite of new features. Not only will it provide an orbital backbone for the expanding Internet of Things, the satellite network will also be tracking planes and ships in regions they’ve never been tracked before. Almost 70 percent of the Earth isn’t covered by radar, which is why the ill-fated Malaysia Airlines Flight 370 was able to disappear into the ocean in 2014 without a trace. Iridium hopes to make these sorts of tragedies obsolete.

That’s if everything goes according to plan, of course, and Iridium isn’t exactly known for its successes. A little under 20 years ago, it was the subject of one of the largest corporate bankruptcies in history. Indeed, the original fleet of satellites that Iridium is replacing with its Next constellation came within hours of a fiery demise after the company, having filed for bankruptcy, decided to cut its losses and deorbit them.

In this sense, each Iridium Next launch not only represents the culmination of several years’ worth of intensive design, research, and testing by an international team of scientists and engineers, but also the dogged pursuit of a (quite literally) lofty goal in the face of overwhelmingly bad odds. In order to get a better understanding of the stakes, I followed the next generation of Iridium satellites from their birth in a warehouse in the Arizona desert to their delivery to orbit nearly 500 miles overhead.

*

Last October, I met Desch, who has been Iridium’s CEO for nearly 12 years, for breakfast at a restaurant in Solvang, California. We had just finished watching the third batch of ten Iridium satellites get delivered to orbit aboard a SpaceX Falcon 9 rocket that launched from nearby Vandenberg Air Force Base. It was well before noon, but both Desch and I had been awake for hours.

A SpaceX rocket carrying the third batch of Iridium satellites launches from Vandenberg Air Force base on October 9, 2017. Image: Daniel Oberhaus/Motherboard

As we devoured our avocado toast, Iridium engineers on the East Coast were busy maneuvering the satellites into their orbital planes while Desch explained why SpaceX and Iridium are ideal partners.

“In many ways, the Falcon 9 was built around the Iridium payload because we were the first ones to work with SpaceX,” Desch told me. “Launching is about a third of our costs and if SpaceX wasn’t around, I just couldn’t afford it.”

On the other hand, Iridium, which was SpaceX’s first customer and remains its largest, provides SpaceX with a major source of revenue that it won’t find anywhere else in the commercial sector. As Desch bluntly put it, “Nobody launches 75 satellites.”

Desch’s faith in Elon Musk’s rockets hasn’t wavered over the years, but watching the Falcon 9 rocket explode on the launch pad in September 2016 made an impression on him. The first ten Iridium satellites had been scheduled to head to orbit a few weeks later, but the explosion delayed deployment until last January.

SpaceX touts its “flight proven” rockets as a cost-saving measure, but the extra risk that comes with flying on a used rocket is part of the reason why Iridium’s original contract with the private spaceflight company specified that its cargo would never be flown on a Falcon 9 booster that had previously been to space. On the other hand, the rocket that claimed a Facebook satellite that September had been brand new, a forceful reminder that when it comes to space travel there are no guarantees.

Flying “used” would save Iridium some money, but it’s a gamble: Each of Iridium’s payloads is worth a quarter billion dollars and the loss of even a single one would be disastrous. After lengthy talks with SpaceX and his insurance providers, however, Desch recently opted to send the fourth and fifth batches of Iridium satellites on flight-proven rockets. Given the high stakes of each launch, this speaks volumes about his confidence in Musk’s company.

On December 22, SpaceX carried a batch of Iridium satellites to orbit on a previously flown rocket for the first time. Whatever Desch’s anxieties were about flying used, they proved to be unfounded—the launch went flawlessly.

How to Build a Satellite

Iridium’s journey to space begins in a nondescript building in pastoral Gilbert, Arizona, a Phoenix suburb. This building is home to Orbital ATK, the aerospace company overseeing the manufacturing process of the Iridium Next constellation, and sits just across the street from a farm where a handful of cows spend their days idly chewing cud. The sterile hallways of the Orbital ATK building are cavernous and lined with doors plastered with warnings that these rooms house strictly controlled materials.

Technicians prepare an Iridium satellite at the Orbital factory. Image: Iridium

Under the International Traffic in Arms Regulations (ITAR), many satellite parts are subject to the same stringent oversight as weapons like tanks and hand grenades. These technologies are usually off limits to civilian eyes and can’t be shared with foreign governments. This regulation has been a pain in the side of the space industry for years, but for a visitor to Orbital ATK’s facilities like myself, it mostly meant I wouldn’t be bringing a camera inside.

The facility consists of five massive clean rooms for storing and assembling satellites. Orbital ATK has registered each of these as a Foreign Trade Zone, a legal distinction that is the industrial equivalent of a duty free shop at an airport and saves the company from paying steep taxes on imported parts. The largest FTZ is reserved for the manufacture of Iridium satellites, consists of 18 workstations, and has technicians in cleanroom suits working on site 24/7.

Most satellites are unique and their production is a painstaking process that can last years. This wasn’t going to work for Iridium: The company needed to manufacture 81 satellites (75 to be placed in orbit with six remaining on the ground as spares) in the amount of time it usually takes to make one. In short, Iridium tasked Orbital ATK with doing for aerospace what Henry Ford had done for the automobile.

This was a challenge for a relatively small company, but it had some experience in the area. In the late 90s, Orbital also oversaw the construction of the first constellation of Iridium satellites, when the idea of mass producing satellites was totally unprecedented and deemed by many to be impossible.

Many of the engineers that worked on that first constellation are still at the company today, but this time the design of the Next satellites was overseen by the French aerospace company Thales.

Thales had a legacy satellite design that was adapted for the Iridium payloads, but according to the Orbital ATK and Thales engineers I spoke with, the design collaboration process could still be painstakingly slow due to ITAR restrictions. Often, Thales would send its preliminary designs to Orbital ATK, only to find a number of adjustments needed to be made, even though the exact nature of these adjustments was unclear due to restrictions on the sharing of component designs.

After a drawn-out revision process, the satellite designs were passed off to Orbital ATK, which began to manufacture seven of the planned 81 satellites. These seven satellites are the only ones that run the gamut of testing, which includes subjecting them to intense vibrations, electromagnetic interference and acoustic tests that Michael Pickett, Orbital ATK’s director of program management, described to me as “blasting the satellites with the biggest, baddest rock concert speakers you can imagine.”

The point of these tests was to validate the design process—if the satellites still worked after this mechanical torture, it meant the other 74 satellites that would pass through the assembly line should work fine, too.

After the design validation tests, Orbital ATK kicked its 18-station assembly line into high gear. Until the last one is finished, sometime in early 2018, the assembly line will see five to six satellites from start to completion each month. The process starts with assembling the different parts of the satellite body—called the “bus”—which is about the size of a Mini Cooper. Once the bus is completed, Orbital ATK technicians begin testing the satellite’s electronic components and communication modules, antenna alignment, and insulation.

Toward the end of the assembly line, each satellite is placed in a thermal chamber for 12 days and is subjected to extremely high and low temperatures to see how it will hold up in the hostile space environment. If the satellite survives, it progresses to station 15, where the star tracker that will be used to track the satellite’s position in orbit is installed. This station also holds sentimental value for the Orbital ATK engineers — it’s the point where a plate dedicating the craft to a specific employee or investor is installed.

10 Iridium satellites are loaded into a specially designed payload capsule to be placed atop a SpaceX Falcon 9 rocket. Image: Iridium

Next, the satellites’ fuel tanks are filled with helium and placed in an airtight tent that is pumped full of nitrogen. Because helium atoms are much smaller than nitrogen molecules, the engineers are able to detect any leaks that might have cropped up during the assembly process.

Finally, the solar panels that will serve as a power supply for the satellite are installed and the craft is basically ready for orbit. Iridium then has to decide whether or not the satellite will be one of the ten going up on the next SpaceX rocket. If not, it will join a few dozen other satellites stacked two-high and covered with black tarps inside a large storage room, until they’re ready for launch.

If a finished satellite is destined to go up on the next launch, its batteries will be removed from cold storage and installed on the craft at the last station. Here, the satellite’s software is uploaded to the craft and Iridium technicians at its Satellite Network Operations Center in Virginia establish contact with the satellite to make sure they will be able to communicate with the craft while it’s in orbit. The satellite is then loaded into a custom shipping container for its journey to Vandenberg Air Force Base in California, its final stop on Earth.

After liftoff, Iridium’s global network of technicians takes over to make sure the satellites are communicating with one another and the gateways on Earth as they are maneuvered into orbit—a process that can take up to a month. Once they are properly in orbit, the satellites will not only begin talking to one another and Iridium devices on the ground, but also one of the three gateways that link the satellite constellation to the telecommunications infrastructure that most of us use on a daily basis.

*

When I visited Iridium’s main gateway in Tempe, Arizona, it was a hive of activity. (The other two gateways are exclusively for the US Department of Defense and Russia, respectively.) Iridium technicians sat in a darkened control room watching satellites tear across a projection of the Earth as they monitored calls being routed through the network for any signs of an error.

Iridium’s main satellite network operations center in Virginia. Image: Iridium

According to Stuart Fankhauser, Iridium’s vice president of network operations, it wasn’t so long ago that the operating room was a dead zone as the company teetered on the brink of bankruptcy.

“I was one of the last guys here,” Fankhauser told me. “It was just me and four others, and we ran everything. It was very quiet, very eerie.”

Iridium was spun off as a subsidiary of Motorola in the 1990s. Its first-generation constellation was originally expected to cost around $3.5 billion, but by the time the satellites were functional, Motorola had sunk almost twice that much into the revolutionary project.

To make matters worse, when service launched in the late 90s there were hardly any customers for Iridium’s satellite phone service, the constellation’s raison d’être. This was partly due to the unwieldy size of the company’s satellite phone (affectionately known as “the brick”), but mostly a consequence of its cost: thousands of dollars for the phone and then around $7/minute to make a call. Before it filed for bankruptcy, Iridium had roughly 60,000 customers, but increasing cell phone coverage in the United States and Europe meant that the very populations that could afford the device had increasingly little need for it.

In August 1999, Motorola pulled the plug on its ambitious satellite project and Iridium filed for bankruptcy, just nine months after the satellite constellation went live. Unless another company bought it out—and by 1999, no investor in their right mind would touch this space-age Icarus—Iridium would be done for.

Enter Dan Colussy, the former president of Pan Am who had been monitoring Iridium’s troubles with great interest. Just hours before Motorola was scheduled to begin the deorbiting process, Colussy was busy negotiating terms with the Department of Defense, Saudi financiers, and Motorola executives that would allow him to formally purchase the company for around $35 million, a fraction of the billions of dollars the tech giant had sunk into it.

Iridium VP of satellite operations Walt Everetts with one of the company’s grounded test satellites in Tempe, Arizona. Image: Daniel Oberhaus/Motherboard

As detailed in Eccentric Orbits, journalist John Bloom’s tell-all account of Iridium’s unlikely rise and spectacular failure, Colussy, who didn’t respond to my requests for comment, knew the satellite constellation would be good for something; he just wasn’t sure what. Iridium was an unprecedented technological feat that simply seemed too valuable to allow it to burn up in the atmosphere. Sixteen years later, Colussy’s intuition is finally being proven correct, but Iridium’s employees told me it was an uphill battle to make it here.

“We had to be pretty scrappy back then,” Fankhauser told me as we walked through Iridium’s Tempe gateway facility. “We were burning our investors’ money so we bought supplies off eBay and liquidators from failing dotcoms. It was humbling.”

The gateway is one half of Iridium’s backend operations. The other half, a test facility just down the road, is constantly probing the satellite network for weaknesses or seeking ways to improve Iridium’s service. Inside the test facility are two large Faraday cages filled with every type of Iridium phone and IoT device that operates on the Iridium network.

These devices are used to call two partially disassembled satellites in another room that have been programmed to “think” they’re in orbit in order to test the load-bearing capacity of the operational network and the compatibility of various devices. On the day I visited, Iridium engineers had successfully managed to place 1,700 simultaneous calls through a single satellite, a company record.
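A load test like the one described—many simulated handsets pushing simultaneous calls through a single satellite to find its limits—can be sketched with a thread pool. Everything here is invented for illustration (the `SatelliteSim` stand-in, the capacity figure borrowed from the record above, the worker count); it shows the shape of such a test, not Iridium’s actual tooling:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

class SatelliteSim:
    """Stand-in for a bench satellite; tracks concurrent calls it carries."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.active = 0
        self.peak = 0
        self.lock = Lock()

    def place_call(self):
        with self.lock:
            if self.active >= self.capacity:
                return False  # call rejected: satellite saturated
            self.active += 1
            self.peak = max(self.peak, self.active)
        return True

    def end_call(self):
        with self.lock:
            self.active -= 1

def load_test(sat, attempts, workers=50):
    """Hammer the satellite with concurrent call attempts; count successes."""
    def one_call(_):
        ok = sat.place_call()
        if ok:
            sat.end_call()
        return ok
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(one_call, range(attempts)))

sat = SatelliteSim(capacity=1700)
completed = load_test(sat, attempts=2000)
print(completed, sat.peak)
```

With 50 workers against a 1,700-call capacity, every attempt succeeds; raising the worker count past capacity is what would start surfacing rejections.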

Walt Everetts is Iridium’s vice president of satellite operations and ground development and oversees the day-to-day activities at the test center. He showed me a large interactive map in the test center’s lobby that depicted calls on the network in real time. Small colored dots indicated different types of services: someone in Dubai calling someone in China or a sailor in the Atlantic checking their email. But the vast majority of the dots indicated machines talking to other machines.

Iridium satellites waiting to be loaded onto a Falcon 9 rocket. The white box at the top is the Aireon hosted payload, which will track planes in regions of the world that are outside of ground-based radar range. Image: Iridium

As Everetts and Desch both pointed out, the growth of the Internet of Things is a key reason why they believe Iridium will be a viable company this time around. Although facilitating calls between humans is still a core component of the company’s business, most of the network is used to route information between computers, whether these are tsunami monitoring buoys in the middle of the ocean, or chips implanted to track endangered animals. In fact, Desch said over half of Iridium’s almost 1 million subscriptions now come from IoT companies looking to connect their devices.

End of an Era

Iridium is one of the few companies in the aerospace sector, alongside SpaceX, that can claim to have anything close to a fan club. Aside from the customers actually using its products, there are a number of astroheads around the world cataloging a phenomenon known as Iridium flares. The original satellites were designed with a large reflective surface, meaning at certain angles the sunlight produces a bright flare that can be seen with the naked eye at night, even in light-polluted cities.

This passion was also expressed by Iridium technicians I spoke with, who had an uncanny habit of referring to the satellites in the same manner a parent might speak about their child. Indeed, each one has a name—Everetts has named two after his sons—and this adds a degree of gravitas to the deorbiting process, which can take anywhere from a few days to several months, depending on how much fuel is left in a satellite. At the time of writing, six of the original Iridium satellites launched in the late 90s have been successfully deorbited. Several others will meet their demise later this year.

Yet the end of the original Iridium network is also the beginning of something new—not just for the company, but for space exploration as a whole.

Today, companies like OneWeb, Boeing, and SpaceX are in a race to create massive broadband networks consisting of hundreds of satellites, but Iridium doesn’t see them as much of a threat. In the first place, most of these planned constellations are for consumer broadband, whereas Iridium provides specialized services. More importantly, however, none of these companies has even come close to getting those projects off the ground.

This race to take the internet to space calls to mind other ambitious and ill-fated satellite telecommunications companies of the 90s, like Teledesic and Globalstar. Both collapsed after Iridium’s bankruptcy demonstrated that there wasn’t a real market for satellite phones. It’s highly likely this new crop is carefully watching Iridium, just like the space industry did back in the 90s, to see whether its ambitious program will work. So far, everything has gone off without a hitch.

But 35 more satellites still need to be launched. And there is plenty of opportunity for error. The gamble is a big one, and so is the reward—being the first, and only, company to provide communications coverage to every inch of the globe.

“We’re the only company that’s ever launched this many satellites and we’re the only one still doing it today,” Desch told me. “But because of our success, we’ve inspired a whole new industry: This is the era of Low Earth Orbit satellite megaconstellations.”

Source: https://motherboard.vice.com/en_us/article/d34bmk/iridium-next-satellite-constellation-spacex-desch

Secure Your Privacy: Here’s Why You Should Use Signal

Source: https://www.wired.com/story/ditch-all-those-other-messaging-apps-heres-why-you-should-use-signal/

STOP ME IF you’ve heard this before. You text a friend to finalize plans, anxiously awaiting their reply, only to get a message from them on Snapchat to say your latest story was hilarious. So, you move the conversation over to Snapchat, decide to meet up at 10:30, but then you close the app and can’t remember if you agreed on meeting at Hannegan’s or that poppin’ new brewery downtown. You can’t go back and look at the message since Snapchat messages have a short shelf life, so you send a text, but your friend has already proven to be an unreliable texter. You’d be lucky if they got back to you by midnight.

All of this illustrates a plain truth. There are just too many messaging apps. As conversations can bounce between Snapchat, iMessage, Skype, Instagram, Twitter, and Hangouts/Allo or whatever Google’s latest attempt at messaging is, they’re rendered confusing and unsearchable. We could stick to SMS, but it’s pretty limited compared to other options, and it has some security holes. Rather than just chugging along with a dozen chat apps, letting your notifications pile up, it’s time to pick one messaging app and get all of your friends on board. That way, everyone can just pick up their phones and shoot a message to anyone without hesitation.

Here comes the easy part. There’s one messaging app we should all be using: Signal. It has strong encryption, it’s free, it works on every mobile platform, and the developers are committed to keeping it simple and fast by not mucking up the experience with ads, web-tracking, stickers, or animated poop emoji.

Tales From the Crypto

Signal looks and works a lot like other basic messaging apps, so it’s easy to get started. It’s especially convenient if you have friends and family overseas because, like iMessage and WhatsApp, Signal lets you sidestep expensive international SMS fees. It also supports voice and video calls, so you can cut out Skype and FaceTime. Sure, you don’t get fancy stickers or games like some of the competition, but you can still send pictures, videos, and documents. It’s available on iOS, Android, and desktop.

But plenty of apps have all that stuff. The thing that actually makes Signal superior is that it’s easy to ensure that the contents of every chat remain private and unable to be read by anyone else. As long as both parties are using the app to message each other, every single message sent with Signal is end-to-end encrypted. Also, the encryption Signal uses is available under an open-source license, so experts have had the chance to test and poke at the app to make sure it stays as secure as intended.
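End-to-end encryption of this kind rests on a key exchange: each party combines its own secret with the other’s public value, so both arrive at the same key without it ever crossing the wire. The sketch below is a deliberately toy version of the Diffie-Hellman idea—Signal’s real protocol (X3DH plus the Double Ratchet over Curve25519) is far more involved, and the prime, generator, and XOR “cipher” here are illustrative only, not safe for real use:

```python
import hashlib
import secrets

# Toy-sized public group parameters. Real deployments use elliptic
# curves (Signal uses Curve25519), not a 127-bit prime.
P = 2**127 - 1
G = 5

def keypair():
    secret = secrets.randbelow(P - 2) + 1
    public = pow(G, secret, P)
    return secret, public

def shared_key(my_secret, their_public):
    # Both sides compute G^(ab) mod P, then hash it down to 32 bytes.
    shared = pow(their_public, my_secret, P)
    return hashlib.sha256(str(shared).encode()).digest()

def xor_crypt(key, data):
    # Toy stream cipher: XOR against a SHA-256-derived keystream.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Each party publishes a public value and keeps its secret local...
a_secret, a_public = keypair()
b_secret, b_public = keypair()

# ...then independently derives the same key. An eavesdropper who sees
# only the public values cannot feasibly recover it.
key_a = shared_key(a_secret, b_public)
key_b = shared_key(b_secret, a_public)

ciphertext = xor_crypt(key_a, b"meet at 10:30")
print(xor_crypt(key_b, ciphertext))   # → b'meet at 10:30'
```

The open-source point in the paragraph above matters precisely because constructions like this are easy to get subtly wrong; publishing the code lets outside cryptographers check the real thing.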

If you’re super concerned about messages being read by the wrong eyes, Signal lets you force individual conversations to delete themselves after a designated amount of time. Signal’s security doesn’t stop at texts. All of your calls are encrypted, so nobody can listen in. Even if you have nothing to hide, it’s nice to know that your private life is kept, you know, private.

What About WhatsApp

Yes, this list of features sounds a lot like WhatsApp. It’s true, the Facebook-owned messaging app has over a billion users, offers most of the same features, and even employs Signal’s encryption to keep chats private. But WhatsApp raises a few concerns that Signal doesn’t. First, it’s owned by Facebook, a company whose primary interest is in collecting information about you to sell you ads. That alone may steer away those who feel Facebook already knows too much about us. Even though the content of your WhatsApp messages is encrypted, Facebook can still extract metadata from your habits, like who you’re talking to and how frequently.

Still, if you use WhatsApp, chances are you already know a lot of other people who are using it. Getting all of them to switch to Signal is highly unlikely. And you know, that’s OK—WhatsApp really is the next-best option to Signal. The encryption is just as strong, and while it isn’t as cleanly stripped of extraneous features as Signal, that massive user base makes it easy to reach almost anyone in your contact list.

Chat Heads

While we’re talking about Facebook, it’s worth noting that the company’s Messenger app isn’t the safest place to keep your conversations. Aside from all the clutter inside the app, the two biggest issues with Facebook Messenger are that you have to encrypt conversations individually by flipping on the “Secret Conversations” option (good luck remembering to do that), and that anyone with a Facebook profile can just search for your name and send you a message. (Yikes!) There are too many variables in the app, and a lot of the security is out of your hands. iMessage may seem like a solid remedy to all of these woes, but it’s tucked behind Apple’s walled iOS garden, so you’re bound to leave out your closest friends who use Android devices. And if you ever switch platforms, say bye-bye to your chat history.

Signal isn’t going to win a lot of fans among those who’ve grown used to the more novel features inside their chat apps. There are no stickers, and no animoji. Still, as privacy issues come to the fore in the minds of users, and as mobile messaging options proliferate, and as notifications pile up, everyone will be searching for a path to sanity. It’s easy to invite people to Signal. Once you’re using it, just tap the “invite” button inside the chat window, and your friend will be sent a link to download the app. Even stubborn people who only send texts can get into it—Signal can be set as your phone’s default SMS client, so the pain involved in the switch is minimal.

So let’s make a pact right now. Let’s all switch to Signal, keep our messages private, and finally put an end to the untenable multi-app shuffle that’s gone on far too long.

Alibaba and Amazon hit India

Source: https://www.economist.com/news/special-report/21730539-e-commerce-giants-are-trying-export-their-success-alibaba-and-amazon-look-go-global

IN SEPTEMBER 2014 Jeff Bezos announced his first big investment in India, hopping aboard a colourful bus in Bangalore. It was the start of a rapid $5bn investment in India, part of Mr Bezos’s plans to take Amazon global. Two months later Alibaba’s Jack Ma appeared in Delhi. “We will invest more in India,” he declared. The following year Alibaba put $500m into Paytm, an Indian digital-payments company. This year it led a fundraising round for Paytm’s e-commerce arm. The two giants seem set for an epic clash in India.

But in their home markets they have so far stayed out of each other’s way. Amazon has only a tiny business in China. Alibaba’s strategy in the United States has been to help American businesses sell in China and vice versa. “People always ask me, when will you go to the US?” says Alibaba’s CEO, Mr Zhang. “And I say, why the US? Amazon did a fantastic job.” The two firms have mostly invested in different foreign markets: Alibaba across South-East Asia and Amazon across Europe. But much of the rest of the world is still up for grabs.

The biggest tussles will probably be over growing economies and cross-border commerce. Alibaba aspires to serve 2bn customers around the world within 20 years—a benevolent empire that supports businesses. In some cases it has begun with digital payments, as in India with Paytm. In others it has invested in e-commerce sites, as with Lazada, in South-East Asia. But it intends to build a broad range of services within each market, including payments, e-commerce and travel services, and then link local platforms with Alibaba’s in China.

Mr Ma wants to enable small firms to operate just as nimbly as big ones on the global stage. Alibaba helps Chinese companies sell in places such as Brazil and Russia, and assists foreign firms with marketing, logistics and customs in China. Eventually it hopes to use its technology to link logistics networks around the world so that any product can reach any buyer anywhere within 72 hours. That is still a long way off, but it gives a glimpse of the company’s staggering ambition.

Amazon already earns more than one-third of its revenue from e-commerce outside North America. Germany is its second-biggest market, followed by Japan and Britain. This year it bought Souq, an e-commerce firm in the Middle East. Its criteria for expansion elsewhere include the size of the population and the economy and the density of internet use, says Russ Grandinetti, head of Amazon’s international business. India has been one of its main testing grounds.

Amazon, like Alibaba, also wants to help suppliers in any country to sell their products abroad. An Amazon shopper in Mexico, for instance, can buy goods from America. Mr Grandinetti sees such cross-border sales as an increasingly important component of Amazon’s value to consumers and sellers alike.

Yet both companies run the risk that strategies which did well in their home countries may not succeed elsewhere. In China, for instance, the popularity of e-commerce relied on a number of special factors. China’s manufacturers often found themselves with excess supplies of clothes and shoes; Alibaba provided a place to sell them. Alipay thrived because few consumers had credit cards. China has also benefited from having cheap labour and lots of big cities—more than 100 of them with over 1m people—creating a density of demand that made it worthwhile for logistics firms to build distribution networks.

As they expand, however, Amazon’s and Alibaba’s business models may shift and, in some markets, start to converge. So far the companies have differed in important ways. Amazon owns inventory and warehouses; Alibaba does not. But Alibaba has a broader reach than Amazon, particularly with Ant Financial’s giant payments business. As Amazon grows, it may become more like Alibaba. In India, for instance, regulations prevent it from owning inventory directly. And Amazon recently won a licence from the Reserve Bank of India for a digital wallet. Alibaba, for its part, may become more like Amazon. As the Chinese firm set its sights on South-East Asia, it invested in SingPost, Singapore’s state postal system. In September it became the majority owner in Cainiao, a Chinese logistics network, and said it plans to spend $15bn on logistics in the next five years.

Their advances may be slowed by other rivals. Smaller firms can flourish in niches. Flipkart, whose backers include Naspers and SoftBank, is competing fiercely with Amazon in India; the two companies routinely bicker over which has the bigger market share. Yoox Net-a-Porter, an online luxury-goods seller, is also expanding around the world.

Among the questions facing the two giants are whether other technology firms will pour more money into e-commerce, and what partnerships might emerge. Tencent’s WeChat Pay is already challenging Alipay in China. About one-third of WeChat’s users in China shop on that platform. Tencent is trying to recruit shops to accept its payment app in other countries, too, and recently took a stake in Flipkart. In deploying its services abroad, Tencent might get a helping hand from Naspers. The South African company owns about one-third of Tencent and has backed e-commerce firms around the world. Facebook is now muscling in on this business by making it easier for its users to buy goods through its messaging service as well as its other platforms, WhatsApp and Instagram.

The A-list still stands

For now, however, Amazon and Alibaba remain each other’s most formidable international rivals. Success in e-commerce requires scale, which needs lots of capital. Local e-commerce firms in India have come under pressure from investors to boost profitability. Amazon has no problems on that score. As Amit Agarwal, head of Amazon India, puts it: “We will invest whatever it takes to make sure we provide a great customer experience.”

Big firms also have a natural advantage as they expand, because technologies developed for one market can be introduced across many. “It’s like a Lego set,” says Lazada’s chief executive, Maximilian Bittner. He can use pieces of Alibaba’s model, such as algorithms for product recommendations, to improve Lazada’s operations. Amazon’s investments in machine learning have myriad applications anywhere in the world.

That does not mean that Amazon and Alibaba will dominate every country around the world, nor that they will crush every competitor. Bob Van Dijk, chief executive of Naspers, maintains there is room for many operators: “I don’t believe in absolute hegemony.” But given the two giants’ ambitions and the benefits of scale, they are bound to become more powerful and compete directly in more places. That has implications for all sorts of industries, but particularly the retail sector.