
What is GDPR – General Data Protection Regulation

Source: TechCrunch.com

European Union lawmakers proposed a comprehensive update to the bloc’s data protection and privacy rules in 2012.

Their aim: To take account of seismic shifts in the handling of information wrought by the rise of the digital economy in the years since the prior regime was penned — all the way back in 1995 when Yahoo was the cutting edge of online cool and cookies were still just tasty biscuits.

Here’s the EU’s executive body, the Commission, summing up the goal:

The objective of this new set of rules is to give citizens back control over their personal data, and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market which the Commission has prioritised. The reform will allow European citizens and businesses to fully benefit from the digital economy.

For an even shorter version: the EC’s theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

The General Data Protection Regulation (aka GDPR) was agreed after more than three years of negotiations between the EU’s various institutions.

It’s set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK’s new Data Protection Bill — yes, despite the fact the country is currently in the process of (br)exiting the EU, the government has nonetheless committed to implementing the regulation because it needs to keep EU-UK data flowing freely in the post-Brexit future). Which gives an early indication of the pulling power of GDPR.

Meanwhile businesses operating in the EU are being bombarded with ads from a freshly energized cottage industry of ‘privacy consultants’ offering to help them get ready for the new regs — in exchange for a service fee. It’s definitely a good time to be a law firm specializing in data protection.

GDPR is a significant piece of legislation whose full impact will clearly take some time to shake out. In the meanwhile, here’s our guide to the major changes incoming and some potential impacts.

Data protection + teeth

A major point of note right off the bat is that GDPR does not merely apply to EU businesses; any entities processing the personal data of EU citizens need to comply. Facebook, for example — a US company that handles massive amounts of Europeans’ personal data — is going to have to rework multiple business processes to comply with the new rules. Indeed, it’s been working on this for a long time already.

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support GDPR compliance — specifying this included “senior executives from all product teams, designers and user experience/testing executives, policy executives, legal executives and executives from each of the Facebook family of companies”.

“Dozens of people at Facebook Ireland are working full time on this effort,” it said, noting too that the data protection team at its European HQ (in Dublin, Ireland) would be growing by 250% in 2017. It also said it was in the process of hiring a “top quality data protection officer” — a position the company appears to still be taking applications for.

The new EU rules require organizations to appoint a data protection officer if they process sensitive data on a large scale (which Facebook very clearly does). Or are collecting info on many consumers — such as by performing online behavioral tracking. But, really, which online businesses aren’t doing that these days?

The extra-territorial scope of GDPR casts the European Union as a global pioneer in data protection — and some legal experts suggest the regulation will force privacy standards to rise outside the EU too.

Sure, some US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes, treating personal data obtained from different geographies differently rather than streamlining everything under a GDPR-compliant process. But doing so means managing multiple data regimes. And it at the very least runs the risk of bad PR if you’re outed as deliberately offering a lower privacy standard to your home users vs customers abroad.

Ultimately, it may be easier (and less risky) for businesses to treat GDPR as the new ‘gold standard’ for how they handle all personal data, regardless of where it comes from.

And while not every company harvests Facebook levels of personal data, almost every company harvests some personal data. So for those with customers in the EU, GDPR cannot be ignored. At the very least, businesses will need to carry out a data audit to understand their risks and liabilities.

Privacy experts suggest that the really big change here is around enforcement. Because while the EU has had long established data protection standards and rules — and treats privacy as a fundamental right — its regulators have lacked the teeth to command compliance.

But now, under GDPR, financial penalties for data protection violations step up massively.

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).
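As a back-of-the-envelope illustration, the two-tier cap can be expressed as a tiny calculation. This is a sketch only (the function name and inputs are ours), not legal advice:

```python
def gdpr_fine_cap(annual_turnover_eur: float, serious: bool) -> float:
    """Return the maximum fine under GDPR's two-tier penalty system.

    Serious infringements: up to 4% of global annual turnover or
    EUR 20M, whichever is greater. Lower tier: 2% or EUR 10M.
    """
    if serious:
        return max(0.04 * annual_turnover_eur, 20_000_000)
    return max(0.02 * annual_turnover_eur, 10_000_000)

# A company with EUR 1bn global turnover faces up to EUR 40M
# for a serious infringement; a smaller company hits the EUR 20M floor.
print(gdpr_fine_cap(1_000_000_000, serious=True))   # 40000000.0
print(gdpr_fine_cap(100_000_000, serious=True))     # 20000000
```

The point of the `max()` is that the fixed euro floor bites for smaller companies, while the percentage bites for giants.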

This really is a massive change. Because while data protection agencies (DPAs) in different EU Member States can impose financial penalties for breaches of existing data laws, these fines are relatively small — especially set against the revenues of the private sector entities that are getting sanctioned.

In the UK, for example, the Information Commissioner’s Office (ICO) can currently impose a maximum fine of just £500,000. Compare that to the annual revenue of tech giant Google (~$90BN) and you can see why a much larger stick is needed to police data processors.

It’s not necessarily the case that individual EU Member States are getting stronger privacy laws as a consequence of GDPR (in some instances countries have arguably had higher standards in their domestic law). But the beefing up of enforcement that’s baked into the new regime means there’s a better opportunity for DPAs to start to bark and bite like proper watchdogs.

GDPR inflating the financial risks around handling personal data should naturally drive up standards — because privacy laws are suddenly a whole lot more costly to ignore.

More types of personal data that are hot to handle

So what is personal data under GDPR? It’s any information relating to an identified or identifiable person (in regulatorspeak people are known as ‘data subjects’).

‘Processing’, meanwhile, can mean any operation performed on personal data — from storing it to structuring it to feeding it to your AI models. (GDPR also includes some provisions specifically related to decisions generated as a result of automated data processing, but more on that below.)

A new provision concerns children’s personal data — with the regulation setting a 16-year-old age limit on kids’ ability to consent to their data being processed. However individual Member States can choose (and some have) to derogate from this by writing a lower age limit into their laws.

That derogation bottoms out at 13 years old, though — making that the de facto minimum age for children to be able to sign up to digital services. So the impact on teens’ social media habits seems likely to be relatively limited.

The new rules generally expand the definition of personal data — so it can include information such as location data, online identifiers (such as IP addresses) and other metadata. So again, this means businesses really need to conduct an audit to identify all the types of personal data they hold. Ignorance is not compliance.

GDPR also encourages the use of pseudonymization — such as, for example, encrypting personal data and storing the encryption key separately and securely — as a pro-privacy, pro-security technique that can help minimize the risks of processing personal data. Although pseudonymized data is likely to still be considered personal data; certainly where a risk of reidentification remains. So it does not get a general pass from requirements under the regulation.
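One common pseudonymization technique, keyed hashing with the secret key held separately from the data, can be sketched in a few lines. This is illustrative only; in a real deployment the key would live in an access-controlled store such as a secrets manager or HSM, away from the pseudonymized records:

```python
import hashlib
import hmac
import secrets

# In practice this key is stored separately and securely; generating it
# inline here is purely for demonstration.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without the key, linking the pseudonym back to the person is hard;
    with it, the controller can re-link records when needed. That
    re-identification risk is exactly why pseudonymized data is still
    likely to count as personal data under GDPR.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("alice@example.com", PSEUDONYM_KEY), "country": "DE"}
# Same input + same key -> same pseudonym, so records remain linkable.
assert record["user"] == pseudonymize("alice@example.com", PSEUDONYM_KEY)
```

Note the design trade-off the article describes: the keyed hash keeps data useful (linkable) for the controller, which is precisely why it does not escape the regulation the way truly anonymous data would.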

Data has to be rendered truly anonymous to be outside the scope of the regulation. (And given how often ‘anonymized’ data-sets have been shown to be re-identifiable, relying on any anonymizing process to be robust enough to have zero risk of re-identification seems, well, risky.)

To be clear, given GDPR’s running emphasis on data protection via data security, it implicitly encourages the use of encryption above and beyond a risk-reduction technique — i.e. as a way for data controllers to fulfill the regulation’s wider requirement to use “appropriate technical and organisational measures” against the risks posed by the personal data they are processing.

The incoming data protection rules apply to both data controllers (i.e. entities that determine the purpose and means of processing personal data) and data processors (entities that are responsible for processing data on behalf of a data controller — aka subcontractors).

Indeed, data processors have some direct compliance obligations under GDPR, and can also be held equally responsible for data violations, with individuals able to bring compensation claims directly against them, and DPAs able to hand them fines or other sanctions.

So the intent of the regulation is that there be no diminution of responsibility down the chain of data-handling subcontractors. GDPR aims to make every link in the processing chain a robust one.

For companies that rely on a lot of subcontractors to handle data operations on their behalf there’s clearly a lot of risk assessment work to be done.

As noted above, there is a degree of leeway for EU Member States in how they implement some parts of the regulation (such as with the age of data consent for kids).

Consumer protection groups are calling for the UK government to include an optional GDPR provision on collective data redress in its DP bill, for example — a call the government has so far rebuffed.

But the wider aim is for the regulation to harmonize as much as possible data protection rules across all Member States to reduce the regulatory burden on digital businesses trading around the bloc.

On data redress, European privacy campaigner Max Schrems — most famous for his legal challenge to US government mass surveillance practices that resulted in a 15-year-old data transfer arrangement between the EU and US being struck down in 2015 — is currently running a crowdfunding campaign to set up a not-for-profit privacy enforcement organization to take advantage of the new rules and pursue strategic litigation on commercial privacy issues.

Schrems argues it’s simply not viable for individuals to take big tech giants to court to try to enforce their privacy rights, so thinks there’s a gap in the regulatory landscape for an expert organization to work on EU citizen’s behalf. Not just pursuing strategic litigation in the public interest but also promoting industry best practice.

The proposed data redress body — called noyb; short for: ‘none of your business’ — is being made possible because GDPR allows for collective enforcement of individuals’ data rights. And that provision could be crucial in spinning up a centre of enforcement gravity around the law. Because despite the position and role of DPAs being strengthened by GDPR, these bodies will still inevitably have limited resources vs the scope of the oversight task at hand.

Some may also lack the appetite to take on a fully fanged watchdog role. So campaigning consumer and privacy groups could certainly help pick up any slack.

Privacy by design and privacy by default

Another major change incoming via GDPR is ‘privacy by design’ no longer being just a nice idea; privacy by design and privacy by default become firm legal requirements.

This means there’s a requirement on data controllers to minimize processing of personal data — limiting activity to only what’s necessary for a specific purpose — as well as to carry out privacy impact assessments and maintain up-to-date records to demonstrate their compliance.

Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable. (And we’ve sure seen a whole lot of those hellish things in tech.) The core idea is that consent should be an ongoing, actively managed process; not a one-off rights grab.

As the UK’s ICO tells it, consent under GDPR for processing personal data means offering individuals “genuine choice and control” (for sensitive personal data the law requires a higher standard still — of explicit consent).

There are other legal bases for processing personal data under GDPR — such as contractual necessity; or compliance with a legal obligation under EU or Member State law; or for tasks carried out in the public interest — so it is not always necessary to obtain consent in order to process someone’s personal data. But there must always be an appropriate legal basis for each processing activity.

Transparency is another major obligation under GDPR, which expands the notion that personal data must be lawfully and fairly processed to include a third principle of accountability. Hence the emphasis on data controllers needing to clearly communicate with data subjects — such as by informing them of the specific purpose of the data processing.

The obligation on data handlers to maintain scrupulous records of what information they hold, what they are doing with it, and how they are legally processing it, is also about being able to demonstrate compliance with GDPR’s data processing principles.

But — on the plus side for data controllers — GDPR removes the requirement to submit notifications to local DPAs about data processing activities. Instead, organizations must maintain detailed internal records — which a supervisory authority can always ask to see.

It’s also worth noting that companies processing data across borders in the EU may face scrutiny from DPAs in different Member States if they have users there (and are processing their personal data).

Although the GDPR sets out a so-called ‘one-stop-shop’ principle — that there should be a “lead” DPA to co-ordinate supervision between any “concerned” DPAs — this does not mean that, once it applies, a cross-EU-border operator like Facebook is only going to be answerable to the concerns of the Irish DPA.

Indeed, Facebook’s tactic of only claiming to be under the jurisdiction of a single EU DPA looks to be on borrowed time. And the one-stop-shop provision in the GDPR seems more about creating a co-operation mechanism to allow multiple DPAs to work together in instances where they have joint concerns, rather than offering a way for multinationals to go ‘forum shopping’ — which the regulation does not permit (per WP29 guidance).

Another change: Privacy policies that contain vague phrases like ‘We may use your personal data to develop new services’ or ‘We may use your personal data for research purposes’ will not pass muster under the new regime. So a wholesale rewriting of vague and/or confusingly worded T&Cs is something Europeans can look forward to this year.

Add to that, any changes to privacy policies must be clearly communicated to the user on an ongoing basis. Which means no more stale references in the privacy statement telling users to ‘regularly check for changes or updates’ — that just won’t be workable.

The onus is firmly on the data controller to keep the data subject fully informed of what is being done with their information. (Which almost implies that good data protection practice could end up tasting a bit like spam, from a user PoV.)

The overall intent behind GDPR is to inculcate an industry-wide shift in perspective regarding who ‘owns’ user data — disabusing companies of the notion that other people’s personal information belongs to them just because it happens to be sitting on their servers.

“Organizations should acknowledge they don’t exist to process personal data but they process personal data to do business,” is how analyst Gartner research director Bart Willemsen sums this up. “Where there is a reason to process the data, there is no problem. Where the reason ends, the processing should, too.”

The data protection officer (DPO) role that GDPR brings in as a requirement for many data handlers is intended to help them ensure compliance.

This officer, who must report to the highest level of management, is intended to operate independently within the organization, with warnings to avoid an internal appointment that could generate a conflict of interests.

Which types of organizations face the greatest liability risks under GDPR? “Those who deliberately seem to think privacy protection rights is inferior to business interest,” says Willemsen, adding: “A recent example would be Uber, regulated by the FTC and sanctioned to undergo 20 years of auditing. That may hurt perhaps similar, or even more, than a one-time financial sanction.”

“Eventually, the GDPR is like a speed limit: There not to make money off of those who speed, but to prevent people from speeding excessively as that prevents (privacy) accidents from happening,” he adds.

Another right to be forgotten

Under GDPR, people who have consented to their personal data being processed also have a suite of associated rights — including the right to access data held about them (a copy of the data must be provided to them free of charge, typically within a month of a request); the right to request rectification of incomplete or inaccurate personal data; the right to have their data deleted (another so-called ‘right to be forgotten’ — with some exemptions, such as for exercising freedom of expression and freedom of information); the right to restrict processing; the right to data portability (where relevant, a data subject’s personal data must be provided free of charge and in a structured, commonly used and machine readable form).

All these rights make it essential for organizations that process personal data to have systems in place which enable them to identify, access, edit and delete individual user data — and be able to perform these operations quickly, with a general 30 day time-limit for responding to individual rights requests.
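As a rough illustration of what such a system needs to support, here is a minimal sketch. The class and method names are hypothetical, and a real implementation would also have to reach data held in backups, logs and by subcontractors:

```python
import copy
import json

class UserDataStore:
    """Hypothetical sketch of honoring GDPR data-subject requests."""

    def __init__(self):
        self._records = {}  # user_id -> dict of personal data

    def access(self, user_id):
        """Right of access: hand back a copy, free of charge."""
        return copy.deepcopy(self._records.get(user_id, {}))

    def rectify(self, user_id, field, value):
        """Right to rectification of incomplete or inaccurate data."""
        self._records.setdefault(user_id, {})[field] = value

    def erase(self, user_id):
        """Right to erasure (the 'right to be forgotten')."""
        self._records.pop(user_id, None)

    def export(self, user_id):
        """Right to portability: structured, machine-readable form."""
        return json.dumps(self._records.get(user_id, {}))

store = UserDataStore()
store.rectify("u1", "email", "alice@example.com")
assert store.access("u1") == {"email": "alice@example.com"}
store.erase("u1")
assert store.access("u1") == {}
```

Each of these operations maps to one of the rights listed above, and each would need to complete well within the general 30-day response window.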

GDPR also gives people who have consented to their data being processed the right to withdraw consent at any time. Let that one sink in.

Data controllers are also required to inform users about this right — and offer easy ways for them to withdraw consent. So no, you can’t bury a ‘revoke consent’ option in tiny lettering, five sub-menus deep. Nor can WhatsApp offer any more time-limited opt-outs for sharing user data with its parent multinational, Facebook. Users will have the right to change their mind whenever they like.

The EU lawmakers’ hope is that this suite of rights for consenting consumers will encourage respectful use of their data — given that, well, if you annoy consumers they can just tell you to sling yer hook and ask for a copy of their data to plug into your rival service to boot. So we’re back to that fostering trust idea.

Add in the ability for third party organizations to use GDPR’s provision for collective enforcement of individual data rights and there’s potential for bad actors and bad practice to become the target for some creative PR stunts that harness the power of collective action — like, say, a sudden flood of requests for a company to delete user data.

Data rights and privacy issues are certainly going to be in the news a whole lot more.

Getting serious about data breaches

But wait, there’s more! Another major change under GDPR relates to security incidents — aka data breaches (something else we’ve seen an awful, awful lot of in recent years) — with the regulation doing what the US still hasn’t been able to: Bringing in a universal standard for data breach disclosures.

GDPR requires that data controllers report any security incidents where personal data has been lost, stolen or otherwise accessed by unauthorized third parties to their DPA within 72 hours of becoming aware of it. Yes, 72 hours. Not the best part of a year, like, er, Uber.

If a data breach is likely to result in a “high risk of adversely affecting individuals’ rights and freedoms” the regulation also implies you should ‘fess up even sooner than that — without “undue delay”.

Only in instances where a data controller assesses that a breach is unlikely to result in a risk to the rights and freedoms of “natural persons” are they exempt from the breach disclosure requirement (though they still need to document the incident internally, and record their reason for not informing a DPA in a document that DPAs can always ask to see).
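The disclosure logic described above can be sketched as a simple decision helper. It is illustrative only: the function and its boolean inputs are a crude simplification of what is, in practice, a legal risk assessment:

```python
from datetime import datetime, timedelta, timezone

def breach_response(detected_at, risk_to_individuals: bool, high_risk: bool):
    """Sketch of the GDPR breach-disclosure decision described above.

    Returns (notify_dpa, notify_individuals, dpa_deadline). Hypothetical
    helper for illustration; not a substitute for legal judgment.
    """
    if not risk_to_individuals:
        # No notification required, but the incident must still be
        # documented internally, with reasons, for the DPA to inspect.
        return (False, False, None)
    # Risk to individuals: the DPA must be told within 72 hours.
    # High risk additionally means informing the affected individuals,
    # without undue delay.
    deadline = detected_at + timedelta(hours=72)
    return (True, high_risk, deadline)

detected = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
notify_dpa, notify_users, deadline = breach_response(detected, True, False)
assert notify_dpa and not notify_users
assert deadline - detected == timedelta(hours=72)
```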

“You should ensure you have robust breach detection, investigation and internal reporting procedures in place,” is the ICO’s guidance on this. “This will facilitate decision-making about whether or not you need to notify the relevant supervisory authority and the affected individuals.”

The new rules generally put strong emphasis on data security and on the need for data controllers to ensure that personal data is only processed in a manner that ensures it is safeguarded.

Here again, GDPR’s requirements are backed up by the risk of supersized fines. So suddenly sloppy security could cost your business big — not only in reputation terms, as now, but on the bottom line too. So it really must be a C-suite concern going forward.

Nor is subcontracting a way to shirk your data security obligations. Quite the opposite. Having a written contract in place between a data controller and a data processor was a requirement before GDPR but contract requirements are wider now and there are some specific terms that must be included in the contract, as a minimum.

Breach reporting requirements must also be set out in the contract between processor and controller. If a data controller is using a data processor and it’s the processor that suffers a breach, they’re required to inform the controller as soon as they become aware. The controller then has the same disclosure obligations as per usual.

Essentially, data controllers remain liable for their own compliance with GDPR. And the ICO warns they must only appoint processors who can provide “sufficient guarantees” that the regulatory requirements will be met and the rights of data subjects protected.

tl;dr: be careful about who you subcontract to, and how.

Right to human review for some AI decisions

Article 22 of GDPR places certain restrictions on entirely automated decisions based on profiling individuals — but only in instances where these human-less acts have a legal or similarly significant effect on the people involved.

There are also some exemptions to the restrictions — where automated processing is necessary for entering into (or performance of) a contract between an organization and the individual; or where it’s authorized by law (e.g. for the purposes of detecting fraud or tax evasion); or where an individual has explicitly consented to the processing.

In its guidance, the ICO specifies that the restriction only applies where the decision has a “serious negative impact on an individual”.

Suggested examples of the types of AI-only decisions that will face restrictions: automatic refusal of an online credit application, or e-recruiting practices without human intervention.

The provision on automated decisions is not a new right, having been brought over from the 1995 data protection directive. But it has attracted fresh attention — given the rampant rise of machine learning technology — as a potential route for GDPR to place a check on the power of AI blackboxes to determine the trajectory of humankind.

The real-world impact will probably be rather more prosaic, though. And experts suggest it does not seem likely that the regulation, as drafted, equates to a right for people to be given detailed explanations of how algorithms work.

Though as AI proliferates and touches more and more decisions, and as its impacts on people and society become ever more evident, pressure may well grow for proper regulatory oversight of algorithmic blackboxes.

In the meanwhile, what GDPR does in instances where restrictions apply to automated decisions is require data controllers to provide some information to individuals about the logic of an automated decision.

They are also obliged to take steps to prevent errors, bias and discrimination. So there’s a whiff of algorithmic accountability. Though it may well take court and regulatory judgements to determine how stiff those steps need to be in practice.

Individuals do also have a right to challenge and request a (human) review of an automated decision in the restricted class.

Here again the intention is to help people understand how their data is being used. And to offer a degree of protection (in the form of a manual review) if a person feels unfairly and harmfully judged by an AI process.

The regulation also places some restrictions on the practice of using data to profile individuals if the data itself is sensitive data — e.g. health data, political beliefs, religious affiliation etc — requiring explicit consent for doing so. Or else that the processing is necessary for substantial public interest reasons (and has a basis in EU or Member State law).

While profiling based on other types of personal data does not require obtaining consent from the individuals concerned, it still needs a legal basis and there is still a transparency requirement — which means service providers will need to inform users they are being profiled, and explain what it means for them.

And people also always have the right to object to profiling activity based on their personal data.

 

Source: https://techcrunch.com/2018/01/20/wtf-is-gdpr/


Two giants of AI team up to prevent the robot apocalypse

There’s nothing new about worrying that superintelligent machines may endanger humanity, but the idea has lately become hard to avoid.

A spurt of progress in artificial intelligence as well as comments by figures such as Bill Gates—who declared himself “in the camp that is concerned about superintelligence”—have given new traction to nightmare scenarios featuring supersmart software. Now two leading centers in the current AI boom are trying to bring discussion about the dangers of smart machines down to Earth. Google’s DeepMind, the unit behind the company’s artificial Go champion, and OpenAI, the nonprofit lab funded in part by Tesla’s Elon Musk, have teamed up to make practical progress on a problem they argue has attracted too many headlines and too few practical ideas: How do you make smart software that doesn’t go rogue?

“If you’re worried about bad things happening, the best thing we can do is study the relatively mundane things that go wrong in AI systems today,” says Dario Amodei, a curly-haired researcher on OpenAI’s small team working on AI safety. “That seems less scary and a lot saner than kind of saying, ‘You know, there’s this problem that we might have in 50 years.’” OpenAI and DeepMind contributed to a position paper last summer calling for more concrete work on near-term safety challenges in AI.

A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it’s possible to do practical work right now on making machine learning systems less able to produce nasty surprises. (The project could be seen as Musk’s money going roughly where his mouth has already been; in a 2014 appearance at MIT, he described work on AI as “summoning the demon.”)

None of DeepMind’s researchers were available to comment, but spokesperson Jonathan Fildes wrote in an email that the company hopes the continuing collaboration will inspire others to work on making machine learning less likely to misbehave. “In the area of AI safety, we need to establish best practices that are adopted across as many organizations as possible,” he wrote.

The first problem OpenAI and DeepMind took on is that software powered by so-called reinforcement learning doesn’t always do what its masters want it to do—and sometimes kind of cheats. The technique, which is hot in AI right now, has software figure out a task by experimenting with different actions and sticking with those that maximize a virtual reward or score, meted out by a piece of code that works like a mathematical motivator. It was instrumental to the victory of DeepMind’s AlphaGo over human champions at the board game Go, and is showing promise in making robots better at manipulating objects.

But crafting the mathematical motivator, or reward function, such that the system will do the right thing is not easy. For complex tasks with many steps, it’s mind-bogglingly difficult—imagine trying to mathematically define a scoring system for tidying up your bedroom—and even for seemingly simple ones results can be surprising. When OpenAI set a reinforcement learning agent to play boat racing game CoastRunners, for example, it surprised its creators by figuring out a way to score points by driving in circles rather than completing the course.

DeepMind and OpenAI’s solution is to have reinforcement learning software take feedback from human trainers instead, and use their input to define its virtual reward system. They hired contractors to give feedback to AI agents via an interface that repeatedly asks which of two short video clips of the AI agent at work is closest to the desired behavior.
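The feedback scheme the two labs describe (repeatedly asking a human which of two clips is better, then fitting a reward function to those comparisons) can be sketched with a pairwise-preference update in the Bradley-Terry style. Everything below is illustrative: the linear reward, the two features and the simulated trainer are our assumptions, not the actual system:

```python
import math
import random

random.seed(0)
weights = [0.0, 0.0]  # learned reward parameters

def reward(features, w):
    """Hypothetical linear reward over hand-made clip features."""
    return sum(f * wi for f, wi in zip(features, w))

def update(w, preferred, rejected, lr=0.1):
    """Bradley-Terry step: push P(preferred beats rejected) toward 1.

    P = sigmoid(reward(preferred) - reward(rejected)); the gradient of
    log P with respect to the reward gap is (1 - P).
    """
    p = 1.0 / (1.0 + math.exp(reward(rejected, w) - reward(preferred, w)))
    grad = 1.0 - p
    return [wi + lr * grad * (fp - fr)
            for wi, fp, fr in zip(w, preferred, rejected)]

# Simulated trainer: always prefers the clip with the higher first
# feature (say, 'how cleanly the Hopper lands').
for _ in range(500):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    preferred, rejected = (a, b) if a[0] > b[0] else (b, a)
    weights = update(weights, preferred, rejected)

# The learned reward now ranks clips the way the trainer does.
assert reward([0.9, 0.5], weights) > reward([0.1, 0.5], weights)
```

The design point this illustrates is the one Amodei makes: the reward function is inferred from human comparisons rather than hand-coded, so "what's a good backflip" can stay an aesthetic human judgment.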

This simple simulated robot, called a Hopper, learned to do a backflip after receiving 900 of those virtual thumbs-up verdicts from the AI trainers while it tried different movements. With thousands of bits of feedback, a version of the system learned to play Atari games such as Pong and got to be better than a human player at the driving game Enduro. Right now this approach requires too much human supervision to be very practical at eliciting complex tasks, but Amodei says results already hint at how this could be a powerful way to make AI systems more aligned with what humans want of them.

It took less than an hour of humans giving feedback to get Hopper to land that backflip, compared to the two hours it took an OpenAI researcher to craft a reward function that ultimately produced a much less elegant flip. “It looks super awkward and kind of twitchy,” says Amodei. “The backflip we trained from human feedback is better because what’s a good backflip is kind of an aesthetic human judgment.” You can see how complex tasks such as cleaning your home might also be easier to specify correctly with a dash of human feedback than with code alone.

 

Making AI systems that can soak up goals and motivations from humans has emerged as a major theme in the expanding project of making machines that are both safe and smart. For example, researchers affiliated with UC Berkeley’s Center for Human-Compatible AI are experimenting with getting robots such as autonomous cars or home assistants to take advice or physical guidance from people. “Objectives shouldn’t be a thing you just write down for a robot; they should actually come from people in a collaborative process,” says Anca Dragan, coleader of the center.

She hopes the idea can catch on in the industry beyond DeepMind and OpenAI’s explorations, and says companies already run into problems that might be prevented by infusing some human judgment into AI systems. In 2015, Google hurriedly tweaked its photo recognition service after it tagged photos of black people as gorillas.

Longer term, Amodei says, spending the next few years working on making existing, modestly smart machine learning systems more aligned with human goals could also lay the groundwork for our potential future face-off with superintelligence. “When, someday, we do face very powerful AI systems, we can really be experts in how to make them interact with humans,” he says. If it happens, perhaps the first superintelligent machine to open its electronic eyes will gaze at us with empathy.

Original Source from: https://www.wired.com/story/two-giants-of-ai-team-up-to-head-off-the-robot-apocalypse/

The circle of those who even remotely come into consideration as Chief Disruption Officer has a radius of “zero”

I am an “eierlegende Wollmilchsau” (the proverbial egg-laying, wool-bearing, milk-giving pig that can do it all) – and your company’s new Chief Disruption Officer!

Eierlegende Wollmilchsau

Fotolia #83825279 | Creator: jokatoons

Challenge: clarifying the mandate

At large corporations, a new CDO is often expected to “move the tanker and turn it into speedboats” – after all, everywhere you hear and read about startups, agility, dynamism, disruption, and constant change. This raises the question (typically directed at HR): who writes the job profile for a job that has never existed before, and whose goals are so fascinatingly diverse, even contradictory? After all, everyone will have their own idea of what the future CDO should “finally” tackle – just ask colleagues from different functions!

In the following list I have collected some of them (warning: buzzword bingo):

Typical expectations placed on a CDO:

  • Find and develop new business model(s) – and please secure a return on investment in the first year
  • Change manager (disruption, innovation…) who leads the entire organization into the new world of work
  • New sales and funding channels – from crowdfunding to crowdstorming, crowdworking, and social marketing
  • Digital mindset / organizational development – lasting change of the corporate culture
  • Board coaching / trainer for the other executives
  • Smart factory – the intelligent plant: digitalized, automated, and networked production environments with new agile tools down to lot size 1 (while the focus steadily shifts toward service orientation, i.e. “non-production”)
  • Big data / analytics / predictive – everything you can do with data, its analysis, and its predictive power
  • Lawyer – Work 4.0, collaboration with external partners, compliance… see “illegal” below
  • New IT framework – introduce modern software architectures, tools, and apps
  • Digital role model / ambassador – become visible as a champion of a new working style and leadership culture, ideally with promotional impact externally as well
  • Digital processes / digital efficiency – completely overhaul the organization’s systemic engine
  • External social media – from employer attractiveness to recruiting (of digital professionals, naturally) to greater reach through viral marketing
  • Internal communication and collaboration (enterprise social networking)… – lead the entire workforce, including factory workers, into Work 4.0: mobile, networked, independent of time and place, and scalable

This list of expectations is certainly anything but complete, but it should show that it is not easy to define the profile for this position in a way that gives its holder any chance of making an impact. After all, beyond the functional tasks, the new CDO must also get to know the existing culture, politics, old-boy networks, and so on – and then change them sustainably.

Challenge: where to find this CDO, this egg-laying wool-milk-sow?

As one headhunter once put it so nicely:

“the circle of those who even remotely come into consideration as CDO has a radius of ‘zero’”

There is no formal training for the CDO role, and typical career paths mostly produce “system-stabilizing” candidates – who would hand a “young wild one” responsibility for a whole corporation? The number of people who are successful in similar roles is extremely small, imitation is difficult, and their approaches are often not easily transferable… The big consulting giants are no help here either, since their maturity in this field is similarly virginal (there are no blueprints to pull out of a drawer, no proof, and hardly any studies that would serve as a how-to guide).

So compromises are sought, which can look like this, for example:

  • We take someone who is or was already a board member… There you will hardly find digital natives (by which I mean not primarily age, but rather an attitude toward new, disruptive developments that are not yet generally recognized as successful, lasting, and important/formative), and for career reasons hardly anyone who deals boldly with transparency, participation, and agile methods
  • We take someone who knows IT… Probably one of the most common mistakes: confusing digital transformation with IT. A good part (roughly 20 percent) does involve software, tools, and IT know-how, but the bulk is about completely different (often very IT-remote) topics – above all about leadership! See the list above
  • We take someone who has already made a startup successful… That leads to great disappointment on both sides: freedom, security, directives, constraints, size, internationality… assimilation guaranteed
  • We take someone who wants to make a career and shows great potential… Those who want to make a career usually play quite strictly by the rules. Who dares to question “everything” in a system they want to rise in? A willingness to take risks and permission to make mistakes are not the usual drivers of a successful career
  • We look for someone external – sure, a new broom sweeps clean… but what about the very long ramp-up time that comes with it? Can a carmaker, for example, afford in its current situation to start from zero with someone in terms of internal knowledge, networks (or rather entanglements), politics, and culture?

Finding the “ready-made” CDO is therefore likely to be a difficult undertaking. In my view, one solution would be to start with the current priority and try to develop the missing qualities internally (ideally in parallel with all the others). Alongside culture and leadership, “new, constant learning” at all levels is certainly highly relevant.

Source: https://www.linkedin.com/pulse/der-cdo-wirds-schon-richten-harald-schirmer

What mobile carriers should do next: Become banks


If banking is something you do on an app, why shouldn’t your mobile carrier actually be your bank? It’s more than just an idea. Orange, Telenor, and O2 are all building their own operations.

In the UK alone, people use mobile banking apps more than 7,610 times a minute, or 4 billion times a year.

According to the “Way We Bank Now” report by the British Banking Association, they downloaded more than 13.8 million banking apps in 2015, up 25 percent from 2014.

All over the world people are switching away from branch-based banking, and even desktop Internet banking, to manage their financial lives through an app.

Why wouldn’t they? There’s no need to go anywhere. The user interface is typically better than it is on a PC. And the addition of biometrics (typically fingerprint) makes signing in so much easier and safer than passwords.

Of course, banking apps are made by banks. The carriers just provide the data packages that allow people to use them.

But in the last year, a small number of European carriers have come to a radical conclusion: Let’s do more than just enable mobile banking apps; let’s build our own.

Orange has made headlines recently for just this reason. Earlier this year, it moved to acquire Groupama Banque, enabling it to leverage its banking license and benefit from its existing client network, thereby creating its own banking operation. Now, authorities in France and Europe have approved the deal.

Groupama Banque is currently owned by insurance firm Groupama. When the deal is completed, Orange will own 65 percent of it. Thus, the telco will be able to launch Orange Bank in France in January 2017, with Spain and Belgium to follow.

Actually, Orange already has some experience in the area. In October 2014, it launched Orange Finanse as a joint venture between mBank and Orange Polska. It’s not alone. O2 Germany launched a bank with Fidor in July, while Telenor is two years into its Banka Serbia launch.

Other operators are experimenting. Telefonica Spain announced a joint venture with CaixaBank and Santander, while in the US, T-Mobile launched a Visa card with banking features linked to a smartphone app (though it is now being wound down).

Needless to say, financial services are nothing new for mobile operators. In developing markets, they have launched text-based mobile money systems that have transformed the lives of millions. Vodafone’s M-Pesa has 25 million customers and 261,000 agents in 11 countries.

Meanwhile Orange has its own Orange Money service, which launched in Ivory Coast in 2008 and has 18 million customers in 14 countries across Africa.

In mature markets, the emphasis has been on NFC payments. The typical model was a contactless wallet app, with account credentials stored in the secure element of a SIM card. There were numerous launches — Softcard (US), Valyou (Norway), Buyster (France), SixPack (Denmark), and so on. Most have closed.

So why would operators switch focus to banking? The simple reason is that they believe they can build new and intuitive products. Why? Because they are mobile-first.

The theory goes that banks have a tendency to approach new mobile services by layering them on top of legacy IT systems. By contrast, operators should have the know-how to build much better mobile experiences that are consumer centric.

So O2 Banking customers can, for example, sign up via a video chat session with an agent. They can have a current account with a free MasterCard inside five minutes. They can also earn rewards of mobile data rather than pennies of interest.

Telenor Banka in Serbia launched in September 2014. It carefully targeted “premium” tech-savvy customers and cultivated them as brand ambassadors and to quickly spread the word on social media. By summer 2016, the bank had 180,000 customers (the biggest traditional bank in the country has 500,000 mobile users).

The Telenor Banka app was built around specific “pain points” such as currency transfer. In Serbia, people like to transfer their dinars for euros. Typically, they queue to do so with an agent, then queue again at the bank to deposit the cash back into their accounts. Telenor Banka lets them do the same in two clicks inside the app.

Users can also activate and deactivate their cards from inside the app. This helps people combat online fraud as they can “turn off” their cards apart from when they are actually making a payment.

All these launches are indicative of a dynamic moment in banking. Technology is making it easier for digital-only challenger banks (including mobile operators) to launch rival products. Regulation is helping too. The EU Payment Services Directive 2, coming into force in 2018, mandates that banks must open up APIs so that third parties (with user permission) can have access to account information.
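PSD2 itself only defines the legal obligation to open up; the concrete REST interfaces come from the banks, commonly following the Berlin Group or Open Banking UK specifications. Purely as an illustration, with an invented base URL, endpoint, and consent token (none of these are real), a third party's account-information request could be shaped like this:

```python
import urllib.request

# Hypothetical PSD2-style account-information request. The endpoint path,
# header names, and token below are illustrative only; real banks publish
# their own API specifications.
BASE = "https://bank.example/psd2/v1"   # invented base URL

req = urllib.request.Request(
    f"{BASE}/accounts",
    headers={
        "Authorization": "Bearer user-consent-token",  # user-granted consent
        "X-Request-ID": "0f1e2d3c",                    # per-request trace id
    },
)

# With the user's consent token, a licensed third party could now fetch the
# account list; here we only construct the request rather than send it.
print(req.full_url)
```

A real integration typically also involves the third party's regulatory certificate and the bank's consent flow; the point here is only that, once the user consents, account data becomes reachable through a documented API rather than being locked inside one bank's app.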

In its Essentials 2020 review, Orange set a target of making €400 million ($435 million) from financial services by 2018. This compares to overall group revenues at Orange of €10.3 billion ($11.2 billion) in the third quarter of 2015 alone.

This is ushering in the idea of “banking as a marketplace,” which operators are keen to leverage. Here, banking apps offer account services but also act as a mini mall in which users can “shop” for foreign exchange, insurance, loans, and so on from specialists.

For telcos, it’s an opportunity to experiment with new customer-centric business models while deepening customer relationships and reducing churn. For banks and other key players in financial services, it’s a call to action to leverage their own assets in a way that creates value for the discerning mobile consumer.


Web 3.0: A decentralized web would give power back to the people online

Recently, Google launched a video calling tool (yes, another one). Google Hangouts has been sidelined to Enterprise, and Google Duo is supposed to be the next big thing in video calling.

So now we have Skype from Microsoft, Facetime from Apple, and Google with Duo. Each big company has its own equivalent service, each stuck in its own bubble. These services may be great, but they aren’t exactly what we imagined during the dream years when the internet was being built.

The original purpose of the web and internet, if you recall, was to build a common neutral network which everyone can participate in equally for the betterment of humanity. Fortunately, there is an emerging movement to bring the web back to this vision and it even involves some of the key figures from the birth of the web. It’s called the Decentralised Web or Web 3.0, and it describes an emerging trend to build services on the internet which do not depend on any single “central” organisation to function.

So what happened to the initial dream of the web? Much of the altruism faded during the first dot-com bubble, as people realised that an easy way to create value on top of this neutral fabric was to build centralised services which gather, trap and monetise information.

Search Engines (e.g. Google), Social Networks (e.g. Facebook), Chat Apps (e.g. WhatsApp) have grown huge by providing centralised services on the internet. For example, Facebook’s future vision of the internet is to provide access only to the subset of centralised services it endorses (Internet.org and Free Basics).

Meanwhile, it disables fundamental internet freedoms such as the ability to link to content via a URL (forcing you to share content only within Facebook) or the ability for search engines to index its contents (other than the Facebook search function).


The Decentralised Web envisions a future world where services such as communication, currency, publishing, social networking, search, archiving etc are provided not by centralised services owned by single organisations, but by technologies which are powered by the people: their own community. Their users.

The core idea of decentralisation is that the operation of a service is not blindly trusted to any single omnipotent company. Instead, responsibility for the service is shared: perhaps by running across multiple federated servers, or perhaps running across client side apps in an entirely “distributed” peer-to-peer model.

Even though the community may be “byzantine” and not have any reason to trust or depend on each other, the rules that describe the decentralised service’s behaviour are designed to force participants to act fairly in order to participate at all, relying heavily on cryptographic techniques such as Merkle trees and digital signatures to allow participants to hold each other accountable.
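To make the Merkle-tree idea concrete, here is a minimal, self-contained sketch (not taken from any particular project) of how peers who don't trust each other can still verify that a single data block belongs to a shared dataset, using only the dataset's root hash and a short proof:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash a list of data blocks pairwise up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf and its proof; reject any tampering."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

blocks = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(blocks)        # the only thing peers must agree on
proof = merkle_proof(blocks, 2)   # proof that b"tx3" is in the set
assert verify(b"tx3", proof, root)             # honest data checks out
assert not verify(b"tx3-forged", proof, root)  # tampering is detected
```

Because any change to any block changes the root, participants only need to share and sign that one small hash to hold each other accountable for the entire dataset; digital signatures over the root then tie it to a specific participant.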

There are three fundamental areas that the Decentralised Web necessarily champions: privacy, data portability and security.

  • Privacy: Decentralisation forces an increased focus on data privacy. Data is distributed across the network and end-to-end encryption technologies are critical for ensuring that only authorized users can read and write. Access to the data itself is entirely controlled algorithmically by the network as opposed to more centralized networks where typically the owner of that network has full access to data, facilitating customer profiling and ad targeting.
  • Data Portability: In a decentralized environment, users own their data and choose with whom they share this data. Moreover they retain control of it when they leave a given service provider (assuming the service even has the concept of service providers). This is important. If I want to move from General Motors to BMW today, why should I not be able to take my driving records with me? The same applies to chat platform history or health records.
  • Security: Finally, we live in a world of increased security threats. In a centralized environment, the bigger the silo, the bigger the honeypot is to attract bad actors. Decentralized environments are safer by their general nature against being hacked, infiltrated, acquired, bankrupted or otherwise compromised as they have been built to exist under public scrutiny from the outset.

 

Just as the internet itself triggered a grand re-levelling, taking many disparate unconnected local area networks and providing a new neutral common ground that linked them all, now we see the same pattern happening again as technology emerges to provide a new neutral common ground for higher level services. And much like Web 2.0, the first wave of this Web 3.0 invasion has walked among us for several years already.

Git is wildly successful as an entirely decentralised version control system – almost entirely replacing centralised systems such as Subversion. Bitcoin famously demonstrates how a currency can exist without any central authority, contrasting with a centralised incumbent such as Paypal. Diaspora aims to provide a decentralised alternative to Facebook. Freenet paved the way for decentralised websites, email and file sharing.

Less famously, StatusNet (now called GNU Social) provides a decentralised alternative to Twitter. XMPP was built to provide a decentralised alternative to the messaging silos of AOL Instant Messenger, ICQ, MSN, and others.

Telephone switchboard operators circa 1914. Photo courtesy Flickr and reynermedia.


However, these technologies have always sat on the fringe — favourites for the geeks who dreamt them up and are willing to forgive their mass market shortcomings, but frustratingly far from being mainstream. The tide is turning. The public zeitgeist is finally catching up with the realisation that being entirely dependent on massive siloed community platforms is not entirely in the users’ best interests.

Critically, there is a new generation of Decentralised Startups that has caught the attention of the mainstream industry, ushering in the new age for real.

Blockstack and Ethereum show how Blockchain can be so much more than just a cryptocurrency, acting as a general purpose set of building blocks for building decentralised systems that need strong consensus. IPFS and the Dat Project provide entirely decentralised data fabrics, where ownership and responsibility for data is shared by all those accessing it rather than ever being hosted in a single location.

The real step change in the current momentum came in June at the Decentralised Web Summit organised by the Internet Archive. The event brought together many of the original “fathers of the internet and World Wide Web” to discuss ways to “Lock the web open” and reinvent a web “that is more reliable, private, and fun.”

Brewster Kahle, the founder of the Internet Archive, saw first hand the acceleration in decentralisation technologies whilst considering how to migrate the centralised Internet Archive to instead be decentralised: operated and hosted by the community who uses it rather than being a fragile and vulnerable single service.

Additionally, the enthusiastic presence of Tim Berners-Lee, Vint Cerf, Brewster himself and many others of the old school of the internet at the summit showed that for the first time the shift to decentralisation had caught the attention and indeed endorsement of the establishment.

Tim Berners-Lee said:

The web was designed to be decentralised so that everybody could participate by having their own domain and having their own webserver and this hasn’t worked out. Instead, we’ve got the situation where individual personal data has been locked up in these silos. […] The proposal is, then, to bring back the idea of a decentralised web.

To bring back power to people. We are thinking we are going to make a social revolution by just tweaking: we’re going to use web technology, but we’re going to use it in such a way that we separate the apps that you use from the data that you use.

We now see the challenge is to mature these new technologies and bring them fully to the mass market. Commercially there is huge value to be had in decentralisation: whilst the current silos may be washed away, new ones will always appear on top of the new common ground, just as happened with the original Web.

GitHub is the poster child for this: a $2 billion company built entirely as a value-added service on top of the decentralised technology of Git — despite users being able to trivially take their data and leave at any point.

Similarly, we expect to see the new wave of companies providing decentralised infrastructure and commercially viable services on top, as new opportunities emerge in this brave new world.

Ultimately, it’s hard to predict what final direction Web 3.0 will take us, and that’s precisely the point. By unlocking the web from the hands of a few players this will inevitably enable a surge in innovation and let services flourish which prioritise the user’s interests.

Apple, Google, Microsoft, and others have their own interests at heart (as they should), but that means that the user can often be viewed purely as a source of revenue, quite literally at the users’ expense.

As the Decentralised Web attracts the interest and passion of the mainstream developer community, there is no telling what new economies will emerge and what kinds of new technologies and services they will invent. The one certainty is they will intrinsically support their communities and user bases just as much as the interests of their creators.


Google Hits a Samsung Roadblock With New AI Assistant – Viv & Adam Cheyer

Google just debuted a digital assistant, which it hopes to place inside smartphones, watches, cars and every other imaginable internet-connected device. It’s already hit a snag.

The Alphabet division launched new smartphones last week with the artificially intelligent assistant deeply embedded. It also rolled out a speaker with the feature at its core and announced plans to let other companies tie their apps and services to the assistant.

A day later, Samsung, which just announced it was ending production of its problematic Galaxy Note 7 smartphones, said it was acquiring Viv Labs, a startup building its own AI voice-based assistant.

At first, the deal looked like a counter-punch to Samsung rival Apple — Viv is run by the creators of Apple’s Siri assistant. But buying Viv may be more of a problem for Google, because Samsung is the biggest maker of phones running Google’s Android mobile operating system.

Google’s strategy is now centered on the assistant, rather than its search engine, because it’s a more natural way for people to interact with smartphones and other connected devices. Getting all Android phone makers to put the Google assistant on their devices would get the technology into millions of hands quickly. But Samsung’s Viv deal suggests assistants are too important for phone makers to let other companies supply this feature.

Last week, despite the Note 7 crisis, Samsung executive Injong Rhee said the company plans to put Viv’s technology in its smartphones next year and then embed it into other electronics and home appliances. A Samsung representative and a Google spokeswoman declined to comment.

That’s a necessity for Samsung, according to some analysts and industry insiders.

“As AI is becoming more sophisticated and valuable to the consumer, there’s no question it will be important for hardware companies,” said Kirt McMaster, executive chairman of Cyanogen, a startup that makes Android software. Mr. McMaster, a frequent Google critic, said other Android handset makers will likely follow Samsung’s move.

“If you don’t have an AI asset, you’re not going to have a brain,” he added.

Google may already have known that some Android phone makers — known as original equipment manufacturers, or OEMs — were reluctant to embrace its assistant.

“Other OEMs may want to differentiate,” Google’s Android chief Hiroshi Lockheimer told Bloomberg before it released its own smartphones. “They may want to do their own thing — their own assistant, for example.”

Samsung and Google have sparred in the past over distribution. Google requires Android handset makers to pre-install 11 apps, yet Samsung often puts its own services on its phones. And the South Korean company has released devices that run on its own operating system, called Tizen, not Android.

Viv was frequently on the short-list of startups that could help larger tech companies build assistant technology. Founded four years ago by Dag Kittlaus, Adam Cheyer and Chris Brigham, the startup was working on voice technology to handle more complex queries than existing offerings.

While it drummed up considerable attention and investment, Viv has not yet released its product to the public. And some analysts are skeptical of Samsung’s ability to convert the technology into a credible service, given its mixed record with software applications.

“It will be very hard to compete with Google’s strength in data and their AI acquisitions,” said Jitendra Waral, senior analyst with Bloomberg Intelligence. “Samsung would need to prove that its AI solutions are superior to Google’s. They are handicapped in this race.”

Samsung is also focused on handling the fallout from its exploding Galaxy Note 7 phones, potentially taking management time away from its Viv integration.

But it’s a race Samsung has to join. In recent years, Samsung acquired mobile-payments and connected-device startups to keep up with Apple, Google and Amazon. Digital voice-based assistants may be more important, if they become the main way people interact with devices.

Silicon Valley titans are rushing into the space because of this potential. Amazon is trying to sign up developers for its Alexa voice technology. Apple has recently touted more Siri capabilities and opened the technology to other developers. And now Google, considered the leader in artificial intelligence, is making its own push.

“I don’t ever remember a time when every single major consumer tech company — and even enterprise companies — have been singularly focused on an identical strategy,” said Tim Tuttle, chief executive officer of MindMeld Inc., a startup working on voice interaction software. “They’re all following the exact same playbook.”

 

http://adage.com/article/digital/google-hits-a-roadblock-ai-assistant/306244/

Amazon has a secret plan to replace FedEx and UPS called “Consume the City”

Amazon has been quietly beefing up its own shipping logistics network lately.

Amazon CEO Jeff Bezos

Although Amazon publicly says it’s meant to complement existing delivery partners like FedEx and UPS, a new report by The Wall Street Journal’s Greg Bensinger and Laura Stevens says Amazon has broader ambitions.

Eventually, Amazon aims to build a full-scale shipping and logistics network that will not only ship products ordered from Amazon, but also will ship products for other retailers and consumers.

In other words, Amazon is looking to compete against delivery services like FedEx and UPS, the report says. Internally, some Amazon execs call the plan “Consume the City.”

Here are other new details around Amazon’s logistics plan, according to the report:

  • Amazon recently hired former Uber VP Tim Collins as VP of global logistics.
  • It recruited dozens of UPS and FedEx executives and hundreds of UPS employees in recent years.
  • Test trials for last-mile deliveries are running in big cities like Los Angeles, Chicago, and Miami.
  • The company also experimented with a program called “I Have Space” to store Amazon’s inventory in warehouses owned by other companies.

On top of that, InternetRetailer.com recently reported that Amazon has hired Ed Feitzinger, the former CEO of UTi Worldwide, one of the largest supply chain management companies, as VP of global logistics. Add that to the fact that Amazon has now built facilities within 20 miles of 44% of the US population, and Amazon is starting to look like a real threat to existing logistics networks.

According to Baird Equity Research, Amazon is looking at a $400 billion market opportunity by launching all these initiatives. They could also help Amazon reduce some of its shipping costs, which have been increasing every year.

People in the industry are starting to take notice, too, according to Zvi Schreiber, the CEO of Freightos, an online marketplace for international freight.

“After dominating e-commerce and warehousing, Amazon is moving farther up the supply chain and eyeing the logistics sector from all angles, particularly looking to leverage technology, capital, and manpower to make logistics more efficient,” Schreiber told Business Insider.

“Given their track record of disrupting industries — from retail to warehousing and e-commerce fulfillment to cloud computing — the trillion-dollar freight industry is certainly tracking Amazon nervously.”

http://www.businessinsider.de/amazon-secret-plan-replace-fedex-ups-called-consume-the-city-2016-9