Category Archives: Creativity

Freaks – the True Heroes – the Disruptors of the New World of Work – Unconventional Thinkers as the New Elite

So the new employee is a bit of an oddball, visibly tattooed, or a real computer nerd? Get used to the unusual, because according to experts it is precisely the so-called "freaks" who make the best leaders, and German companies will soon recognize this too. It may well be that your next boss will be somehow "different". But what exactly is a freak, and why do freaks make better leaders than the run-of-the-mill employee?

Are freaks the better leaders?
Dare to be different

Contents
1. German companies bet on the well-adjusted average
2. What do German companies look for in their leaders?
3. Who actually is a "freak"?
4. The weaknesses are the problem
5. How can peak performers be integrated?
6. Which companies are spiky leaders suited for?

German companies bet on the well-adjusted average

So far, Germany's executive floors have held no big surprises: conformist suit-wearers fill the leading positions, with a woman here and there, but not too many. Indeed, most German companies look for compliant, hard-working employees for their leadership positions. Why? Because they pose no risk and promise stability and a high degree of reliability. This is why the average gets furthest in a career.

Freaks, by contrast, tend to celebrate their successes as entrepreneurs, where they occasionally turn familiar market mechanisms upside down.

But why do German companies make nothing of precisely this ability, of the unconventional thinkers, the risk-takers, the true geniuses? Real talent and outstanding strengths are what a leader should bring. On that, at least, many experts agree…

What do German companies look for in their leaders?

A survey published on Statista gives unsurprising answers: 100 percent of the companies surveyed consider communication skills especially important in a leader, 99 percent also value high motivation, and 98 percent value a proven track record within the company.

Chart: "Do you consider the following characteristics important in leaders?" (Source: Statista)

Personnel consultant Uta von Boyen knows the subject inside out: companies look for all-rounders, for steadiness and mediocrity. School and university grades, assessment centers, and standardized CVs are the selection criteria for new hires, and leaders above all have to be willing to perform. The principle of "command and obedience" still governs many executive floors, yet it really has no place in a modern economy. In the ever-faster age of globalization and digitalization, companies must adopt novel business strategies to hold their own against what is now global competition. And for this, says Uta von Boyen, freaks in particular make the better leaders.

Who actually is a "freak"?

Experts use the term "freak" here for all those employees who fall outside the usual mold. They are unconventional thinkers, people with special talents and pronounced personalities. Freaks bring unrest into a company, act as visionaries, and often have difficulty fitting into existing structures. They are therefore also called "peak performers" or "spiky leaders". They are precisely the people who work unconventionally, produce new ideas, and bring outstanding strengths as well as weaknesses.

The weaknesses are the problem

But precisely here lies the companies' main problem with peak performers: they have weaknesses. And weaknesses are not tolerated in the modern working world. The focus is therefore always on minimizing weaknesses rather than on fostering strengths.

Companies therefore see their supposedly best employees in well-adjusted "general managers". This is a process that begins in school, in some places even in kindergarten or preschool. Whoever falls outside the mold gets tutoring or is labeled as difficult. Supposed ADHD cases keep rising merely because a child will not sit still at a school desk for eight hours. Whoever shows special gifts or original thinking receives no further encouragement. Instead, lessons are pushed through rigidly and pupils are leveled down to the golden mean. Why? Because the average means the path of least resistance.

Spiky leaders, by contrast, must be integrated into a company with considerable effort, assuming they do not emerge from school and university already disillusioned and demotivated. Yet history has taught us again and again that precisely these peak performers are of great value to society and the economy. Historians and evolutionary biologists agree that they have repeatedly secured the survival of humanity. And would you not have taken Steve Jobs for a real freak at the start of his career, too?

How can peak performers be integrated?

The biggest obstacle to harnessing the exceptional talents of peak performers, then, lies in integrating them successfully into the company. To do so, a company must rethink its leadership structures and develop new concepts. Spiky leaders usually work best in small teams, where they can collaborate with the well-adjusted average. After all, a company functions as poorly with only "freaks" as with none at all. The goal is efficient collaboration between peak performers and run-of-the-mill employees. Composing these mixed teams is a real challenge, especially since the unconventional are often difficult on a personal level and are perceived as "prickly". Yet this very quality brings a positive dynamic into a team and a more productive working atmosphere. Dynamics, after all, produce results; standstill does not. The point, then, is to adapt a company's organizational form to the integration of peak performers:

  • Rigid structures must be loosened up.
  • The spiky leader must enjoy individual freedom.
  • His talents and strengths must be efficiently fostered and deployed in a targeted way.
  • The weaknesses of peak performers must be caught early.

Which companies are spiky leaders suited for?

The point is not for every company to place at least one spiky leader in every team. On the contrary: whether a peak performer suits your company, and who, how many, and in which positions, depends on your organizational form and on the company's strategic orientation. Peak performers are often best deployed in small numbers in key positions. Moreover, there should be at most one "freak" per team. That said, getting a team to accept a peak performer is not always easy, so they represent a considerable risk, one that only very few companies have so far been willing to take. But whoever understands now that holistic leadership in the future cannot do without unconventional thinkers is a big step ahead of the competition in a globalized world.

What do you think of peak performers? Have you already had experience with them, or would you perhaps even describe yourself as one? It is and remains a fascinating topic…

https://arbeits-abc.de/querdenker-als-fuehrungskraft


Delight Users with Animation

“Delight” is a word that we’re hearing and using more often to describe pleasurable moments in our products. Delight is the magic that makes us fall in love with a product. It’s a core element to strive for when designing. When it comes to providing pleasure or delight in our websites and apps, animations contribute a lot.

WHY DELIGHTFUL ANIMATION IS IMPORTANT

Digital design plays a crucial role in how customers experience a product. Modern design is highly focused on usability, because usability allows people to easily accomplish their goals. However, designing for the user experience has a lot more to it than making a usable product. Good design is pleasurable and seductive. Good design is delightful. “At this point in experience design’s evolution, satisfaction ought to be the norm, and delight ought to be the goal,” says Stephen Anderson. Animation can help you achieve this goal.

WHEN TO USE DELIGHTFUL ANIMATION

Just like any other design element, animation should contribute to the user flow. Delightful animations are pleasurable for the user without detracting from the usability of the app. There are two cases where implementing delightful animation in your digital designs can strengthen UX:

  • Engaging and entertaining. Entertaining animation draws attention to our products by creating a strong first impression. It can make our products more memorable and more shareable.
  • Baking emotion in design. Showing the human side of your business or product can be a very powerful way for your audience to identify and empathize with you. The aim of emotional design is to create happiness. You want people to feel happy when they use your product.

Let’s look at a few ways animation can help create delightful moments:

1. KEEP USERS INTERESTED DURING LOADING

Loading time is an unavoidable situation for most digital products. But who says that loading should be boring? When we can’t shorten the line, we can certainly make the wait more pleasant. To ensure people don’t get bored while waiting for something to happen, you can offer them some distraction: this can be something fun or something unexpected. While animation won’t solve the problem, it definitely makes waiting less of a problem: fine animation can distract your users and make them ignore long loading times.

Credits: Dribbble

2. MAKE A GREAT FIRST IMPRESSION

First impressions count: people judge things based on how they look. Good animation throughout the onboarding flow has a strong impact on how first-time users will engage with the app. A good first impression isn’t just about usability, it’s also about personality. If your first few app screens look a little different from similar products, you’ve shown the user that your entire product experience will likely be different too. For example, animating an illustration for a new feature can educate the user about the feature in a memorable way.

Credits: Dribbble

3. MAKE YOUR INTERFACES FEEL MORE ALIVE

Creative animation can make your user experience truly delightful: it can transform familiar interactions into something much more enjoyable, and it has the power to encourage users to actually interact. Attention to fine movements can increase the usability, and therefore the desirability, of the product.

4. INCORPORATE EMOTIONAL INTERACTIONS

Focusing on user emotions plays a huge role in UI interactions. As Aarron Walter said in his book Designing for Emotion: “Personality is the mysterious force that attracts us to certain people and repels us from others.” Using animation you can establish an emotional connection with your users, and remind them that there are real humans behind the design. ReadMe’s animations, for example, are full of emotion.

5. HELP USERS RECOVER FROM UNEXPECTED ERRORS

‘Errors’ happen. They happen in our apps and they happen in our life. Sometimes they happen because we made mistakes. Sometimes because an app failed. Whatever the cause, these errors — and how they are handled — can have a huge impact on the way users experience your app. Well-crafted error handling can turn a moment of failure into a moment of delight. When displaying an unexpected error, use it as an opportunity to delight with animation.

Credits: Dribbble

6. MAKE A COMPLEX TASK FEEL EASIER

Animation is able to transform a complex task into an inviting experience. Let’s take MailChimp for inspiration. What makes MailChimp awesome is its smooth functionality wrapped in cheeky humor and friendly animation. When you’re about to send out your first campaign, the accompanying animation acknowledges how stressful that moment is. MailChimp brings empathy to the design: by combining animated cartoons with tongue-in-cheek messages like “This is your moment of glory,” it softens the nervousness of sending your first emails.

7. BREATHE FUN INTO THE INTERACTIONS

People love to discover treats in interfaces just as they do in real life. The joy is more than the treat, it’s the discovery of the treat and the feeling that someone took the time to think of you.

Credits: Dribbble

People will forget what you said, people will forget what you did, but people will never forget how you made them feel.—Maya Angelou

Never underestimate the power of delight to improve the user experience. The difference between products we love and those we simply tolerate is often the delight we have with them.

Of course, before your application can create an emotional connection with the user, it must get the basics right. Once it does, make your product a joy to use by connecting feelings with features!

https://www.webdesignerdepot.com/2017/04/7-ways-to-delight-users-with-animation

The 15 coolest concept cars revealed this year so far

Automakers are pushing bold, innovative ideas forward with their latest concept cars.


Whether it’s a car with nothing inside but a sofa and TV or an electric car resembling the Batmobile, concept cars give us a glimpse of how technology will shape the future of driving.

1. Volkswagen unveiled a microbus concept meant to give a modern spin to the classic Volkswagen bus at the Consumer Electronics Show in January.

Called the BUDD-e, the electric car gets up to 373 miles of range.

The doors open with a simple wave of the hand, and you can control the console’s interface by making hand gestures.

You can also use the interface to control things like the temperature and lighting in your house.

2. The big unveiling to come out of the Consumer Electronics Show was Faraday Future’s concept car, the FFZERO1.

It can go from zero to 60 miles per hour in under three seconds.

Four motors placed over each wheel give the car a top speed of 200 miles per hour. It’s also capable of learning the driver’s preferences and automatically adjusting the internal settings.

Although Faraday Future plans to release a production car in 2020, the FFZERO1 is just a show car.

3. LeEco, a Chinese tech company, unveiled its Tesla killer concept car at the Consumer Electronics Show.

LeEco is also a partner of Faraday Future.

Called the LeSEE, the car has a top speed of 130 miles per hour. It also has an autonomous mode.

The steering wheel will retract back into the dashboard when the car is in autonomous mode.

4. The Lincoln Navigator concept car comes with giant gullwing doors. It was unveiled at the New York Auto Show in March.

We won’t be seeing those doors in the production model of a Lincoln Navigator anytime soon, unfortunately.

The six seats inside can be adjusted 30 different ways, and there are entertainment consoles on the back of four seats so passengers can watch TV or play games.

There’s even a built-in wardrobe management system in the trunk, so you can turn your car into a part-time walk-in closet.

5. BMW’s Vision Next 100 was unveiled at the Geneva Motor Show in March. It comes with an AI system called Companion that can learn your driving preferences and adjust accordingly in advance.

The side panels of the Next 100 are made of carbon fiber.

The steering wheel will retract into the dashboard when the car is in autonomous mode.

There’s also a heads-up display that will show information about your route on the windshield.

6. BMW added to its Vision 100 line in June. Here we see the Mini Vision Next 100 that was built for ridesharing.

The car can recognize who you are when it comes to pick you up and will greet you with personalized lighting.

The steering wheel will shift into the center of the console when the car is in autonomous mode.

The BMW also comes with a heads-up display that will show information about your route on the windshield.

7. The last addition to the BMW Vision 100 line is this futuristic Rolls-Royce.

The Rolls-Royce is also completely autonomous.

Because the car envisions a completely autonomous future, the interior is composed entirely of a two-person, silk sofa and a giant OLED TV.

There’s also a secret compartment in the car for storing your luggage.

8. McLaren unveiled a stunning concept car called the 675LT JVCKENWOOD at the Consumer Electronics Show.

The McLaren 675LT comes with a wireless networking system so it can communicate with other cars on the road about traffic and accidents.

The car comes with a steering wheel that looks like a video game controller!

The controller is meant to help the driver control the heads-up display while in motion.

9. Italian automaker Pininfarina unveiled a beautiful hydrogen-powered concept car at the Geneva Motor Show.

The car, called H2 Speed, refuels in just three minutes.

It has a top speed of 186 miles per hour and can go from zero to 62 miles per hour in 3.4 seconds.

The car can regenerate energy from braking.

10. Audi unveiled its connected mobility concept car in April. There’s a longboard integrated in the bumper in case you want to roll from the parking lot to work.

It conveniently pulls out when you need it and is stored in the bumper when you’d rather travel on foot!

The car’s infotainment system can calculate the fastest route based on real-time data and will suggest using the longboard if that seems faster.

It will even show you the best parking spot to make the longboard portion of your commute shorter.

11. Aston Martin showed off a beautiful concept car in May called the Vanquish Zagato Concept.

All of the body panels in the Vanquish Zagato are made of carbon fiber.

Aston Martin made the car with Italian auto design company Zagato. The two have worked together since 1960.

There aren’t too many details on this car since it’s just a concept, but it sure is pretty.

12. Jeep showed off a crazy-looking Wrangler in March at the Easter Jeep Safari, an off-road rally.

That is a monster car.

The Wrangler Trailcat concept had to be stretched by 12 inches to accommodate the massive engine providing 707 horsepower.

It comes with racing seats from a Dodge Viper.

13. Toyota unveiled a strange-looking concept car dubbed the uBox to appeal to Generation Z in April.

The uBox is all-electric.

The interior is entirely customizable so it can transform into a mobile office or fit more people.

It also comes with a nice curved glass roof that lets plenty of light inside.

14. French automaker Renault showed off a stunning, high-tech sports car dubbed the Alpine Vision in February.

The Alpine Vision is a two-door, two-seater sports car.

It can go from zero to 62 miles per hour in 4.5 seconds.

The interior is decked out with an LCD gauge cluster in the center console.

15. Lastly, Croatian automaker Rimac designed a stunning, all-electric concept car for the Geneva Motor Show.

Called the Concept_One, it can accelerate from zero to 62 miles per hour in just 2.6 seconds.

The Concept_One can reach a top speed of 185 miles per hour.

It has a regenerative braking system that recovers energy whenever the car brakes.

http://www.businessinsider.com/coolest-concept-cars-revealed-in-2016-2016-6

Machine Learning and Artificial Intelligence: Soon We Won’t Program Computers. We’ll Train Them Like Dogs


BEFORE THE INVENTION of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.

Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.

The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace.

This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded.

Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.

In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)

But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.

Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
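
To make the contrast concrete, here is a toy sketch (my example, not the article's) of the two approaches in Python with scikit-learn: a hand-written rule versus a classifier trained on labeled examples. The feature names, values, and threshold are invented for illustration.

```python
# Toy contrast between "writing rules" and "training" (illustrative only).
from sklearn.linear_model import LogisticRegression

# Hypothetical features per photo: [ear_pointiness, whisker_length].
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.3, 0.2]]
y = [1, 1, 0, 0]  # 1 = cat, 0 = not a cat

def rule_based(features):
    # Traditional programming: explicit, hand-coded thresholds.
    return 1 if features[0] > 0.5 and features[1] > 0.5 else 0

# Machine learning: show the system labeled examples and let it find the
# decision boundary itself; misclassifications are fixed with more data,
# not by rewriting the rule.
model = LogisticRegression().fit(X, y)

print(rule_based([0.7, 0.6]))       # 1, because the rule happens to fire
print(model.predict([[0.7, 0.6]]))  # learned prediction: [1]
```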

This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”

But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.

If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.

ANDY RUBIN IS an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”

Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.


Tech’s ‘Frightful 5’ Will Dominate Digital Life for Foreseeable Future

There’s a little parlor game that people in Silicon Valley like to play. Let’s call it Who’s Losing?

There are currently four undisputed rulers of the consumer technology industry: Amazon, Apple, Facebook and Google, now a unit of a parent company called Alphabet. And there’s one more, Microsoft, whose influence once seemed to be on the wane, but which is now rebounding.

So which of these five is losing? A year ago, it was Google that looked to be in a tough spot as its ad business appeared more vulnerable to Facebook’s rise. Now, Google is looking up, and it’s Apple, hit by rising worries about a slowdown in iPhone sales, that may be headed for some pain. Over the next couple of weeks, as these companies issue earnings that show how they finished 2015, the state of play may shift once more.

But don’t expect it to shift much. Asking “who’s losing?” misses a larger truth about how thoroughly Amazon, Apple, Facebook, Google and Microsoft now lord over all that happens in tech.

Who’s really losing? In the larger picture, none of them — not in comparison with the rest of the tech industry, the rest of the economy and certainly not in the influence each of them holds over our lives.

Tech people like to picture their industry as a roiling sea of disruption, in which every winner is vulnerable to surprise attack from some novel, as-yet-unimagined foe. “Someone, somewhere in a garage is gunning for us,” Eric Schmidt, Alphabet’s executive chairman, is fond of saying.

But for much of the last half-decade, most of these five giants have enjoyed a remarkable reprieve from the bogeymen in the garage. And you can bet on them continuing to win. So I’m coining the name the Frightful Five.

It’s not just because I’m a Tarantino fan. By just about every measure worth collecting, these five American consumer technology companies are getting larger, more entrenched in their own sectors, more powerful in new sectors and better insulated against surprising competition from upstarts.

Though competition between the five remains fierce — and each year, a few of them seem up and a few down — it’s becoming harder to picture how any one of them, let alone two or three, may cede their growing clout in every aspect of American business and society.

“The Big Five came along at a perfect time to roll up the user base,” said Geoffrey G. Parker, a business professor at Tulane University and the co-author of “Platform Revolution,” a forthcoming book that explains some of the reasons these businesses may continue their dominance. “These five rode that perfect wave of technological change — an incredible decrease in the cost of I.T., much more network connectivity and the rise of mobile phones. Those three things came together, and there they were, perfectly poised to grow and take advantage of the change.”

Mr. Parker notes the Big Five’s power does not necessarily prevent newer tech companies from becoming huge. Uber might upend the transportation industry, Airbnb could rule hospitality and, as I argued last week, Netflix is bent on consuming the entertainment business. But if such new giants do come along, they’re likely to stand alongside today’s Big Five, not replace them.

Indeed, the Frightful Five are so well protected against start-ups that in most situations, the rise of new companies only solidifies their lead.

Consider that Netflix hosts its movies on Amazon’s cloud, and Google’s venture capital arm has a huge investment in Uber. Or consider all the in-app payments that Apple and Google get from their app stores, and all the marketing dollars that Google and Facebook reap from start-ups looking to get you to download their stuff.

This gets to the core of the Frightful Five’s indomitability. They have each built several enormous technologies that are central to just about everything we do with computers. In tech jargon, they own many of the world’s most valuable “platforms” — the basic building blocks on which every other business, even would-be competitors, depend.

These platforms are inescapable; you may opt out of one or two of them, but together, they form a gilded mesh blanketing the entire economy.

The Big Five’s platforms span so-called old tech — Windows is still the king of desktops, Google rules web search — and new tech, with Google and Apple controlling mobile phone operating systems and the apps that run on them; Facebook and Google controlling the Internet advertising business; and Amazon, Microsoft and Google controlling the cloud infrastructure on which many start-ups run.

Amazon has a shopping and shipping infrastructure that is becoming central to retailing, while Facebook keeps amassing greater power in that most fundamental of platforms: human social relationships.

Many of these platforms generate what economists call “network effects” — as more people use them, they keep getting more indispensable. Why do you chat using Facebook Messenger or WhatsApp, also owned by Facebook? Because that’s where everyone else is.

Their platforms also give each of the five an enormous advantage when pursuing new markets. Look how Apple’s late-to-market subscription streaming music service managed to attract 10 million subscribers in its first six months of operation, or how Facebook leveraged the popularity of its main app to push users to download its stand-alone Messenger app.

Then there’s the data buried in the platforms, also a rich source for new business. This can happen directly — for instance, Google can tap everything it learns about how we use our phones to create an artificial intelligence engine that improves our phones — and in more circuitous ways. By watching what’s popular in its app store, Apple can get insight into what features to add to the iPhone.

“In a way, a lot of the research and development costs are being borne by companies out of their four walls, which allows them to do better product development,” Mr. Parker said.

This explains why these companies’ visions are so expansive. In various small and large ways, the Frightful Five are pushing into the news and entertainment industries; they’re making waves in health care and finance; they’re building cars, drones, robots and immersive virtual-reality worlds. Why do all this? Because their platforms — the users, the data and all the money they generate — make these far-flung realms seem within their grasp.

Which isn’t to say these companies can’t die. Not long ago people thought IBM, Cisco Systems, Intel and Oracle were unbeatable in tech; they’re all still large companies, but they’re far less influential than they once were.

And a skeptic might come up with significant threats to the five giants. One possibility might be growing competition from abroad, especially Chinese hardware and software companies that are amassing equally important platforms. Then there’s the threat of regulation or other forms of government intervention. European regulators are already pursuing several of the Frightful Five on antitrust and privacy grounds.

Even with these difficulties, it’s unclear if the larger dynamic may change much. Let’s say that Alibaba, the Chinese e-commerce company, eclipses Amazon’s retail business in India — well, O.K., so then it satisfies itself with the rest of the world.

Government intervention often limits one giant in favor of another: If the European Commission decides to fight Android on antitrust grounds, Apple and Microsoft could be the beneficiaries. When the Justice Department charged Apple with orchestrating a conspiracy to raise e-book prices, who won? Amazon.

So get used to these five. Based on their stock prices this month, the giants are among the top 10 most valuable American companies of any kind. Apple, Alphabet and Microsoft are the top three; Facebook is No. 7, and Amazon is No. 9. Wall Street gives each high marks for management; and three of them — Alphabet, Amazon and Facebook — are controlled by founders who don’t have to bow to the whims of potential activist investors.

So who’s losing? Not one of them, not anytime soon.
JANUARY 20, 2016

Farhad Manjoo

State of the Art: http://mobile.nytimes.com/2016/01/21/technology/techs-frightful-5-will-dominate-digital-life-for-foreseeable-future.html

Google Creates Terminator-Like Email Response System

Source: http://googleresearch.blogspot.co.at/2015/11/computer-respond-to-this-email.html


Google has created a Terminator-2-like e-mail response system, and the Google Research post details the functionality:

Machine Intelligence for You

What I love about working at Google is the opportunity to harness cutting-edge machine intelligence for users’ benefit. Two recent Research Blog posts talked about how we’ve used machine learning in the form of deep neural networks to improve voice search and YouTube thumbnails. Today we can share something even wilder — Smart Reply, a deep neural network that writes email.

I get a lot of email, and I often peek at it on the go with my phone. But replying to email on mobile is a real pain, even for short replies. What if there were a system that could automatically determine if an email was answerable with a short reply, and compose a few suitable responses that I could edit or send with just a tap?

Some months ago, Bálint Miklós from the Gmail team asked me if such a thing might be possible. I said it sounded too much like passing the Turing Test to get our hopes up… but having collaborated before on machine learning improvements to spam detection and email categorization, we thought we’d give it a try.

There’s a long history of research on both understanding and generating natural language for applications like machine translation. Last year, Google researchers Oriol Vinyals, Ilya Sutskever, and Quoc Le proposed fusing these two tasks in what they called sequence-to-sequence learning. This end-to-end approach has many possible applications, but one of the most unexpected that we’ve experimented with is conversational synthesis. Early results showed that we could use sequence-to-sequence learning to power a chatbot that was remarkably fun to play with, despite having included no explicit knowledge of language in the program.

Obviously, there’s a huge gap between a cute research chatbot and a system that I want helping me draft email. It was still an open question if we could build something that was actually useful to our users. But one engineer on our team, Anjuli Kannan, was willing to take on the challenge. Working closely with both Machine Intelligence researchers and Gmail engineers, she elaborated and experimented with the sequence-to-sequence research ideas. The result is the industrial strength neural network that runs at the core of the Smart Reply feature we’re launching this week.

How it works

A naive attempt to build a response generation system might depend on hand-crafted rules for common reply scenarios. But in practice, any engineer’s ability to invent “rules” would be quickly outstripped by the tremendous diversity with which real people communicate. A machine-learned system, by contrast, implicitly captures diverse situations, writing styles, and tones. These systems generalize better, and handle completely new inputs more gracefully than brittle, rule-based systems ever could.

Diagram by Chris Olah

Like other sequence-to-sequence models, the Smart Reply System is built on a pair of recurrent neural networks, one used to encode the incoming email and one to predict possible responses. The encoding network consumes the words of the incoming email one at a time, and produces a vector (a list of numbers). This vector, which Geoff Hinton calls a “thought vector,” captures the gist of what is being said without getting hung up on diction — for example, the vector for “Are you free tomorrow?” should be similar to the vector for “Does tomorrow work for you?” The second network starts from this thought vector and synthesizes a grammatically correct reply one word at a time, like it’s typing it out. Amazingly, the detailed operation of each network is entirely learned, just by training the model to predict likely responses.

One challenge of working with emails is that the inputs and outputs of the model can be hundreds of words long. This is where the particular choice of recurrent neural network type really matters. We used a variant of a “long short-term memory” network (or LSTM for short), which is particularly good at preserving long-term dependencies, and can home in on the part of the incoming email that is most useful in predicting a response, without being distracted by less relevant sentences before and after.
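
As a rough sketch of this architecture (assumed sizes and token ids; an illustration in PyTorch, not Google's production system), one LSTM encodes the email into a "thought vector" and a second LSTM emits a reply one word at a time:

```python
import torch
import torch.nn as nn

VOCAB, EMB, HIDDEN = 1000, 64, 128  # hypothetical sizes

embed = nn.Embedding(VOCAB, EMB)
encoder = nn.LSTM(EMB, HIDDEN, batch_first=True)
decoder = nn.LSTM(EMB, HIDDEN, batch_first=True)
to_vocab = nn.Linear(HIDDEN, VOCAB)

email = torch.randint(0, VOCAB, (1, 12))   # incoming email as word ids
_, thought = encoder(embed(email))         # (h, c): the "thought vector"

word = torch.tensor([[1]])                 # start-of-reply token id
reply, state = [], thought
for _ in range(10):                        # synthesize the reply word by word
    out, state = decoder(embed(word), state)
    word = to_vocab(out[:, -1]).argmax(-1, keepdim=True)
    reply.append(word.item())
print(reply)  # untrained, so gibberish; training would make these likely replies
```

In practice the greedy argmax would be replaced by a beam search over likely words, and the model would of course need training on real message-reply pairs.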

Of course, there’s another very important factor in working with email, which is privacy. In developing Smart Reply we adhered to the same rigorous user privacy standards we’ve always held — in other words, no humans reading your email. This means researchers have to get machine learning to work on a data set that they themselves cannot read, which is a little like trying to solve a puzzle while blindfolded — but a challenge makes it more interesting!

Getting it right

Our first prototype of the system had a few unexpected quirks. We wanted to generate a few candidate replies, but when we asked our neural network for the three most likely responses, it’d cough up triplets like “How about tomorrow?” “Wanna get together tomorrow?” “I suggest we meet tomorrow.” That’s not really much of a choice for users. The solution was provided by Sujith Ravi, whose team developed a great machine learning system for mapping natural language responses to semantic intents. This was instrumental in several phases of the project, and was critical to solving the “response diversity problem”: by knowing how semantically similar two responses are, we can suggest responses that are different not only in wording, but in their underlying meaning.

Another bizarre feature of our early prototype was its propensity to respond with “I love you” to seemingly anything. As adorable as this sounds, it wasn’t really what we were hoping for. Some analysis revealed that the system was doing exactly what we’d trained it to do, generate likely responses — and it turns out that responses like “Thanks”, “Sounds good”, and “I love you” are super common — so the system would lean on them as a safe bet if it was unsure. Normalizing the likelihood of a candidate reply by some measure of that response’s prior probability forced the model to predict responses that were not just highly likely, but also had high affinity to the original message. This made for a less lovey, but far more useful, email assistant.
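
The post doesn't spell out the math, but the normalization described above amounts to scoring a candidate reply $r$ for an incoming message $m$ as something like

$$s(r \mid m) = \log p(r \mid m) - \alpha \log p(r),$$

where $p(r)$ is the reply's prior probability and $\alpha$ (a tuning weight; my notation, not Google's) controls how strongly generic, always-plausible replies like "Thanks" are discounted.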

Give it a try

We’re actually pretty amazed at how well this works. We’ll be rolling this feature out on Inbox for Android and iOS later this week, and we hope you’ll try it for yourself! Tap on a Smart Reply suggestion to start editing it. If it’s perfect as is, just tap send. Two-tap email on the go — just like Bálint envisioned.

Terminator Response

This blog post may or may not have actually been written by a neural network.

 

Accelerometer and Gyroscope Sensor Data Can Create a Serious Privacy Breach

Source: http://www.hindawi.com/journals/ijdsn/2013/272916/

 

International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 272916, 11 pages
http://dx.doi.org/10.1155/2013/272916
Research Article

A Study of Mobile Sensing Using Smartphones

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Received 8 December 2012; Accepted 15 January 2013

Academic Editor: Chao Song

Copyright © 2013 Ming Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Traditional mobile sensing-based applications require extra equipment that is unrealistic for most users. Smartphones have developed rapidly in recent years and are becoming an indispensable part of daily life. The sensors embedded in them open up many possibilities for mobile applications, and these applications are helping and changing the way we live. In this paper, we analyze and discuss existing mobile applications; after that, future directions are pointed out.

1. Introduction

The word sensing builds a bridge between the real world and the virtual world; with the help of various sensors, man-made devices are able to feel the world like God-made creatures do. The bell may be the first generation of sensors: people tie a bell to a string so that when the string vibrates, the bell rings. The bell is a simple but effective sensor containing two parts, detection and processing: when it detects a vibration, it generates a period of ringing whose volume is proportional to the amplitude of the vibration. However, the bell is the kind of sensor that connects the real world to the real world. With the development of electronic devices, a new man-made world has been built. This world is called the virtual world; many complicated calculations run in this world so that people in the real world can enjoy their lives. The virtual world needs data to keep running, and inputting data by human operation alone is far from enough. A sensor is a way to sense the world and translate the sensed information into the data forms of the virtual world; therefore, sensing has become an important topic in both research and industry.

Early sensing-based applications were mostly used for research purposes or in specific areas. References [1, 2] propose localization methods for finding odor sources using gas sensors and anemometric sensors. Reference [3] uses a number of sensors embedded in a cyclist’s bicycle to gather quantitative data about the cyclist’s rides; this information is useful for mapping the cycling experience. Reference [4] uses body-worn sensors to build an activity recognition system, and [5] uses body-worn sensors for healthcare monitoring. Reference [6] proposes a robotic fish carrying sensors for mobile sensing. In Wireless Sensor Networks (WSN), there are also many sensing-based applications: references [7, 8] deploy wireless sensors to track the movement of mobile objects, and [9, 10] deploy sensors for monitoring volcanoes.

People-centric sensing, introduced in [11], uses smartphones for mobile sensing. Smartphones are very popular and have become constant companions in recent years; they are embedded with various sensors that enable many interesting applications. Unlike special-purpose sensors built for specific domains, the sensors in smartphones offer almost unlimited possibilities for applications that help and change people’s lives; moreover, using a smartphone instead of dedicated equipment makes an application easier for users to accept.

In this paper, we will discuss some existing interesting sensing-based applications using smartphones and give some possible future directions. Section 2 gives detailed descriptions of sensors embedded in modern smartphones; Section 3 introduces some sensing-based applications; Section 4 gives a conclusion and future directions.

2. Sensors in Smartphones

As Figure 1 shows, modern smartphones contain several kinds of sensors. The most common ones, found in most smartphones, are the accelerometer, gyroscope, magnetometer, microphone, and camera. In this section, we discuss the characteristics of these sensors.

Figure 1: Sensors inside of smartphones.

2.1. Accelerometer

An accelerometer measures proper acceleration, which is the acceleration it experiences relative to free fall and is the acceleration felt by people and objects. To put it another way, at any point in spacetime the equivalence principle guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that frame. Such accelerations are popularly measured in terms of g-force [12].

The principle of the accelerometer is inertial force. Imagine a box with six walls in which a ball floats in the middle because no force acts on it (the box may be in outer space) [13]. When the box moves to the right, the ball hits the left wall. The left wall is pressure sensitive and can measure the force of the hit; therefore, the acceleration can be measured. Because of gravity, when the box is placed on Earth, the ball constantly presses on the bottom wall, producing a constant acceleration of about 9.8 m/s². This gravity component distorts any measurement of the speed or displacement of an object in three dimensions and must be subtracted before any such measurement. However, gravity can also be exploited to detect the rotation of a device. When a user rotates a smartphone, the content being viewed switches between portrait and landscape. As Figure 2 shows, when the screen is in portrait orientation, the y-axis senses gravity; when the screen is in landscape orientation, the x-axis senses gravity. Thanks to this, users can rotate their screens without affecting their reading experience.

Figure 2: Screen rotation.

In theory, the displacement can be calculated as

$$s(t) = s_0 + v_0 t + \tfrac{1}{2} a t^2, \tag{1}$$

where $s$ is the displacement, $s_0$ the initial displacement, $v_0$ the initial velocity, and $a$ the acceleration.

Equation (1) assumes continuous acceleration; the acceleration we get in a real environment is discrete due to sampling. To calculate the displacement from discrete values, the piecewise form (2) has to be used:

$$a(t) = a_i, \quad t \in [\,i\,\Delta t,\ (i+1)\,\Delta t\,), \tag{2}$$

where $a(t)$ is the continuous acceleration, $a_i$ the $i$-th sample, and $\Delta t$ the time increment.

Then the velocity and displacement can be calculated as follows [14]:

$$v_i = v_{i-1} + a_i\,\Delta t, \qquad s_i = s_{i-1} + v_i\,\Delta t. \tag{3}$$

The value the accelerometer returns is three-dimensional, as Figure 2 shows; therefore the measured acceleration is calculated as

$$\mathbf{a} = \mathbf{a}_x + \mathbf{a}_y + \mathbf{a}_z, \tag{4}$$

where $\mathbf{a}_x$, $\mathbf{a}_y$, and $\mathbf{a}_z$ are the component vectors along the three axes.
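
A small numerical sketch of equations (2)–(4) in Python (hypothetical sample values; assumes gravity has already been subtracted as described above):

```python
import numpy as np

def integrate(accel, dt, v0=0.0, s0=0.0):
    """Integrate discrete acceleration samples to velocity and displacement,
    per equation (3): v_i = v_{i-1} + a_i*dt, s_i = s_{i-1} + v_i*dt."""
    v = v0 + np.cumsum(accel) * dt
    s = s0 + np.cumsum(v) * dt
    return v, s

# Hypothetical 1-D samples at 100 Hz (m/s^2), gravity already removed.
a = np.array([0.0, 0.5, 0.5, 0.2, -0.1, -0.3])
v, s = integrate(a, dt=0.01)
print(v[-1], s[-1])

# For the 3-D reading of equation (4), the magnitude of the measured vector:
sample = np.array([0.1, 0.2, 9.8])
print(np.linalg.norm(sample))  # ~9.8 when the phone lies flat
```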

2.2. Gyroscope

The accelerometer is good at measuring the displacement of an object; however, it is inaccurate for measuring the spin of the device, which is an easy task for a gyroscope.

A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. Mechanically, a gyroscope is a spinning wheel or disk in which the axle is free to assume any orientation. Although this orientation does not remain fixed, it changes in response to an external torque much less, and in a different direction, than it would without the large angular momentum associated with the disk’s high rate of spin and moment of inertia. Because mounting the device in a gimbal minimizes external torque, the device’s orientation remains nearly fixed regardless of the mounting platform’s motion [15].

The gyroscope is a very sensitive device and is good at detecting spin. Like the accelerometer, the gyroscope returns three-dimensional values; the coordinate system is shown in Figure 2. The value the gyroscope returns is the angular velocity, which indicates how fast the device rotates around its axes:

$$\boldsymbol{\omega} = \boldsymbol{\omega}_x + \boldsymbol{\omega}_y + \boldsymbol{\omega}_z, \tag{5}$$

where $\boldsymbol{\omega}$ is the angular velocity and $\boldsymbol{\omega}_x$, $\boldsymbol{\omega}_y$, $\boldsymbol{\omega}_z$ are the angular-velocity vectors around the x-, y-, and z-axes.
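
The same discrete integration used for the accelerometer applies here: summing angular-velocity samples gives the rotation angle. A minimal sketch (invented sample values):

```python
import numpy as np

# Integrate angular velocity (rad/s) around the z-axis to get the heading
# change of the device over time.
omega_z = np.array([0.0, 0.1, 0.2, 0.2, 0.1])  # hypothetical gyro samples
dt = 0.01                                       # 100 Hz sampling
heading = np.cumsum(omega_z) * dt               # theta_i = theta_{i-1} + w_i*dt
print(np.degrees(heading[-1]))                  # total rotation in degrees
```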

2.3. Magnetometer

A magnetometer is a measuring instrument used to measure the strength and, in some cases, the direction of magnetic fields [16]. The accelerometer and gyroscope are able to detect the direction of a movement; however, this direction is relative, expressed in the coordinate system the smartphone uses. Sometimes different smartphones need to synchronize their directions; for this, a magnetometer is needed to obtain an absolute direction (one that follows the Earth’s coordinate system).

The magnetometer returns three-dimensional values; if the device is held horizontally, the orientation angle can be calculated as

$$\theta = \operatorname{atan2}(m_y,\, m_x), \tag{6}$$

where $m_x$ and $m_y$ are the magnetic-field components along the x- and y-axes.

So far we have introduced three types of sensors: accelerometer, gyroscope, and magnetometer. With the help of these three types of sensors, a smartphone can estimate all kinds of its own movements. In a real environment, however, measurement errors occur all the time; below we describe a way to correct the offset error of the magnetometer, and the other two sensors can be corrected in the same way.

First, place the magnetometer horizontally and rotate it at a uniform speed, recording the maximum and minimum values of $m_x$ and $m_y$; then put the z-axis horizontal and rotate again to record $m_z$. The offset on each axis is the midpoint of the observed extremes:

$$o_k = \frac{\max(m_k) + \min(m_k)}{2}, \quad k \in \{x, y, z\}. \tag{7}$$
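
A sketch of this offset calibration and of the heading formula (6) in Python; the readings below are invented, and a real calibration would use many samples from a full rotation:

```python
import numpy as np

def hard_iron_offset(samples):
    """Per equation (7): the offset on each axis is the midpoint of the
    extremes recorded while rotating the device. samples: (N, 3) readings."""
    return (samples.max(axis=0) + samples.min(axis=0)) / 2.0

def heading(mx, my):
    # Orientation angle for a horizontally held device, per equation (6);
    # atan2 handles all four quadrants.
    return np.degrees(np.arctan2(my, mx))

readings = np.array([[30.0, 5.0, -12.0], [-10.0, 45.0, -12.0],
                     [-50.0, 5.0, -12.0], [-10.0, -35.0, -12.0]])
off = hard_iron_offset(readings)     # [-10, 5, -12] for these values
m = readings[0] - off                # corrected reading
print(off, heading(m[0], m[1]))      # heading of the first sample
```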

2.4. Microphones

The microphone is a very common sensor, usually used for recording sound. The problem is how to process the recorded sound. The most common task is to find a known snippet of sound within a recording. Cross-correlation is a method for locating a short piece of signal inside a longer, mixed signal [17]. In signal processing, cross-correlation is a measure of the similarity of two waveforms as a function of a time lag applied to one of them. This is also known as a sliding dot product or sliding inner product. It is commonly used to search a long signal for a shorter, known feature. It also has applications in pattern recognition, single-particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology [18].

Cross-correlation is calculated as (8) shows. Suppose the known symbol pattern of the turn-signal sound wave is $p_k$, $k = 0, \dots, L-1$, of length $L$, and $r_k$ is the complex number representing the received symbol; then the cross-correlation at shift position $d$ is

$$c(d) = \sum_{k=0}^{L-1} p_k^{*}\, r_{k+d}. \tag{8}$$

If we cross-correlate a sound wave with itself, for example the sound wave in Figure 3, the result looks like Figure 4. The spike indicates the presence of the sought piece of signal.
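
A sketch of equation (8) with NumPy, burying a known pattern in noise and recovering its position from the correlation spike (synthetic data, my example):

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0, 20, 200))          # known "turn signal" wave
recording = rng.normal(0, 0.3, 2000)               # noisy background
recording[700:900] += pattern                      # bury the pattern at offset 700

# For real signals, c(d) = sum_k p_k * r_{k+d}, as in equation (8).
c = np.correlate(recording, pattern, mode="valid")
print(int(np.argmax(c)))                           # ~700: the recovered shift
```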

Figure 3: A sound record of the turn signal of an automobile.
Figure 4: Cross-correlation.
2.5. Camera

The camera captures visual information from the real world. From the human perspective, vision carries most of the information we receive. However, pattern recognition by computers is not yet mature enough to work as humans do. In this section, we briefly introduce the principle of pattern recognition.

A photo the camera records can be expressed as a matrix of the light intensity of each pixel (here we take a grey-scale photo as an example). Suppose that the source matrices (or, as we call them, the dictionary) are \( D_1, D_2, \ldots, D_n \) and the matrix to be recognized is \( M \); then pattern recognition proceeds as (9) shows:

\[ i^{*} = \arg\min_{i} \lVert M - D_i \rVert. \qquad (9) \]

The \( i^{*} \)th matrix in the dictionary is the result.
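A minimal sketch of this nearest-neighbor matching (our illustration), assuming grey-scale images stored as numpy arrays of identical shape:

    import numpy as np

    def recognize(dictionary, image):
        # Return the index of the dictionary matrix closest to `image`,
        # measured by the Frobenius norm of the pixel-wise difference.
        distances = [np.linalg.norm(image - d) for d in dictionary]
        return int(np.argmin(distances))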

Pattern recognition is far more complicated than (9); there are many good algorithms in the pattern recognition area, like SIFT [19–23] and SVM [24–29], but the recognition rate is still not good enough for practical applications.

3. Applications

In this section, we introduce a few interesting sensing-based applications using smartphones. We divide the applications into two categories: those based on the accelerometer, gyroscope, and magnetometer, and those based on the microphone and camera.

3.1. Accelerometer, Gyroscope, and Magnetometer
3.1.1. Trace Track

Searching for a person in a public place is difficult: whether in a conference hall, a library, or a shopping mall, the crowds make it very hard to find the target person. Even if the person tells you where he/she is, it is frustrating to find the place in an unfamiliar environment. Maps may be helpful but are not always handy. Smartphones provide the possibility of an electronic escort service using opportunistic user intersections. By periodically learning the walking trails of different individuals, as well as how they encounter each other in space-time, a route can be computed between any pair of persons [30]. The escort system can guide a user to the vicinity of a desired person in a public place. Escort does not rely on GPS, WiFi, or war driving to locate a person; the escort user only needs to follow an arrow displayed on the phone [30].

The Escort system presents an interesting idea: the smartphone acts as an escort that tells you how many steps you have walked and in which direction you are heading [30]; see Figure 5(a). GPS is one way to achieve this; however, GPS does not work in indoor environments. WiFi localization is a good indoor method, but it cannot guarantee that there are enough WiFi fingerprints for localization. Escort therefore uses the accelerometer and gyroscope. However, the displacement calculated from the accelerometer alone is inaccurate; the reasons are the jerky movement of the smartphone in people's pockets and the inherent measurement error [31–33]; the displacement error may reach 100 m after a 30 m walk, see Figure 5(b). Escort avoids the problem by identifying an acceleration signature in human walking patterns. This signature arises from the natural up and down bounces of the human body while walking and can be used to count the number of steps walked [30]. The physical displacement can then be computed by multiplying the step count by the user's step size, which is a function of the user's weight and height [30]; see Figure 5(c). Escort varies the step size with an error factor drawn from a Gaussian distribution centered on 0 with standard deviation 0.15 m [30]. This better accommodates the varying human step size [30].

Figure 5: (a) Accelerometer readings (smoothed) from a walking user. (b) Displacement error with double integration for two users. (c) Error with the step count method.
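The step-count idea can be made concrete with a simplified sketch (ours, not Escort's implementation); the detection threshold and the nominal 0.7 m step size are arbitrary assumptions, while the Gaussian error term follows the description above:

    import numpy as np

    def count_steps(accel_magnitude, threshold=10.5):
        # Count steps as upward threshold crossings of the (smoothed)
        # acceleration magnitude, exploiting the walking bounce signature.
        above = np.asarray(accel_magnitude) > threshold
        return int(np.sum(above[1:] & ~above[:-1]))

    def displacement(steps, step_size=0.7, rng=None):
        # Displacement = step count x step size, with a Gaussian error
        # factor (mean 0, standard deviation 0.15 m) added to the step
        # size to accommodate varying human step sizes [30].
        rng = rng or np.random.default_rng()
        return steps * (step_size + rng.normal(0.0, 0.15))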

The compass (magnetometer) is also used in the Escort system. Like the accelerometer, the compass has measurement errors; the noise is caused by several factors, including user sway, movement irregularities, magnetic fields in the surroundings, and the sensor's internal bias. Because these factors depend on the user, the surroundings, and the sensor, the noise is difficult to predict and compensate. To characterize the compass noise, the Escort authors ran 100 experiments using 2 Nokia 6210 Navigator phones and observed an average bias of 8 degrees and a standard deviation of 9 degrees [30]. In addition to this large noise range, they made two consistent observations: when the user is walking in a constant direction, the compass readings stabilize quickly on a biased value and exhibit only small random oscillations, and after each turn, a new bias is imposed [30]. Based on these two observations, Escort identifies two states of the user: walking in a constant direction and turning. Turns are identified when the compass headings change more significantly than random oscillations would explain. The turn identification algorithm uses the following condition [30]:

\[ \lvert \bar{c}_{T_i} - \bar{c}_{T_{i-1}} \rvert > \beta \, \sigma_T, \]

where \( \bar{c}_T \) denotes the average compass reading over a time period \( T \) (e.g., 1 second), \( \sigma_T \) is the standard deviation of the compass readings during \( T \), and \( \beta \) is a guard factor. While on a constant direction, Escort compensates the stabilized compass reading with the average bias and reports the resulting value as the direction of the user. During turns, Escort considers the sequence of readings reported by the compass [30].
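A minimal sketch of this turn condition (our illustration; the guard factor value is an arbitrary assumption):

    import numpy as np

    def is_turn(window_prev, window_curr, guard=2.0):
        # Flag a turn when the change in average heading between two
        # consecutive windows exceeds guard times the reading noise.
        change = abs(np.mean(window_curr) - np.mean(window_prev))
        return change > guard * np.std(window_curr)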

A similar usage of the accelerometer and magnetometer appears in [34], which uses the accelerometer to estimate the walking trace of people and corrects the trace periodically by GPS. Using the accelerometer as a supplement to GPS localization is a popular approach, and much research has focused on it [35–40].

Smartphones are useful not only for walking people but also for people in vehicles.

3.1.2. Dangerous Drive

When drivers are sitting in a vehicle, their smartphones are able to measure acceleration, velocity, and turns through the embedded sensors. Because smartphones are so widespread, they are an easy way to implement mobile sensing.

Driving style can be divided into two categories: nonaggressive and aggressive. To study vehicle safety systems, we need to understand and recognize driving events. Potentially aggressive driving behavior is currently a leading cause of traffic fatalities in the United States, and drivers are often unaware that they commit potentially aggressive actions daily [41]. To increase awareness and promote driver safety, a novel system has been proposed that uses Dynamic Time Warping (DTW) and smartphone-based sensor fusion (accelerometer, gyroscope, magnetometer, GPS, and video) to detect, recognize, and record these actions without external processing [41].

The system continuously collects motion data from the accelerometer and gyroscope in the smartphone at a rate of 25 Hz in order to detect specific dangerous driving behaviors. Typical dangerous driving behaviors are hard left and right turns, swerves, and sudden braking and acceleration patterns. These behaviors indicate potentially aggressive driving that endangers both pedestrians and other drivers. The system first determines when a dangerous behavior starts and ends using endpoint detection. Once the system has a signal representing a maneuver, it can compare it to stored maneuvers (templates) to determine whether or not it matches an aggressive event [41].
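Since the maneuver matching in [41] is based on DTW, a textbook DTW distance between a recorded one-axis maneuver and a stored template is sketched below for reference (the standard dynamic-programming formulation, not necessarily the exact variant used in [41]):

    import numpy as np

    def dtw_distance(signal, template):
        # Classic DTW: cost[i][j] is the minimal cumulative distance for
        # aligning the first i signal samples with the first j template
        # samples; small final values indicate a matching driving event.
        n, m = len(signal), len(template)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(signal[i - 1] - template[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                     cost[i, j - 1],       # deletion
                                     cost[i - 1, j - 1])   # match
        return float(cost[n, m])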

References [42–44] use the accelerometer and gyroscope for driving event recognition. Reference [42] includes two single-axis accelerometers, two single-axis gyroscopes, and a GPS unit (for velocity) attached to a PC for processing. While the system included gyroscopes for inertial measurements, they were not used in the project. Hidden Markov Models (HMMs) were trained and used only on the acceleration data for the recognition of simple driving patterns. Reference [43] used accelerometer data from a mobile phone to detect drunk driving patterns through windowing and variation thresholds. A Driver Monitor System was created in [44] to monitor the driving patterns of the elderly. This system involved three cameras, a two-axis accelerometer, and a GPS receiver attached to a PC. The authors collected large volumes of data for 647 drivers. The system had many components, one of them being the detection of erratic driving using accelerometers. Braking and acceleration patterns were detected, as well as high-speed turns via thresholding. Additionally, the data could be used to determine the driving environment (freeway versus city) based on acceleration patterns.

References [45–47] use the accelerometer and gyroscope for gesture recognition. The uWave paper [47] and the gesture control work of Kela et al. [48] explore gesture recognition using DTW and HMM algorithms, respectively. They both used the same set of eight simple gestures, which included up, down, left, right, two opposing-direction circles, square, and slanted 90-degree angle movements for their final accuracy reporting. Four of these eight gestures are one-dimensional in nature. The results showed that using DTW with one training sample was just as effective as HMMs.

3.1.3. Phone Hack

The accelerometer is so sensitive that it can even detect finger presses on the screen of the smartphone. Reference [49] shows that the location of screen taps on modern smartphones and tablets can be identified from accelerometer and gyroscope readings. The findings have serious implications, as the authors demonstrate that an attacker can launch a background process on commodity smartphones and tablets and silently monitor the user's inputs, such as keyboard presses and icon taps [49]. While precise tap detection is nontrivial, requiring machine-learning algorithms to identify fingerprints of closely spaced keys, the sensitive sensors on modern devices aid the process [49]. Reference [49] presents TapPrints, a framework for inferring the location of taps on mobile device touchscreens using motion sensor data combined with machine-learning analysis. In tests on two different off-the-shelf smartphones and a tablet computer, identifying tap locations on the screen and inferring English letters could be done with up to 90% and 80% accuracy, respectively [49]. By augmenting the core tap detection capability with additional information, such as contextual priors, the threat may be magnified further [49].

On both Android and iOS, the location sensor (GPS) is the only one that requires explicit user permission; the reason is probably that people do not want to be tracked and consider their location information private. The accelerometer and gyroscope, however, do not require explicit user permission on either of the two operating systems. Because these sensors are mostly used for gaming, Android does not restrict access to the accelerometer and gyroscope; background services using the two sensors are allowed. Moreover, there is work aimed at standardizing JavaScript access to a device's accelerometer and gyroscope so that any web application can perform, for example, website layout adaptation [49]. Reference [49] shows that accelerometer and gyroscope data can create a serious privacy breach. In particular, it demonstrates that it is possible to infer where people tap on the screen and what people type by applying machine-learning analysis to the stream of data from these two motion sensors. The work focuses on the accelerometer and gyroscope because they are able to capture tiny device vibrations and angle rotations, respectively, with good precision [49].

Figure 6 shows a sample of the raw accelerometer data after a user has tapped on the screen, with the timing of each tap marked by the digital pulses on the top line. As we can see, each tap generates perturbations in the accelerometer readings on all three axes, particularly visible along the z-axis, which is perpendicular to the phone's display. Gyroscope data exhibits similar behavior and is not shown [49]. Some related works [50–52] also use the accelerometer and gyroscope for phone hacks.

Figure 6: The square wave in the top line identifies the occurrence of a tap. Two particular taps have also been highlighted by marking their boundaries with dashed vertical lines. Notice that the accelerometer sensor readings (on the z-axis in particular) show very distinct patterns during taps. Similar patterns can also be observed in the gyroscope.
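TapPrints itself relies on machine-learning analysis; as a much simpler illustration of the signal property visible in Figure 6 (our sketch, with arbitrary window size and factor), a variance threshold on the z-axis readings can flag candidate tap windows:

    import numpy as np

    def detect_taps(accel_z, window=10, factor=3.0):
        # Split the z-axis stream into fixed windows and flag those whose
        # variance stands out from the median; taps appear as short
        # perturbations perpendicular to the display.
        z = np.asarray(accel_z, dtype=float)
        n = len(z) // window
        variances = z[:n * window].reshape(n, window).var(axis=1)
        baseline = np.median(variances)
        return np.nonzero(variances > factor * baseline)[0] * window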
3.1.4. Phone Gesture

The first two subsections were about using the accelerometer, gyroscope, and magnetometer to detect long-distance movements; if the same sensors are used to detect gestures instead, some interesting applications appear.

The ability to note down small pieces of information quickly and ubiquitously can be useful. Reference [53] proposes a system called PhonePoint Pen that uses the built-in accelerometer in mobile phones to recognize human writing. By holding the phone like a pen, a user is able to write short messages or even draw simple diagrams in the air. The acceleration due to the hand gestures can be converted into an image and sent to the user's Internet email address for future reference [53].

The work is done without a gyroscope because the smartphone used lacks one. Two main issues must be solved: coping with background vibration (noise) and computing the displacement of the phone.

Accelerometers are sensitive to small vibrations. Figure 7(a) reports the acceleration readings as the user draws a rectangle using 4 strokes (the constant reading of around −350 units on one axis is due to the earth's gravity). A significant amount of jitter is caused by natural hand vibrations. Furthermore, the accelerometer itself has measurement errors. It is necessary to smooth this background vibration (noise) in order to extract jitter-free pen gestures. To cope with vibrational noise, the system smooths the accelerometer readings by applying a moving average over the last n readings. The results are presented in Figure 7(b) (the relevant movements happen in a single plane, so the perpendicular axis is removed from the subsequent figures for better visual representation) [53].

Figure 7: (a) Raw accelerometer data while drawing a rectangle (note the gravity component on one axis). (b) Accelerometer noise smoothing.

The phone's displacement determines the size of the character. The displacement is computed as a double integral of the acceleration, that is, \( s = \iint a(t) \, dt \, dt \), where \( a(t) \) is the instantaneous acceleration. In other words, the algorithm first computes the velocity (the integral of acceleration) and then the displacement (the integral of velocity). However, due to errors in the accelerometer, the cumulative acceleration and deceleration values may not sum to zero even after the phone has come to rest. This offset translates into a residual constant velocity. When this velocity is integrated, the displacement and movement direction become erroneous. In order to reduce velocity-drift errors, the velocity needs to be reset to zero at identifiable points. The stroke mechanism described earlier is therefore used: characters are drawn using a set of strokes separated by short pauses, and each pause is an opportunity to reset the velocity to zero and thus correct the displacement. Pauses are detected by using a moving window over consecutive accelerometer readings and checking whether the standard deviation in the window is smaller than some threshold. This threshold is chosen empirically based on the average vibration observed when the phone is held stationary. All acceleration values during a pause are suppressed to zero. Figure 8(a) shows the combined effect of noise smoothing and suppression. Further, the velocity is set to zero at the beginning of each pause interval. Figure 8(b) shows the effect of resetting the velocity. Even if small velocity drifts are still present, they have only a small impact on the displacement of the phone [53].

Figure 8: (a) Accelerometer readings after noise smoothing and suppression. (b) Resetting the velocity to zero in order to avoid velocity drifts.
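Putting the pieces together, here is a simplified sketch of smoothing, pause suppression, and velocity resetting (our illustration; the window size and threshold are assumptions, not the paper's values):

    import numpy as np

    def smooth(accel, n=10):
        # Moving average over the last n readings to suppress hand jitter.
        return np.convolve(accel, np.ones(n) / n, mode="same")

    def displacement_with_resets(accel, dt, window=20, pause_std=0.05):
        # Double-integrate acceleration into displacement, resetting the
        # velocity to zero whenever a low-variance window indicates a
        # pause between strokes (readings in a pause are suppressed).
        accel = np.asarray(accel, dtype=float)
        v, x, positions = 0.0, 0.0, []
        for i, a in enumerate(accel):
            recent = accel[max(0, i - window):i + 1]
            if len(recent) > window and np.std(recent) < pause_std:
                v, a = 0.0, 0.0     # pause: reset velocity, drop reading
            v += a * dt             # velocity: integral of acceleration
            x += v * dt             # displacement: integral of velocity
            positions.append(x)
        return np.array(positions)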
3.2. Microphone and Camera
3.2.1. SurroundSense

There are several research works [54–57] on using smartphones to sense the context of the surroundings. A growing number of mobile computing applications are centered around the user's location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks or McDonald's). While extensive research has been performed in physical localization, there have been few attempts to recognize logical locations. Reference [57] argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. It postulates that the ambient sound, light, and color in a place convey a photoacoustic signature that can be sensed by the phone's camera and microphone. Built-in accelerometers in some phones may also be useful in inferring broad classes of user motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close [57]; see Figures 9 and 10.

Figure 9: Sound fingerprints from 3 adjacent stores.
Figure 10: Color/light fingerprint in the HSL format from the Bean Traders' coffee shop. Each cluster is represented by a different symbol.

Reference [57] takes advantage of the microphone and camera to collect fingerprints of the surroundings so that it can provide logical localization.
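As a toy illustration of such fingerprinting (our sketch, not SurroundSense's actual feature set), ambient sound can be summarized as a normalized amplitude histogram and places compared by histogram distance:

    import numpy as np

    def sound_fingerprint(samples, bins=32):
        # Normalized histogram of absolute amplitudes (samples assumed
        # scaled to [-1, 1]): a crude acoustic signature of a place.
        hist, _ = np.histogram(np.abs(samples), bins=bins, range=(0.0, 1.0))
        return hist / max(hist.sum(), 1)

    def fingerprint_distance(f1, f2):
        # Smaller distance suggests acoustically similar places.
        return float(np.linalg.norm(f1 - f2))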

3.2.2. Localization

References [58–60] use audio for localization, and the accuracy is improved to a high level. Reference [58] operates in a spontaneous, ad hoc, device-to-device context without leveraging any preplanned infrastructure. It is a pure software solution and uses only the most basic set of commodity hardware (a speaker, a microphone, and some form of device-to-device communication), so that it is readily applicable to many low-cost sensor platforms and to most commercial off-the-shelf mobile devices like cell phones and PDAs [58]. High accuracy is achieved through a combination of three techniques: two-way sensing, self-recording, and sample counting. To estimate the range between two devices, each emits a specially designed sound signal ("beep") and collects a simultaneous recording from its microphone [58]. Each recording then contains two such beeps, one from its own speaker and the other from its peer [58]. By counting the number of samples between these two beeps and exchanging the duration information with its peer, each device can derive the two-way time of flight of the beeps at the granularity of the sound sampling rate [58]. This technique cleverly avoids many sources of inaccuracy found in other typical time-of-arrival schemes, such as clock synchronization, non-real-time handling, and software delays [58].

Reference [58] sends a beep sound to calculate the distance between two objects; the microphone distinguishes the original sound from the echo to get the time interval and, therefore, the distance; see Figure 11.

Figure 11: Illustration of the event sequences in the BeepBeep ranging procedure. The time points are marked for easy explanation; no timestamping is required in the proposed ranging mechanism.
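The sample-counting arithmetic can be sketched as follows (our simplification; the full BeepBeep derivation also accounts for the speaker-to-microphone distance on each device). If device A measures a gap of n_a samples between the two beeps in its recording and device B measures n_b, the gap difference equals twice the time of flight:

    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

    def beep_distance(n_a, n_b, sample_rate=44100):
        # (n_a - n_b) / sample_rate is the two-way time of flight, so
        # halving it and multiplying by the speed of sound gives range.
        two_way_tof = (n_a - n_b) / sample_rate
        return SPEED_OF_SOUND * two_way_tof / 2.0

    # Hypothetical gaps: a 257-sample difference at 44.1 kHz is ~1 m.
    print(beep_distance(10257, 10000))  # approximately 1.0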

4. Discussions and Conclusion

Sensors are the key factor in developing more and more interesting applications on smartphones, and they make the smartphone different from traditional computing devices such as the desktop computer. Most applications use the accelerometer and gyroscope because they are currently the most accurate sensors. However, vision contains a huge amount of information; we believe that the camera and pattern recognition will be used more and more in the future.

Acknowledgments

This work is supported by the National Science Foundation under Grant nos. 61103226, 60903158, 61170256, 61173172, and 61103227 and by the Fundamental Research Funds for the Central Universities under Grant nos. ZYGX2010J074 and ZYGX2011J102.

References

  1. H. Ishida, K. Suetsugu, T. Nakamoto, and T. Moriizumi, “Study of autonomous mobile sensing system for localization of odor source using gas sensors and anemometric sensors,” Sensors and Actuators A, vol. 45, no. 2, pp. 153–157, 1994.
  2. H. Ishida, T. Nakamoto, and T. Moriizumi, “Remote sensing of gas/odor source location and concentration distribution using mobile system,” Sensors and Actuators B, vol. 49, no. 1-2, pp. 52–57, 1998.
  3. S. B. Eisenman, E. Miluzzo, N. D. Lane, R. A. Peterson, G. S. Ahn, and A. T. Campbell, “BikeNet: a mobile sensing system for cyclist experience mapping,” ACM Transactions on Sensor Networks, vol. 6, no. 1, pp. 1–39, 2009.
  4. T. Choudhury, G. Borriello, S. Consolvo et al., “The mobile sensing platform: an embedded activity recognition system,” IEEE Pervasive Computing, vol. 7, no. 2, pp. 32–41, 2008.
  5. B. Lo, S. Thiemjarus, R. King et al., “Body sensor network: a wireless sensor platform for pervasive healthcare monitoring,” in Proceedings of the 3rd International Conference on Pervasive Computing, 2005.
  6. X. Tant, D. Kim, N. Usher et al., “An autonomous robotic fish for mobile sensing,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 5424–5429, October 2006.
  7. A. Arora, P. Dutta, S. Bapat et al., “A line in the sand: a wireless sensor network for target detection, classification, and tracking,” Computer Networks, vol. 46, no. 5, pp. 605–634, 2004.
  8. Y. C. Tseng, S. P. Kuo, and H. W. Lee, “Location tracking in a wireless sensor network by mobile agents and its data fusion strategies,” Information Processing in Sensor Networks, pp. 554–554, 2003.
  9. G. Werner-Allen, J. Johnson, M. Ruiz, J. Lees, and M. Welsh, “Monitoring volcanic eruptions with a wireless sensor network,” in Proceedings of the 2nd European Workshop on Wireless Sensor Networks (EWSN ’05), pp. 108–120, February 2005.
  10. G. Werner-Allen, K. Lorincz, M. Welsh et al., “Deploying a wireless sensor network on an active volcano,” IEEE Internet Computing, vol. 10, no. 2, pp. 18–25, 2006.
  11. A. T. Campbell, S. B. Eisenman, N. D. Lane et al., “The rise of people-centric sensing,” IEEE Internet Computing, vol. 12, no. 4, pp. 12–21, 2008.
  12. Accelerometer, http://en.wikipedia.org/wiki/Accelerometer.
  13. A Guide to Using IMU (Accelerometer and Gyroscope Devices) in Embedded Applications, http://www.starlino.com/imu_guide.html.
  14. M. Arraigada and M. Partl, “Calculation of displacements of measured accelerations, analysis of two accelerometers and application in road engineering,” in Proceedings of the Swiss Transport Research Conference, 2006.
  15. Gyroscope, http://en.wikipedia.org/wiki/Gyroscope.
  16. Magnetometer, http://en.wikipedia.org/wiki/Magnetometer.
  17. S. Sen, R. Roy Choudhury, and S. Nelakuditi, “CSMA/CN: carrier sense multiple access with collision notification,” IEEE/ACM Transactions on Networking, vol. 20, no. 2, pp. 544–556, 2012.
  18. Cross-correlation, http://en.wikipedia.org/wiki/Cross-correlation.
  19. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  20. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in Proceedings of the European Conference on Computer Vision (ECCV ’06), pp. 404–417, Springer, Berlin, Germany, 2006.
  21. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’05), pp. 886–893, June 2005.
  22. K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
  23. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
  24. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  25. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
  26. C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.
  27. C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
  28. R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, ACM Press, New York, NY, USA, 1999.
  29. B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
  30. I. Constandache, X. Bao, M. Azizyan, and R. R. Choudhury, “Did you see Bob?: human localization using mobile phones,” in Proceedings of the 16th Annual Conference on Mobile Computing and Networking (MobiCom ’10), pp. 149–160, September 2010.
  31. D. M. Boore, “Effect of baseline corrections on displacements and response spectra for several recordings of the 1999 Chi-Chi, Taiwan, earthquake,” Bulletin of the Seismological Society of America, vol. 91, no. 5, pp. 1199–1211, 2001.
  32. D. M. Boore, C. D. Stephens, and W. B. Joyner, “Comments on baseline correction of digital strong-motion data: examples from the 1999 Hector Mine, California, earthquake,” Bulletin of the Seismological Society of America, vol. 92, no. 4, pp. 1543–1560, 2002.
  33. D. M. Boore, “Analog-to-digital conversion as a source of drifts in displacements derived from digital recordings of ground acceleration,” Bulletin of the Seismological Society of America, vol. 93, no. 5, pp. 2017–2024, 2003.
  34. I. Constandache, R. R. Choudhury, and I. Rhee, “Towards mobile phone localization without war-driving,” in Proceedings of the IEEE INFOCOM, pp. 1–9, March 2010.
  35. J. Paek, J. Kim, and R. Govindan, “Energy-efficient rate-adaptive GPS-based positioning for smartphones,” in Proceedings of the 8th Annual International Conference on Mobile Systems, Applications and Services (MobiSys ’10), pp. 299–314, June 2010.
  36. D. H. Kim, Y. Kim, D. Estrin, and M. B. Srivastava, “SensLoc: sensing everyday places and paths using less energy,” in Proceedings of the 8th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’10), pp. 43–56, November 2010.
  37. K. Lee, I. Rhee, J. Lee, S. Chong, and Y. Yi, “Mobile data offloading: how much can WiFi deliver?” in Proceedings of the 6th International Conference on Emerging Networking Experiments and Technologies (Co-NEXT ’10), p. 26, December 2010.
  38. A. Thiagarajan, J. Biagioni, T. Gerlich, and J. Eriksson, “Cooperative transit tracking using smart-phones,” in Proceedings of the 8th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’10), pp. 85–98, November 2010.
  39. A. Schulman, V. Navda, R. Ramjee et al., “Bartendr: a practical approach to energy-aware cellular data scheduling,” in Proceedings of the 16th Annual Conference on Mobile Computing and Networking (MobiCom ’10), pp. 85–96, September 2010.
  40. S. P. Tarzia, P. A. Dinda, R. P. Dick, and G. Memik, “Indoor localization without infrastructure using the acoustic background spectrum,” in Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services (MobiSys ’11), pp. 155–168, July 2011.
  41. D. A. Johnson and M. M. Trivedi, “Driving style recognition using a smartphone as a sensor platform,” in Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (ITSC ’11), pp. 1609–1615, 2011.
  42. D. Mitrović, “Reliable method for driving events recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 2, pp. 198–205, 2005.
  43. J. Dai, J. Teng, X. Bai, Z. Shen, and D. Xuan, “Mobile phone based drunk driving detection,” in Proceedings of the 4th International Conference on Pervasive Computing Technologies for Healthcare, pp. 1–8, March 2010.
  44. K. C. Baldwin, D. D. Duncan, and S. K. West, “The driver monitor system: a means of assessing driver performance,” Johns Hopkins APL Technical Digest, vol. 25, no. 3, pp. 269–277, 2004.
  45. G. Ten Holt, M. Reinders, and E. Hendriks, “Multi-dimensional dynamic time warping for gesture recognition,” in Proceedings of the Conference of the Advanced School for Computing and Imaging (ASCI ’07), 2007.
  46. R. Muscillo, S. Conforto, M. Schmid, P. Caselli, and T. D’Alessio, “Classification of motor activities through derivative dynamic time warping applied on accelerometer data,” in Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS ’07), pp. 4930–4933, 2007.
  47. J. Liu, L. Zhong, J. Wickramasuriya, and V. Vasudevan, “uWave: accelerometer-based personalized gesture recognition and its applications,” Pervasive and Mobile Computing, vol. 5, no. 6, pp. 657–675, 2009.
  48. J. Kela, P. Korpipää, and J. Mäntyjärvi, “Accelerometer-based gesture control for a design environment,” Personal and Ubiquitous Computing, vol. 10, no. 5, pp. 285–299, 2006.
  49. E. Miluzzo, A. Varshavsky, S. Balakrishnan et al., “TapPrints: your finger taps have fingerprints,” in Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pp. 323–336, Low Wood Bay, Lake District, UK, 2012.
  50. L. Cai and H. H. Chen, “TouchLogger: inferring keystrokes on touch screen from smartphone motion,” in Proceedings of the 6th USENIX Conference on Hot Topics in Security, p. 9, San Francisco, Calif, USA, 2011.
  51. E. Owusu, J. Han, S. Das et al., “ACCessory: password inference using accelerometers on smartphones,” in Proceedings of the 12th Workshop on Mobile Computing Systems & Applications, pp. 1–6, San Diego, Calif, USA, 2012.
  52. L. Cai, S. Machiraju, and H. Chen, “Defending against sensor-sniffing attacks on mobile phones,” in Proceedings of the 1st ACM Workshop on Networking, Systems, and Applications for Mobile Handhelds, pp. 31–36, Barcelona, Spain, 2009.
  53. S. Agrawal, I. Constandache, and S. Gaonkar, “PhonePoint Pen: using mobile phones to write in air,” in Proceedings of the 1st ACM Workshop on Networking, Systems, and Applications for Mobile Handhelds, pp. 1–6, Barcelona, Spain, 2009.
  54. B. Clarkson, K. Mase, and A. Pentland, “Recognizing user context via wearable sensors,” in Proceedings of the 4th International Symposium on Wearable Computers, pp. 69–75, October 2000.
  55. S. Gaonkar, J. Li, and R. R. Choudhury, “Micro-Blog: sharing and querying content through mobile phones and social participation,” in Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pp. 174–186, Breckenridge, Colo, USA, 2008.
  56. E. Miluzzo, N. D. Lane, and K. Fodor, “Sensing meets mobile social networks: the design, implementation and evaluation of the CenceMe application,” in Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems, pp. 337–350, Raleigh, NC, USA, 2008.
  57. M. Azizyan and R. R. Choudhury, “SurroundSense: mobile phone localization using ambient sound and light,” SIGMOBILE Mobile Computing and Communications Review, vol. 13, no. 1, pp. 69–72, 2009.
  58. C. Peng, G. Shen, Y. Zhang, Y. Li, and K. Tan, “BeepBeep: a high accuracy acoustic ranging system using COTS mobile devices,” in Proceedings of the 5th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’07), pp. 1–14, Sydney, Australia, November 2007.
  59. A. Mandai, C. V. Lopes, T. Givargis, A. Haghighat, R. Jurdak, and P. Baldi, “Beep: 3D indoor positioning using audible sound,” in Proceedings of the 2nd IEEE Consumer Communications and Networking Conference (CCNC ’05), pp. 348–353, January 2005.
  60. C. Peng, G. Shen, Y. Zhang, Y. Li, and K. Tan, “BeepBeep: a high accuracy acoustic ranging system using COTS mobile devices,” in Proceedings of the 5th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’07), pp. 397–398, Sydney, Australia, November 2007.