Category archive: Mobility

The 15 coolest concept cars revealed this year so far

Automakers are pushing bold, innovative ideas forward with their latest concept cars.

Whether it’s a car with nothing inside but a sofa and TV or an electric car resembling the Batmobile, concept cars give us a glimpse of how technology will shape the future of driving.

1. Volkswagen unveiled a microbus concept meant to give a modern spin to the classic Volkswagen bus at the Consumer Electronics Show in January.

Volkswagen

Called the BUDD-e, the electric car gets up to 373 miles of range.

The doors open with a simple wave of the hand, and you can control the console’s interface by making hand gestures.

Volkswagen

You can also use the interface to control things like the temperature and lighting in your house.

2. The big unveiling to come out of the Consumer Electronics Show was Faraday Future’s concept car, the FFZERO1.

Rob Ludacer

It can go from zero to 60 miles per hour in under three seconds.

Four motors, one placed over each wheel, give the car a top speed of 200 miles per hour. It’s also capable of learning the driver’s preferences and automatically adjusting the internal settings.

Faraday Future

Although Faraday Future plans to release a production car in 2020, the FFZERO1 is just a show car.

3. LeEco, a Chinese tech company, unveiled its Tesla killer concept car at the Consumer Electronics Show.

LeEco is also a partner of Faraday Future.

Called the LeSEE, the car has a top speed of 130 miles per hour. It also has an autonomous mode.

LeEco

The steering wheel will retract back into the dashboard when the car is in autonomous mode.

4. The Lincoln Navigator concept car comes with giant gullwing doors. It was unveiled at the New York Auto Show in March.

Ford

We won’t be seeing those doors in the production model of a Lincoln Navigator anytime soon, unfortunately.

The six seats inside can be adjusted 30 different ways, and there are entertainment consoles on the backs of four of the seats so passengers can watch TV or play games.

Ford

There’s even a built-in wardrobe management system in the trunk so you can turn part of your car into a walk-in closet.

5. BMW’s Vision Next 100 was unveiled at the Geneva Motor Show in March. It comes with an AI system called Companion that can learn your driving preferences and adjust accordingly in advance.

BMW

The side panels of the Next 100 are made of carbon fiber.

The steering wheel will retract into the dashboard when the car is in autonomous mode.

BMW

There’s also a heads-up display that will show information about your route on the windshield.

6. BMW added to its Vision Next 100 line in June. Here we see the Mini Vision Next 100 that was built for ridesharing.

BMW

The car can recognize who you are when it comes to pick you up and will greet you with personalized lighting.

The steering wheel will shift into the center of the console when the car is in autonomous mode.

BMW

The BMW also comes with a heads-up display that will show information about your route on the windshield.

7. The last addition to BMW’s Vision Next 100 line is this futuristic Rolls-Royce.

Rob Ludacer

The Rolls-Royce is also completely autonomous.

Because the car envisions a completely autonomous future, the interior consists entirely of a two-person silk sofa and a giant OLED TV.

Rolls-Royce

There’s also a secret compartment in the car for storing your luggage.

8. McLaren unveiled a stunning concept car called the 675LT JVCKENWOOD at the Consumer Electronics Show.

McLaren

The McLaren 675LT comes with a wireless networking system so it can communicate with other cars on the road about traffic and accidents.

The car comes with a steering wheel that looks like a video game controller!

McLaren

The controller is meant to help the driver control the heads-up display while in motion.

9. Italian automaker Pininfarina unveiled a beautiful hydrogen-powered concept car at the Geneva Motor Show.

Pininfarina

The car, called H2 Speed, refuels in just three minutes.

It has a top speed of 186 miles per hour and can go from zero to 62 miles per hour in 3.4 seconds.

Pininfarina

The car can regenerate energy from braking.

10. Audi unveiled its connected mobility concept car in April. There’s a longboard integrated in the bumper in case you want to roll from the parking lot to work.

Audi

It conveniently pulls out when you need it and is stored in the bumper when you’d rather travel on foot!

The car’s infotainment system can calculate the fastest route based on real-time data and will suggest using the longboard if that seems faster.

Audi

It will even show you the best parking spot to make the longboard portion of your commute shorter.

11. Aston Martin showed off a beautiful concept car in May called the Vanquish Zagato Concept.

Aston Martin

All of the body panels in the Vanquish Zagato are made of carbon fiber.

Aston Martin made the car with Italian auto design company Zagato. The two have worked together since 1960.

Aston Martin

There are not too many details on this car since it’s just a concept, but it sure is pretty.

12. Jeep showed off a crazy-looking Wrangler in March at the Easter Jeep Safari, an off-road rally.

Chrysler

That is a monster car.

The Wrangler Trailcat concept had to be stretched 12 inches to accommodate the massive engine providing 707 horsepower.

Chrysler

It comes with racing seats from a Dodge Viper.

13. In April, Toyota unveiled a strange-looking concept car dubbed the uBox, designed to appeal to Generation Z.

Toyota

The uBox is all-electric.

The interior is entirely customizable so it can transform into a mobile office or fit more people.

Toyota

It also comes with a nice curved glass roof that lets plenty of light inside.

14. French automaker Renault showed off a stunning, high-tech sports car dubbed the Alpine Vision in February.

The Alpine Vision is a two-door, two-seater sports car.

It can go from zero to 62 miles per hour in 4.5 seconds.

The interior is decked out with an LCD gauge cluster in the center console.

15. Lastly, Croatian automaker Rimac designed a stunning, all-electric concept car for the Geneva Motor Show.

Rimac

Called the Concept_One, it can accelerate from zero to 62 miles per hour in just 2.6 seconds.

The Concept_One can reach a top speed of 185 miles per hour.

Rimac

It has a regenerative braking system that allows it to generate energy whenever it brakes.

http://www.businessinsider.com/coolest-concept-cars-revealed-in-2016-2016-6

The Evolution of Messengers at Google

Google has announced three new communication apps this week: Spaces, Allo and Duo. That’s in addition to the three it already has. To understand why it’s doing this, and why it’ll do it again, we only need to look to its past.

Twelve years ago, Google began its shift from being “just” the world’s most popular search engine to something much more: It released Gmail. Soon, the company was offering several options for communication. By 2009 Google users had a pretty robust set of tools at their disposal: Gmail for email, Talk for real-time text and voice chats, Voice for VoIP calling, and Android to facilitate everything else. Unfortunately, this simple delineation would quickly disappear as the company launched more and more services.

Google Wave was the first addition. Announced in mid-2009, it mashed together elements of bulletin boards, instant messaging and collaborative editing to pretty awesome effect. It grew a small but fervent community — I was a big fan — until Google halted development.

Then came Buzz. Launched in 2010, it was Google’s first attempt at a bona fide social network. It failed miserably, not least due to complaints about the way Google forced it upon users and some valid privacy concerns. Although neither Wave nor Buzz really competed with what the company was already offering, that would change when Google launched its next attempt at a social network, Google+.

In addition to standard social networking, Google+ also had two features that facilitated direct communication with individuals and groups: Hangouts and Huddles. Not to be mistaken with the current app, Hangouts at the time offered multiuser video chat for people in the same Circle. Huddle, on the other hand, was an instant messaging app for talking with other Google+ users.

Huddle would soon become Google+ Messenger, offering the same functionality as Google Talk, while Hangouts would expand to seriously encroach on Google Voice. Within a year, Google had added the ability to make “audio-only” calls by inviting users to join Hangouts over a regular phone line.

Google now had two apps for everything, coupled with the problem that many users — even on its Android platform — were still using SMS to communicate on the go. It began work to rectify this and unify its disparate platforms. In 2013 we got an all-new Hangouts, available cross-platform and on the web. It merged the functionality of Hangouts and Messenger, and it also replaced Talk within Gmail if you opted to upgrade. Voice was still out in the cold and SMS wasn’t integrated, but the company was moving in the right direction.

In late 2013, Google added SMS to Hangouts, and in Android 4.4 it replaced Messaging as the OS default for texting. By Oct. 2014 Google had integrated VoIP into Hangouts as well. It finally had one app for everything.

You could assert that Hangouts was a better app because of the confusing mess that preceded it. Google tried lots of things and put the best elements from all of its offerings into a single app.

That arguably should have been the end of the story, but it’s not. For whatever reason — probably because it figured out that a lot of Android users didn’t use Hangouts — Google released another app in Nov. 2014 called Messenger. This Messenger had nothing to do with Google+ but instead was a simple app focused on SMS and MMS. Hangouts could and can still handle your texts, but Messenger is now standard on Nexus phones and can be installed on any Android phone from the Play Store. This confusing muddle means that if you have, say, a new flagship Samsung phone, you’ll have two apps capable of handling your SMS (Samsung’s app and Hangouts), with the possibility of adding a third with Messenger.

Hangouts, for the most part, has been doing a fine job.

Still, SMS isn’t exactly a burning priority for most people, and Hangouts, for the most part, has been doing a fine job. I can’t say I use it that often — my conversations are mostly through Facebook Messenger and WhatsApp, because that’s where my friends are — but when I do, it’s a pleasant-enough experience. The same can be said for Google+: It’s actually a great social network now, aside from the fact that barely anyone uses it.

That’s the issue that Google faces today and the reason why these new apps exist. More people are using Facebook Messenger than Hangouts. More people are using WhatsApp than Hangouts. More people are using Snapchat than Hangouts. And everyone uses everything other than Google+.

So we now have three new apps from Google, each performing pretty different tasks. The first is Spaces. Think of it as Google+ redux redux redux. It takes the service’s fresh focus on communities and collections and puts it into an app that exists outside the social network. The end result is a mashup of Slack, Pinterest, Facebook Groups and Trello. It’s promising, but, as of writing, it’s very much a work in progress.

Next up is Allo, a reaction to Facebook Messenger and Microsoft’s efforts in the chatbot space. It uses machine learning to streamline conversations with auto replies and also offers a virtual assistant that’ll book restaurants for you, answer questions and do other chatbotty things. Just like Spaces exists outside Google+, Allo exists outside Hangouts. You don’t even need a Google account to sign up, just a phone number — much like how WhatsApp doesn’t require a Facebook account.

Finally we have Duo, which is by far the most focused of the three. It basically duplicates Hangouts’ original function: video calling. According to the PR, it makes mobile video calls “fast” and “simple,” and it’s only going to be available on Android and iOS. Both Duo and Allo also have the distinction of offering end-to-end encryption — although Allo doesn’t do so by default — the absence of which has been something privacy advocates have hated about Hangouts.

This summer, when Duo and Allo become available, Google users will be at another confusing impasse. Want to send a message to a friend? Pick from Hangouts, Allo or Messenger. Want to make a video call? Hangouts or Duo. Group chat? Hangouts, Allo or Spaces. It’s not user-friendly, and it’s not sustainable.

Sure, Facebook sustains two chat services (WhatsApp and its own Messenger) just fine, but it bought WhatsApp as a fully independent, hugely popular app and has barely changed a thing. Google doesn’t have that luxury. Instead, it’ll borrow another Facebook play: Test new features on a small audience and integrate. Over the past couple of years Facebook has released Slingshot, Rooms, Paper, Riff, Strobe, Shout, Selfied and Moments. I’m probably missing a few.

All of these apps were essentially built around a single feature: private chats, ephemeral messaging, a prettier news feed, selfies, etc. The vast majority won’t get traction on their own, but their features might prove useful enough to fold into the main Facebook and Messenger apps. And if one of them takes off, no problem, you’ve got another successful app.

This has to be Google’s strategy for Allo, Duo and Spaces. We don’t know what Google’s communication offerings will look like at the end of this year, let alone 2017. But chances are that Google will continue to float new ideas before eventually merging the best of them into a single, coherent application, as it did with Hangouts. And then it’ll start the process again. In the meantime, Google will spend money developing x number of duplicate apps, and users will have to deal with a confusing mess of applications on their home screens.

 

http://www.engadget.com/2016/05/19/why-google-cant-stop-making-messaging-apps/

Look out Elon – Porsche showing off the Mission E

In September, Porsche showed off the Mission E, a fully electric and fully beautiful concept made to dethrone Tesla Motors as the EV industry’s king of cool.

Today, Porsche announced it’s investing more than a billion dollars to bring the Mission E to production. As in, you’ll be able to buy one. We’re light on details—like the size of the battery, or when we’ll actually see one on the road—but we’ve got the most important numbers. The motor (or motors, Porsche hasn’t said) will produce more than 600 horsepower. The four-seater Mission E will go from 0 to 62 mph in under 3.5 seconds. And it will go 310 miles on a charge.

Porsche, which faces increasingly strict fuel emission standards from US and European authorities, has been working with batteries for a few years now, with top-notch results. It already offers plug-in hybrid versions of the Panamera and Cayenne, and it has successfully raced a 911 hybrid. Then there’s the flat-out amazing gas-electric 918 Spyder supercar and the 919 Hybrid that won at Le Mans this year. So it makes sense to make the next step a full electric.

Compared to Tesla’s current range-topper, the excellent Model S P90D, the Mission E will offer a bit less power and a slower acceleration time. But Porsche wins on range—the longest-legged Tesla goes roughly 286 miles on a charge. Here, the Germans have a second advantage: They’re working on an 800-volt charger that will power the car up to 80 percent in just 15 minutes, half the time it takes the Tesla.

Porsche plans to build the battery into the floor of the car, like Tesla does, so you can expect a very low center of gravity, great news for performance. But really, the Mission E wins on looks. The Model S and Model X SUV are lovely designs, but the Porsche is simply gorgeous, in the way only a Porsche can be. We’ve only seen the concept version, but hopefully Porsche will be smart enough to change as little as possible on the way to production.

http://www.wired.com/2015/12/porsches-electric-mission-e-is-poised-to-whoop-teslas-model-s/

Apple’s Growth Machine Starts to Sputter

Source: http://www.bloombergview.com/articles/2015-10-27/apple-s-iphone-growth-machine-starts-to-sputter

Apple has been corporate America’s most ridiculously unbelievable growth story. And now it’s over. Take a deep breath. Exhale. Get used to it.

Yes, the Apple we have come to know was a gravity-defying growth machine. Apple is the most valuable company in the world, yet it increased its revenue in the last 12 months by more than last year’s sales at Coca-Cola. Apple rang up enough operating cash in the last year to buy 625,000 Tesla Model X cars — nearly one for each person in San Francisco. (I’ll take blue with tan leather interior, please.) The company has given us so many eye-popping numbers that any feat short of Tim Cook colonizing Mars is underwhelming.

With the setup of high expectations, the most crucial numbers are now more low-Earth orbit than deep space. Unit sales of the iPhone for the three months ended Sept. 26 climbed 22 percent from those in the period a year earlier. Just a few months ago, Apple sold 60 percent more iPhones than it did a year before. Wall Street will be thrilled if iPhone sales increase at all in the holiday quarter, compared with the frenzied sales of the iPhone 6 during the 2014 holidays. Apple’s own forecast, which is often conservative, is for a tiny increase in revenue in the three months ending in December.

The existential question for Apple is whether the company is in another lull before a turbocharge from the iPhone 7 or something else, or whether nonspectacular growth is the new normal.

It’s true that this isn’t the first time Apple seemed to run out of steam. Just before the introduction of the iPhone 6 models a year ago, Apple posted five consecutive quarters of single-digit revenue growth, according to Bloomberg data. Sales took off again once people snapped up the larger-screen iPhones.

What’s different this time is Apple is in part to blame for doubts about its growth trajectory. The company has fallen into an unhealthy reliance on a single product: the iPhone. The smartphones generated just more than half of Apple’s sales before the introduction of the iPhone 6. That number has risen to 63 percent or more in each of the last four quarters.

Part of that shift, of course, is the remarkable sales run for the iPhone. It’s also because the Apple Watch, Apple Music or potential future electric minivans aren’t big enough to pick up the slack right now, and maybe never will. Sales of Mac computers are defying the shrinking PC market, but revenue gains are pedestrian. The iPad seems to have settled into a rut as a nice-but-not-essential consumer gadget that users don’t upgrade as often as the company would like. Revenue from iPads fell for the seventh quarter in a row, by 20 percent in the latest three months.

Apple is used to defying predictions that it can’t top itself. Those predictions look more likely to come true than ever before.

Apple’s Chinese Miracle Is Over

Source: http://www.bloombergview.com/articles/2015-10-28/apple-s-chinese-miracle-is-over

Perhaps the most important number in Apple’s quarterly release on Tuesday came from China, and it’s not the good news Apple makes it out to be. The company’s overreliance on the Chinese market is starting to hinder its progress despite management’s attempts to give it a positive spin.

During Tuesday’s earnings call, Apple chief executive Tim Cook sang the praises of the Chinese market, saying it will one day be Apple’s largest. In fiscal 2015, which ended for Apple on Sept. 26, Greater China provided 25 percent of the company’s revenue, for the first time overtaking Europe, responsible for just 21.6 percent of Apple sales. An economic slowdown? Not according to Cook, who is worth quoting at length here:

Frankly, if I were to shut off my web and shut off the TV and just look at how many customers are coming in our stores regardless of whether they’re buying, how many people are coming online, and in addition looking at our sales trends, I wouldn’t know there was any economic issue at all in China. And so I don’t know how unusual we are with that. I think that there’s a misunderstanding, probably particularly in the Western world, about China’s economy, which contributes to the confusion. That said, I don’t think it’s growing as fast as it was; but I also don’t think that Apple’s results are largely dependent on minor changes in growth.

The statistics Cook cites in support of this view are impressive: 87 percent growth in iPhone sales year-on-year in Greater China (which includes Hong Kong, Taiwan and Macau) despite the entire market’s 4 percent growth; revenue almost twice as high in the last quarter as a year ago; and the iPhone 6 now the bestselling smartphone in China, with the iPhone 6 Plus at number three. These numbers are less relevant, however, than two others: a drop in quarter-on-quarter sales in Greater China and an erosion of Apple’s overall market share there.

In the last quarter of fiscal 2015, Apple made $12.5 billion in revenue in Greater China, a 5.4 percent drop compared to the previous three months, despite the inclusion of the first weekend of iPhone 6s sales in the fourth-quarter 2015 data. In 2014, the new iPhone 6 wasn’t immediately available in China, so the fourth quarter didn’t benefit from the new product boost — and still sales were higher than in the previous three months.

Apple in China

Cook is wrong to say the Chinese slowdown isn’t affecting his company’s sales. The effect has been immediate and quite obvious. But Apple’s market share in the Asia Pacific region, which includes China, wasn’t growing even before the slowdown manifested itself.

According to data compiled by Bloomberg Intelligence, in the second quarter of this year, Apple’s market share of smartphone unit shipments in the region dropped to 7.7 percent from 10.8 percent in the previous quarter as Chinese leaders Huawei and Xiaomi increased their shares. Apple is the Asian smartphone market leader in terms of value, but its share by that measure also dropped in the second quarter — to 34.1 percent from 42.7 percent in the previous three months. Again, Huawei and Xiaomi posted gains, although Korean producers such as Samsung and LG also managed to pick up some of Apple’s losses. As Apple’s revenue in the region dropped, it was unlikely to have made share gains in the last quarter.

Cook is banking on the future growth of the Chinese middle class, and that’s an obvious long-term bet to make, but under the current economic conditions, this growth is not likely to be explosive. Besides, Apple won’t even be able to grow its sales at the same rate because many Chinese consumers will opt for better-value devices from local producers, as they’re already doing, judging by the market share data.

Improving distribution in China yielded strong revenue gains for Apple this year. Greater China accounted for 53 percent of the company’s revenue growth in fiscal 2015. Unless China’s economic troubles are miraculously cured over the next year or Huawei and Xiaomi stop making cutting-edge devices for a fraction of Apple’s prices, this growth engine has stalled. Nor does Apple have any comparable opportunities for extensive growth anywhere else in the world.

Cook’s bet on China was, of course, no mistake: It would be a crime for a device producer not to develop a strong presence in the world’s most populous country. Focusing on China was a business decision that produced gains comparable to a ground-breaking product launch, especially in 2015. There are no more miracles coming out of China, however, and no more technological rabbits coming out of Apple’s hat. It’s time for some stagnation and retrenchment — at least by this company’s remarkably high standards.

Accelerometer and gyroscope sensor data can create a serious privacy breach

Source:

http://www.hindawi.com/journals/ijdsn/2013/272916/

 

International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 272916, 11 pages
http://dx.doi.org/10.1155/2013/272916
Research Article

A Study of Mobile Sensing Using Smartphones

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Received 8 December 2012; Accepted 15 January 2013

Academic Editor: Chao Song

Copyright © 2013 Ming Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Traditional mobile sensing-based applications use extra equipment, which is unrealistic for most users. Smartphones have developed rapidly in recent years and are becoming an indispensable part of daily life. The sensors embedded in them provide various possibilities for mobile applications, and these applications are helping to change the way we live. In this paper, we analyze and discuss existing mobile applications; after that, future directions are pointed out.

1. Introduction

The word sensing builds a bridge between the real world and the virtual world; with the help of various sensors, man-made devices are able to feel the world like God-made creatures do. The bell may be the first generation of sensor: people tie a bell to a string so that when there is a vibration on the string, the bell rings. The bell is a very powerful and effective sensor; it contains two parts, detection and processing. When a bell detects a vibration, it generates a period of ringing, and the volume of the ringing is proportional to the amplitude of the vibration. However, the bell is the kind of sensor that connects the real world to the real world. With the development of electronic devices, a new man-made world has been built. This world is called the virtual world; many complicated calculations run in this world so that people in the real world can enjoy their lives. The virtual world needs data to keep running, and it is far from enough to input data into the virtual world through human operations alone. A sensor is a way to sense the world and interpret the sensed information into the data form of the virtual world; therefore, sensing has become an important part of both research and industry.

Early sensing-based applications were mostly used for research purposes or in some specific areas. References [1, 2] propose localization methods for finding odor sources using gas sensors and anemometric sensors. Reference [3] uses a number of sensors embedded in a cyclist’s bicycle to gather quantitative data about the cyclist’s rides; this information would be useful for mapping the cyclist experience. Reference [4] uses body-worn sensors to build an activity recognition system, and [5] uses body-worn sensors for healthcare monitoring. Reference [6] proposes a robotic fish carrying sensors for mobile sensing. Also, in Wireless Sensor Networks (WSN), there are a lot of sensing-based applications. References [7, 8] deploy wireless sensors to track the movement of mobile objects. References [9, 10] deploy sensors for monitoring volcanoes.

People-centric sensing, mentioned in [11], uses smartphones for mobile sensing. Smartphones are very popular and have become indispensable carry-ons in recent years; they are embedded with various sensors which can be used for many interesting applications. Unlike specific sensors which are used in specific areas, the sensors in smartphones provide almost unlimited possibilities for applications that help and change people’s lives; also, using a smartphone instead of specific equipment makes an application easier for users to accept.

In this paper, we will discuss some existing interesting sensing-based applications using smartphones and give some possible future directions. Section 2 gives detailed descriptions of sensors embedded in modern smartphones; Section 3 introduces some sensing-based applications; Section 4 gives a conclusion and future directions.

2. Sensors in Smartphones

As Figure 1 shows, modern smartphones have several kinds of sensors. The most popular ones, which most smartphones have, are the accelerometer, gyroscope, magnetometer, microphone, and camera. In this section, we will discuss the characteristics of these sensors.

Figure 1: Sensors inside of smartphones.
2.1. Accelerometer

An accelerometer measures proper acceleration, which is the acceleration it experiences relative to free fall and is the acceleration felt by people and objects. To put it another way, at any point in spacetime the equivalence principle guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that frame. Such accelerations are popularly measured in terms of g-force [12].

The principle of the accelerometer is the use of inertial force. Imagine a box with six walls and a ball floating in the middle of the box because no force is applied to it (e.g., the box may be in outer space) [13]. When the box moves to the right, the ball hits the left wall. The left wall is pressure sensitive, so it can measure the force of the hit applied to it; therefore, the acceleration can be measured. Because of gravity, when the box is placed on earth, the ball keeps pressing on the bottom wall of the box, producing a constant acceleration of about 9.8 m/s². This gravity component affects the measurement when the accelerometer is used to determine the speed or displacement of an object in three dimensions, so it must be subtracted before any such measurement. However, the gravity component can also be taken advantage of to detect the rotation of a device. When a user rotates his or her smartphone, the content being viewed switches between portrait and landscape. As Figure 2 shows, when the screen of the smartphone is in the portrait orientation, the y-axis senses gravity; when the screen is in the landscape orientation, the x-axis senses gravity. Based on this, users can rotate their screens without affecting their reading experience.

Figure 2: Screen rotation.
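This orientation switch is easy to prototype. The following minimal Python sketch (an editor's illustration, not code from the paper; the axis convention and the 0.8 g threshold are assumptions) decides between portrait and landscape from a single accelerometer sample by checking which screen axis carries most of gravity.

    GRAVITY = 9.81  # m/s^2

    def screen_orientation(ax, ay, az):
        """Guess portrait/landscape from one accelerometer sample (m/s^2).

        Assumes the common phone convention: x points right across the screen,
        y points up along the screen, z points out of the display.
        """
        # If the device is lying flat, gravity is mostly on z and the
        # screen orientation is ambiguous.
        if abs(az) > 0.8 * GRAVITY:
            return "flat"
        # Otherwise, the axis that senses most of gravity tells us how
        # the device is held.
        return "portrait" if abs(ay) >= abs(ax) else "landscape"

    print(screen_orientation(0.3, 9.7, 0.5))   # held upright -> portrait
    print(screen_orientation(9.6, 0.4, 0.8))   # turned on its side -> landscape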

In theory, the displacement can be calculated as

$$ s = s_0 + v_0 t + \frac{1}{2} a t^2, \qquad (1) $$

where $s$ is the displacement, $s_0$ the initial displacement, $v_0$ the initial velocity, and $a$ the acceleration.

Equation (1) is a continuous function; the acceleration we get in a real environment is discrete due to sampling. To calculate the displacement from discrete values, (2) has to be used:

$$ a(t) \approx a_n, \qquad n\,\Delta t \le t < (n+1)\,\Delta t, \qquad (2) $$

where $a$ is the continuous acceleration, $a_n$ the $n$th sample, and $\Delta t$ the time increment.

Then, the velocity and displacement can be calculated as follows [14]:

$$ v_n = v_{n-1} + a_n\,\Delta t, \qquad s_n = s_{n-1} + v_n\,\Delta t. $$

The value the accelerometer returns is three-dimensional, as Figure 2 shows; therefore, the acceleration is calculated as

$$ \vec{a} = \vec{a}_x + \vec{a}_y + \vec{a}_z, $$

where $\vec{a}_x$, $\vec{a}_y$, and $\vec{a}_z$ are the component vectors along the three axes.
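A minimal Python sketch of the discrete integration above (the constant-acceleration test data and the 100 Hz sampling rate are invented for illustration):

    def integrate_acceleration(samples, dt, v0=0.0, s0=0.0):
        """Integrate sampled acceleration (m/s^2) twice to get displacement:
        v_n = v_{n-1} + a_n * dt,  s_n = s_{n-1} + v_n * dt."""
        v, s = v0, s0
        velocities, displacements = [], []
        for a in samples:
            v += a * dt
            s += v * dt
            velocities.append(v)
            displacements.append(s)
        return velocities, displacements

    # Example: 1 second of constant 1 m/s^2 acceleration sampled at 100 Hz.
    v, s = integrate_acceleration([1.0] * 100, dt=0.01)
    print(round(v[-1], 3), round(s[-1], 3))  # 1.0 m/s and ~0.505 m (0.5 m exact)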

2.2. Gyroscope

The accelerometer is good at measuring the displacement of an object; however, it is inaccurate for measuring the spin of the device, which is an easy task for a gyroscope.

A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. Mechanically, a gyroscope is a spinning wheel or disk in which the axle is free to assume any orientation. Although this orientation does not remain fixed, it changes in response to an external torque much less, and in a different direction, than it would without the large angular momentum associated with the disk’s high rate of spin and moment of inertia. The device’s orientation remains nearly fixed regardless of the mounting platform’s motion, because mounting the device in a gimbal minimizes external torque [15].

The gyroscope is a very sensitive device, and it is good at detecting spin movement. Like the accelerometer, the gyroscope returns three-dimensional values; the coordinate system is as Figure 2 shows. The value the gyroscope returns is the angular velocity, which indicates how fast the device rotates around the axes:

$$ \vec{\omega} = \vec{\omega}_x + \vec{\omega}_y + \vec{\omega}_z, $$

where $\vec{\omega}$ is the angular velocity and $\vec{\omega}_x$, $\vec{\omega}_y$, $\vec{\omega}_z$ are the vectors of angular velocity around the x-, y-, and z-axes.

2.3. Magnetometer

A magnetometer is a measuring instrument used to measure the strength and perhaps the direction of magnetic fields [16]. The accelerometer and gyroscope are able to detect the direction of a movement; however, this direction is relative: it obeys the coordinate system that the smartphone uses. Sometimes different smartphones need to synchronize their directions; therefore, a magnetometer is needed to get an absolute direction (a direction in the coordinate system of the earth).

The magnetometer returns three-dimensional values; if the device is placed horizontally, the orientation angle can be calculated as

$$ \theta = \arctan\!\left(\frac{m_y}{m_x}\right), $$

where $m_x$ and $m_y$ are the magnetic field components along the x- and y-axes.

Until now, we have introduced three types of sensors: accelerometer, gyroscope, and magnetometer. With the help of these three sensors, a smartphone can estimate all kinds of its own movements. However, in a real environment, measurement errors happen all the time; we will describe a way to correct the offset error of the magnetometer, and the other two sensors may use the same approach to correct their errors.

Firstly, place the magnetometer horizontally, rotate it at a uniform speed, and measure the values of $m_x$ and $m_y$; then place the z-axis horizontally and rotate again to measure the value of $m_z$. Calculate the offset value on the three axes as

$$ o_i = \frac{\max(m_i) + \min(m_i)}{2}, \qquad i \in \{x, y, z\}. $$
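The heading computation and the offset correction can be prototyped in a few lines of Python. The sketch below is illustrative only: it instantiates the formulas as reconstructed above and uses atan2 rather than a plain arctangent so that all four quadrants are handled.

    import math

    def hard_iron_offsets(mx_samples, my_samples, mz_samples):
        """Estimate the per-axis offset as the midpoint of the min and max readings."""
        return tuple((max(s) + min(s)) / 2.0 for s in (mx_samples, my_samples, mz_samples))

    def heading_degrees(mx, my, offset=(0.0, 0.0, 0.0)):
        """Orientation angle of a horizontally held device, 0..360 degrees."""
        return math.degrees(math.atan2(my - offset[1], mx - offset[0])) % 360.0

    # Toy calibration data: a circle of readings with a +10 uT bias on the x-axis.
    mx = [10 + 30 * math.cos(a / 10.0) for a in range(63)]
    my = [30 * math.sin(a / 10.0) for a in range(63)]
    off = hard_iron_offsets(mx, my, [0.0] * 63)
    print([round(o, 1) for o in off])              # approximately [10.0, 0.0, 0.0]
    print(round(heading_degrees(40.0, 0.0, off)))  # ~0 degrees (pointing along +x)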

2.4. Microphones

The microphone is a very common sensor; it is usually used for recording sound. The problem is how to deal with the recorded sound. The most common way is to find a known period of sound in a sound recording. Cross-correlation is a method to search for a piece of signal within a mixed signal [17]. In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time lag applied to one of them. This is also known as a sliding dot product or sliding inner product. It is commonly used to search a long signal for a shorter, known feature. It also has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology [18].

Cross-correlation can be calculated as (8) shows. Suppose that the known symbol pattern of the sound wave of the turn signal is $p[k]$ of length $L$ and $r[k]$ is the complex number representing the received symbol; then the cross-correlation at a shift position $d$ is

$$ C(d) = \sum_{k=0}^{L-1} p^{*}[k]\, r[k+d]. \qquad (8) $$

If we cross-correlate a sound wave with itself, for example, the sound wave shown in Figure 3, the result will be as shown in Figure 4. The spike indicates the existence of the piece of signal.

Figure 3: A sound recording of an automobile turn signal.
Figure 4: Cross-correlation.
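The search-by-cross-correlation idea can be illustrated with a short NumPy sketch (synthetic signals, not the paper's recordings): the peak of the correlation marks where the known pattern occurs inside the longer recording.

    import numpy as np

    rng = np.random.default_rng(0)

    # Known pattern (e.g., one click of a turn signal) and a noisy recording
    # that contains the pattern starting at sample 300.
    pattern = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))
    recording = 0.1 * rng.standard_normal(1000)
    recording[300:400] += pattern

    # Cross-correlate and find the lag with the largest response.
    corr = np.correlate(recording, pattern, mode="valid")
    print(int(np.argmax(corr)))  # ~300, the offset of the embedded pattern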
2.5. Camera

The camera captures visual information from the real world. From the human perspective, vision carries most of the information we receive. However, pattern recognition in the computing field is not mature enough to work as humans do. In this section, we will briefly introduce the principle of pattern recognition.

A photo the camera records can be expressed as a matrix of the light intensity of each pixel (here we take a grey-scale photo as an example). Suppose that the source matrices (or, as we call them, the dictionary) are $D_1, D_2, \ldots, D_n$ and the matrix to be recognized is $X$; then the pattern recognition proceeds as (9) shows, and the closest matrix in the dictionary is the result:

$$ i^{*} = \arg\min_{i} \lVert D_i - X \rVert. \qquad (9) $$

Pattern recognition is far more complicated than (9); there are many good algorithms in the pattern recognition area, like SIFT [19–23] and SVM [24–29]. But the recognition rate is still not good enough for practical applications.
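As a toy illustration of the dictionary-matching rule in (9) as reconstructed above (the Frobenius norm and the random data are an editor's choices, not the paper's):

    import numpy as np

    def recognize(query, dictionary):
        """Return the index of the dictionary matrix closest to the query
        under the Frobenius norm (a simple template-matching rule)."""
        distances = [np.linalg.norm(query - d) for d in dictionary]
        return int(np.argmin(distances))

    # Three 4x4 grey-scale "templates" and a noisy copy of template 2.
    rng = np.random.default_rng(1)
    dictionary = [rng.uniform(0, 255, (4, 4)) for _ in range(3)]
    query = dictionary[2] + rng.normal(0, 5, (4, 4))
    print(recognize(query, dictionary))  # -> 2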

3. Applications

In this section, we will introduce a few interesting sensing-based applications using smartphones. We divide the applications we are going to discuss into two categories: those based on the accelerometer, gyroscope, and magnetometer; and those based on the microphone and camera.

3.1. Accelerometer, Gyroscope, and Magnetometer
3.1.1. Trace Track

Searching for people in a public place is difficult; for example, when a person is in a conference hall, a library, or a shopping mall, with crowds all around, it is very hard to find the target person. Even if the person tells you where he/she is, it is frustrating to find the place in an unfamiliar environment. Maps may be helpful but are not always handy. Smartphones provide the possibility of an electronic escort service using opportunistic user intersections. Through periodically learning the walking trails of different individuals, as well as how they encounter each other in space-time, a route can be computed between any pair of persons [30]. The escort system can guide a user to the vicinity of a desired person in a public place. Escort does not rely on GPS, WiFi, or war driving to locate a person—the escort user only needs to follow an arrow displayed on the phone [30].

The Escort system presents an interesting idea: a smartphone acts as an escort which tells you how many steps you have walked and which direction you are heading [30]; see Figure 5(a). GPS is one way to achieve this; however, GPS does not work in indoor environments. WiFi localization is a good indoor localization method, but it cannot guarantee that there are enough WiFi fingerprints for localization. The Escort system uses the accelerometer and gyroscope to achieve the idea. However, the displacement calculated from the accelerometer is inaccurate; the reasons are the jerky movement of the smartphone in people’s pockets and the inherent measurement error [31–33]; the displacement error may reach 100 m after a 30 m walk, see Figure 5(b). Escort identifies an acceleration signature in human walking patterns to avoid this problem. This signature arises from the natural up-and-down bounces of the human body while walking and can be used to count the number of steps walked [30]. The physical displacement can then be computed by multiplying the step count by the user’s step size, which is a function of the user’s weight and height [30]; see Figure 5(c). Escort varies the step size with an error factor drawn from a Gaussian distribution centered on 0 with a standard deviation of 0.15 m [30]. This better accommodates the varying human step size [30].

Figure 5: (a) Accelerometer readings (smoothed) from a walking user. (b) Displacement error with double integration for two users. (c) Error with the step count method.
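A minimal sketch of step counting from the bounce signature described above; the threshold, sampling rate, and step length are illustrative assumptions, not Escort's actual parameters.

    import math

    def count_steps(accel_magnitude, threshold=11.0):
        """Count steps as upward crossings of a threshold on the acceleration
        magnitude (m/s^2); each bounce of the body produces one crossing."""
        steps, above = 0, False
        for a in accel_magnitude:
            if not above and a > threshold:
                steps, above = steps + 1, True
            elif above and a < threshold:
                above = False
        return steps

    def walked_distance(steps, step_length_m=0.7):
        """Displacement = step count x step length (step length depends on the user)."""
        return steps * step_length_m

    # Synthetic signal: gravity plus a 2 Hz walking bounce, sampled at 50 Hz for 10 s.
    signal = [9.81 + 2.0 * math.sin(2 * math.pi * 2.0 * n / 50.0) for n in range(500)]
    steps = count_steps(signal)
    print(steps, walked_distance(steps))  # 20 steps, 14.0 m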

The compass (magnetometer) is also used in the Escort system. Just as the accelerometer has measurement errors, the compass has noise, caused by several factors, including user sway, movement irregularities, magnetic fields in the surroundings, and the sensor’s internal bias. Because these factors are related to the user, the surroundings, and the sensor, the noise is difficult to predict and compensate for. To characterize the compass noise, Escort ran 100 experiments using 2 Nokia 6210 Navigator phones and observed an average bias of 8 degrees and a standard deviation of 9 degrees [30]. In addition to this large noise range, Escort made two consistent observations: when the user is walking in a constant direction, the compass readings stabilize quickly on a biased value and exhibit only small random oscillations, and after each turn, a new bias is imposed [30]. Based on these two observations, Escort identifies two states of the user: walking in a constant direction and turning. Turns are identified when the compass headings change more significantly than random oscillation would explain. The turn identification algorithm uses the following condition [30]:

$$ \lvert \mu_{t} - \mu_{t-1} \rvert > g\,\sigma_{t}, $$

where $\mu_t$ denotes the average compass reading over a time period $t$ (e.g., 1 second), $\sigma_t$ is the standard deviation of the compass readings during $t$, and $g$ is a guard factor. While on a constant direction, Escort compensates the stabilized compass reading with the average bias and reports the resulting value as the direction of the user. During turns, Escort considers the raw sequence of readings reported by the compass [30].
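A sketch of the turn-detection test as reconstructed above; since the exact statistic Escort uses is not reproduced in this text, the window contents and the guard factor below are assumptions.

    from statistics import mean, stdev

    def is_turning(prev_window, curr_window, guard=3.0):
        """Flag a turn when the change in mean heading between two windows of
        compass readings (degrees) exceeds guard times the current spread."""
        sigma = max(stdev(curr_window), 1.0)  # floor so near-identical readings cannot trigger false turns
        return abs(mean(curr_window) - mean(prev_window)) > guard * sigma

    straight = [90 + d for d in (0.5, -0.3, 0.8, -0.6, 0.2)]    # small oscillations
    after_turn = [178 + d for d in (1.0, -0.7, 0.4, -0.2, 0.6)]
    print(is_turning(straight, straight))     # False: still heading about 90 degrees
    print(is_turning(straight, after_turn))   # True: heading jumped to about 178 degrees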

A similar usage of the accelerometer and magnetometer appears in [34]. It uses the accelerometer to estimate the walking trace of a person and corrects the trace periodically with GPS. Using the accelerometer as a supplement to GPS localization is a popular approach, and much research has focused on it [35–40].

Smartphones are used not only by people who are walking but also by people in vehicles.

3.1.2. Dangerous Drive

When drivers are sitting in a vehicle, their smartphones are able to measure acceleration, velocity, and turns through the embedded sensors. Because the smartphone is so widespread, using it is an easy way to implement mobile sensing.

Driving style can be divided into two categories: nonaggressive and aggressive. To study vehicle safety systems, we need to understand and recognize driving events. Potentially aggressive driving behavior is currently a leading cause of traffic fatalities in the United States, and drivers are often unaware that they commit potentially aggressive actions daily [41]. To increase awareness and promote driver safety, a novel system has been proposed that uses Dynamic Time Warping (DTW) and smartphone-based sensor fusion (accelerometer, gyroscope, magnetometer, GPS, and video) to detect, recognize, and record these actions without external processing [41].

The system collects motion data from the accelerometer and gyroscope in the smartphone continuously at a rate of 25 Hz in order to detect specific dangerous driving behaviors. Typical dangerous driving behaviors are hard left and right turns, swerves, and sudden braking and acceleration patterns. These behaviors indicate potentially aggressive driving that would endanger both pedestrians and other drivers. The system first determines when a dangerous behavior starts and ends using endpoint detection. Once the system has a signal representing a maneuver, it can compare it to stored maneuvers (templates) to determine whether or not it matches an aggressive event [41].
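Dynamic Time Warping itself is compact enough to sketch. The distance below could be used to compare a detected maneuver window against stored templates, in the spirit of the system described above; the 1-D traces are invented for illustration, not the authors' data.

    def dtw_distance(a, b):
        """Classic O(len(a) * len(b)) DTW distance between two 1-D sequences."""
        inf = float("inf")
        n, m = len(a), len(b)
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

    # A stored "hard brake" template and two observed longitudinal traces.
    template = [0, -1, -3, -6, -6, -3, -1, 0]
    gentle_stop = [0, -1, -1, -2, -2, -1, 0, 0]
    hard_brake = [0, -2, -4, -6, -5, -3, -1, 0]
    print(dtw_distance(template, gentle_stop) > dtw_distance(template, hard_brake))  # True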

References [42–44] use the accelerometer and gyroscope for driving event recognition. Reference [42] includes two single-axis accelerometers, two single-axis gyroscopes, and a GPS unit (for velocity) attached to a PC for processing. While the system included gyroscopes for inertial measurements, they were not used in the project. Hidden Markov Models (HMM) were trained and used only on the acceleration data for the recognition of simple driving patterns. Reference [43] used accelerometer data from a mobile phone to detect drunk driving patterns through windowing and variation thresholds. A Driver Monitor System was created in [44] to monitor the driving patterns of the elderly. This system involved three cameras, a two-axis accelerometer, and a GPS receiver attached to a PC. The authors collected large volumes of data for 647 drivers. The system had many components, one of them being detection of erratic driving using accelerometers. Braking and acceleration patterns were detected, as well as high-speed turns via thresholding. Additionally, the data could be used to determine the driving environment (freeway versus city) based on acceleration patterns.

References [45–47] use the accelerometer and gyroscope for gesture recognition. The uWave paper [47] and the gesture control work of Kela et al. [48] explore gesture recognition using DTW and HMM algorithms, respectively. They both used the same set of eight simple gestures, which included up, down, left, right, two opposing-direction circles, square, and slanted 90-degree angle movements, for their final accuracy reporting. Four of these eight gestures are one-dimensional in nature. The results proved that using DTW with one training sample was just as effective as HMMs.

3.1.3. Phone Hack

The accelerometer is so sensitive that it can even detect finger presses on the screen of the smartphone. Reference [49] shows that the location of screen taps on modern smartphones and tablets can be identified from accelerometer and gyroscope readings. The findings have serious implications, as the authors demonstrate that an attacker can launch a background process on commodity smartphones and tablets, and silently monitor the user’s inputs, such as keyboard presses and icon taps [49]. While precise tap detection is nontrivial, requiring machine-learning algorithms to identify fingerprints of closely spaced keys, sensitive sensors on modern devices aid the process [49]. Reference [49] presents TapPrints, a framework for inferring the location of taps on mobile device touchscreens using motion sensor data combined with machine-learning analysis. By running tests on two different off-the-shelf smartphones and a tablet computer, the results show that identifying tap locations on the screen and inferring English letters could be done with up to 90% and 80% accuracy, respectively [49]. By optimizing the core tap detection capability with additional information, such as contextual priors, the core threat may further be magnified [49].

On both Android and iOS systems, location sensors like GPS are the only ones that require explicit user access permission; the reason probably is that people are not willing to be tracked and consider their location information private. But the accelerometer and gyroscope sensors do not require explicit user permission on either of the two mentioned operating systems. Because the sensors are mostly used for gaming, the Android system does not restrict access to the accelerometer and gyroscope; background services using the two sensors are allowed. Moreover, there is work aimed at the standardization of JavaScript access to a device’s accelerometer and gyroscope sensors in order for any web application to perform, for example, website layout adaptation [49]. Reference [49] shows that accelerometer and gyroscope sensor data can create a serious privacy breach. In particular, it demonstrates that it is possible to infer where people tap on the screen and what people type by applying machine-learning analysis to the stream of data from these two motion sensors. The reason the work focuses on the accelerometer and gyroscope sensors is that they are able to capture tiny device vibrations and angle rotations, respectively, with good precision [49].

Figure 6 shows a sample of the raw accelerometer data after a user has tapped on the screen—the timing of each tap is marked by the digital pulses on the top line. As we can see, each tap generates perturbations in the accelerometer sensor readings on the three axes, particularly visible along the z-axis, which is perpendicular to the phone’s display. Gyroscope data exhibits similar behavior and is not shown [49]. Some related works [50–52] are also about using the accelerometer and gyroscope for phone hacks.

Figure 6: The square wave in the top line identifies the occurrence of a tap. Two particular taps have also been highlighted by marking their boundaries with dashed vertical lines. Notice that the accelerometer sensor readings (on the z-axis in particular) show very distinct patterns during taps. Similar patterns can also be observed in the gyroscope.
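TapPrints relies on trained classifiers, but the tap-occurrence part of the story in Figure 6 can be illustrated with a much simpler sketch: a rolling-variance threshold on the axis perpendicular to the screen. The window size and threshold here are arbitrary assumptions, not the paper's method.

    import numpy as np

    def detect_taps(az, window=10, threshold=0.5):
        """Return sample indices where the short-term variance of the z-axis
        accelerometer jumps, which is the visible signature of a tap."""
        az = np.asarray(az, dtype=float)
        hits = [i for i in range(len(az) - window) if np.var(az[i:i + window]) > threshold]
        # Collapse runs of adjacent detections into one event each.
        return [h for k, h in enumerate(hits) if k == 0 or h - hits[k - 1] > window]

    rng = np.random.default_rng(2)
    z = 0.05 * rng.standard_normal(400)          # idle sensor noise
    for start in (100, 250):                     # two simulated taps
        z[start:start + 5] += [3.0, -2.0, 1.5, -1.0, 0.5]
    print(detect_taps(z))  # two events, near samples 100 and 250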
3.1.4. Phone Gesture

The first two subsections were about using the accelerometer, gyroscope, and magnetometer to detect long-distance movement; if the sensors are instead used to detect gestures, some other interesting applications appear.

The ability to note down small pieces of information, quickly and ubiquitously, can be useful. Reference [53] proposes a system called PhonePoint Pen that uses the in-built accelerometer in mobile phones to recognize human writing. By holding the phone like a pen, a user should be able to write short messages or even draw simple diagrams in the air. The acceleration due to hand gestures can be converted into an image and sent to the user’s Internet email address for future reference [53].

The work is done without a gyroscope, because the smartphone it used lacks one. There are two main issues to be solved: coping with background vibration (noise) and computing the displacement of the phone.

Accelerometers are sensitive to small vibrations. Figure 7(a) reports acceleration readings as the user draws a rectangle using 4 strokes (the roughly −350 units on one of the axes are due to earth’s gravity). A significant amount of jitter is caused by natural hand vibrations. Furthermore, the accelerometer itself has measurement errors. It is necessary to smooth this background vibration (noise) in order to extract jitter-free pen gestures. To cope with vibrational noise, the system smooths the accelerometer readings by applying a moving average over the last n readings. The results are presented in Figure 7(b) (the relevant movements happen in a single plane, so the remaining axis is removed from the subsequent figures for better visual representation) [53].

Figure 7: (a) Raw accelerometer data while drawing a rectangle (note gravity on one of the axes). (b) Accelerometer noise smoothing.

The phone’s displacement determines the size of the character. The displacement is computed as a double integral of acceleration, that is, $s = \iint a\,dt\,dt$, where $a$ is the instantaneous acceleration. In other words, the algorithm first computes the velocity (the integration of acceleration) followed by the displacement (the integration of velocity). However, due to errors in the accelerometer, the cumulative acceleration and deceleration values may not sum to zero even after the phone has come to rest. This offset translates into some residual constant velocity. When this velocity is integrated, the displacement and movement direction become erroneous. In order to reduce velocity-drift errors, the velocity needs to be reset to zero at some identifiable points. The stroke mechanism described earlier is therefore used: characters are drawn using a set of strokes separated by short pauses, and each pause is an opportunity to reset the velocity to zero and thus correct the displacement. Pauses are detected by using a moving window over consecutive accelerometer readings and checking whether the standard deviation in the window is smaller than some threshold. This threshold is chosen empirically based on the average vibration caused when the phone is held stationary. All acceleration values during a pause are suppressed to zero. Figure 8(a) shows the combined effect of noise smoothing and suppression. Further, the velocity is set to zero at the beginning of each pause interval. Figure 8(b) shows the effect of resetting the velocity. Even if small velocity drifts are still present, they have a small impact on the displacement of the phone [53].

Figure 8: (a) Accelerometer readings after noise smoothing and suppression. (b) Resetting velocity to zero in order to avoid velocity drifts.
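The three steps described above (moving-average smoothing, pause detection, and double integration with velocity resets) can be sketched as follows; the window sizes, thresholds, and test signal are placeholders, not the values used in [53].

    def moving_average(xs, n=5):
        """Smooth readings with a trailing moving average of the last n samples."""
        out = []
        for i in range(len(xs)):
            window = xs[max(0, i - n + 1):i + 1]
            out.append(sum(window) / len(window))
        return out

    def stddev(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    def pen_displacement(accel, dt=0.01, win=10, pause_std=0.05):
        """Integrate acceleration into displacement, resetting velocity to zero
        whenever a pause (a low-variance window) is detected between strokes."""
        accel = moving_average(accel)
        v = s = 0.0
        trace = []
        for i, a in enumerate(accel):
            window = accel[max(0, i - win + 1):i + 1]
            if len(window) == win and stddev(window) < pause_std:
                v = 0.0   # pause between strokes: kill accumulated velocity drift
                a = 0.0   # suppress residual acceleration during the pause
            v += a * dt
            s += v * dt
            trace.append(s)
        return trace

    # One stroke (triangular acceleration profile) followed by a long pause.
    up = [0.1 * i for i in range(11)]
    down = [1.0 - 0.1 * i for i in range(1, 21)]
    back = [-1.0 + 0.1 * i for i in range(1, 11)]
    trace = pen_displacement(up + down + back + [0.0] * 60)
    print(round(trace[-1] - trace[60], 6))  # 0.0: no drift accumulates during the pause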
3.2. Microphone and Camera
3.2.1. Surround Sense

There are some research works [54–57] about using smartphones to sense the context of the surroundings. A growing number of mobile computing applications are centered around the user’s location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks or McDonalds). While extensive research has been performed in physical localization, there have been few attempts to recognize logical locations. Reference [57] argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. Reference [57] postulates that ambient sound, light, and color in a place convey a photoacoustic signature that can be sensed by the phone’s camera and microphone. In-built accelerometers in some phones may also be useful in inferring broad classes of user motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close [57]; see Figures 9 and 10.

Figure 9: Sound fingerprints from 3 adjacent stores.
Figure 10: Color/light fingerprint in the HSL format from the Bean Traders’ coffee shop. Each cluster is represented by a different symbol.

Reference [57] takes advantage of the microphone and camera to collect the fingerprint of the surroundings so that it can provide logical localization.

3.2.2. Localization

References [58–60] use audio for localization, and the accuracy is improved to a high level. Reference [58] operates in a spontaneous, ad hoc, and device-to-device context without leveraging any preplanned infrastructure. It is a pure software-based solution and uses only the most basic set of commodity hardware—a speaker, a microphone, and some form of device-to-device communication—so that it is readily applicable to many low-cost sensor platforms and to most commercial-off-the-shelf mobile devices like cell phones and PDAs [58]. High accuracy is achieved through a combination of three techniques: two-way sensing, self-recording, and sample counting. To estimate the range between two devices, each will emit a specially designed sound signal (“Beep”) and collect a simultaneous recording from its microphone [58]. Each recording should contain two such beeps, one from its own speaker and the other from its peer [58]. By counting the number of samples between these two beeps and exchanging the time duration information with its peer, each device can derive the two-way time of flight of the beeps at the granularity of the sound sampling rate [58]. This technique cleverly avoids many sources of inaccuracy found in other typical time-of-arrival schemes, such as clock synchronization, non-real-time handling, and software delays [58].

Reference [58] uses beep sounds to calculate the distance between two devices: each microphone records both its own beep and its peer’s beep, and counting the samples between the two beeps yields the time interval and, from it, the distance; see Figure 11.

Figure 11: Illustration of event sequences in BeepBeep ranging procedure. The time points are marked for easy explanation and no timestamping is required in the proposed ranging mechanism.
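From the description in [58], the range follows from the two sample counts alone. The sketch below (Python) is a plausible rendering of that arithmetic, not code from the BeepBeep system; the 343 m/s speed of sound and the 10 cm speaker-to-microphone offsets are illustrative assumptions.

    SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degrees C (assumed)

    def beepbeep_distance(n_a, n_b, fs_a, fs_b, d_a=0.10, d_b=0.10):
        """Two-way sample-counting range estimate in the spirit of [58].
        n_a, n_b   : samples each device counts between its own beep and the peer's beep
        fs_a, fs_b : sampling rates of the two devices (Hz)
        d_a, d_b   : speaker-to-microphone distance on each device (m), assumed 10 cm here
        Returns the estimated device-to-device distance in meters."""
        elapsed_a = n_a / fs_a        # two-beep interval measured at device A (s)
        elapsed_b = n_b / fs_b        # two-beep interval measured at device B (s)
        # The difference of the two intervals equals twice the one-way time of flight,
        # minus the (fixed) self-hearing delays, hence the correction term.
        return SPEED_OF_SOUND * (elapsed_a - elapsed_b) / 2.0 + (d_a + d_b) / 2.0

Because only sample counts are exchanged between the devices, no clock synchronization or timestamping is needed, which is exactly the property Figure 11 illustrates.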

4. Discussion and Conclusion

Sensors are the key factor in developing ever more interesting applications on smartphones, and they are what distinguish the smartphone from traditional computing devices such as the PC. Most applications to date rely on the accelerometer and gyroscope because they are currently among the most accurate sensors. However, vision carries far richer information, and we believe the camera and pattern recognition will be used more and more in the future.

Acknowledgments

This work is supported by the National Science Foundation under Grant nos. 61103226, 60903158, 61170256, 61173172, and 61103227 and by the Fundamental Research Funds for the Central Universities under Grant nos. ZYGX2010J074 and ZYGX2011J102.

References

  1. H. Ishida, K. Suetsugu, T. Nakamoto, and T. Moriizumi, “Study of autonomous mobile sensing system for localization of odor source using gas sensors and anemometric sensors,” Sensors and Actuators A, vol. 45, no. 2, pp. 153–157, 1994.
  2. H. Ishida, T. Nakamoto, and T. Moriizumi, “Remote sensing of gas/odor source location and concentration distribution using mobile system,” Sensors and Actuators B, vol. 49, no. 1-2, pp. 52–57, 1998.
  3. S. B. Eisenman, E. Miluzzo, N. D. Lane, R. A. Peterson, G. S. Ahn, and A. T. Campbell, “BikeNet: a mobile sensing system for cyclist experience mapping,” ACM Transactions on Sensor Networks, vol. 6, no. 1, pp. 1–39, 2009.
  4. T. Choudhury, G. Borriello, S. Consolvo et al., “The mobile sensing platform: an embedded activity recognition system,” IEEE Pervasive Computing, vol. 7, no. 2, pp. 32–41, 2008.
  5. B. Lo, S. Thiemjarus, R. King et al., “Body sensor network-a wireless sensor platform for pervasive healthcare monitoring,” in Proceedings of the 3rd International Conference on Pervasive Computing, 2005.
  6. X. Tan, D. Kim, N. Usher et al., “An autonomous robotic fish for mobile sensing,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 5424–5429, October 2006.
  7. A. Arora, P. Dutta, S. Bapat et al., “A line in the sand: a wireless sensor network for target detection, classification, and tracking,” Computer Networks, vol. 46, no. 5, pp. 605–634, 2004.
  8. Y. C. Tseng, S. P. Kuo, and H. W. Lee, “Location tracking in a wireless sensor network by mobile agents and its data fusion strategies,” Information Processing in Sensor Networks, pp. 554–554, 2003.
  9. G. Werner-Allen, J. Johnson, M. Ruiz, J. Lees, and M. Welsh, “Monitoring volcanic eruptions with a wireless sensor network,” in Proceedings of the 2nd European Workshop on Wireless Sensor Networks (EWSN ’05), pp. 108–120, February 2005.
  10. G. Werner-Allen, K. Lorincz, M. Welsh et al., “Deploying a wireless sensor network on an active volcano,” IEEE Internet Computing, vol. 10, no. 2, pp. 18–25, 2006.
  11. A. T. Campbell, S. B. Eisenman, N. D. Lane et al., “The rise of people-centric sensing,” IEEE Internet Computing, vol. 12, no. 4, pp. 12–21, 2008.
  12. Accelerometer, http://en.wikipedia.org/wiki/Accelerometer.
  13. A Guide To using IMU (Accelerometer and Gyroscope Devices) in Embedded Applications, http://www.starlino.com/imu_guide.html.
  14. M. Arraigada and M. Partl, “Calculation of displacements of measured accelerations, analysis of two accelerometers and application in road engineering,” in Proceedings of the Swiss Transport Research Conference, 2006.
  15. Gyroscope, http://en.wikipedia.org/wiki/Gyroscope.
  16. Magnetometer, http://en.wikipedia.org/wiki/Magnetometer.
  17. S. Sen, R. Roy Choudhury, and S. Nelakuditi, “CSMA/CN: carrier sense multiple access with collision notification,” IEEE/ACM Transactions on Networking, vol. 20, no. 2, pp. 544–556, 2012.
  18. Cross-correlation, http://en.wikipedia.org/wiki/Cross-correlation.
  19. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  20. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in Proceedings of the European Conference on Computer Vision (ECCV ’06), pp. 404–417, Springer, Berlin, Germany, 2006.
  21. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’05), pp. 886–893, June 2005.
  22. K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
  23. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
  24. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  25. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
  26. C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.
  27. C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
  28. R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, ACM Press, New York, NY, USA, 1999.
  29. B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
  30. I. Constandache, X. Bao, M. Azizyan, and R. R. Choudhury, “Did you see Bob?: human localization using mobile phones,” in Proceedings of the 16th Annual Conference on Mobile Computing and Networking (MobiCom ’10), pp. 149–160, September 2010.
  31. D. M. Boore, “Effect of baseline corrections on displacements and response spectra for several recordings of the 1999 Chi-Chi, Taiwan, earthquake,” Bulletin of the Seismological Society of America, vol. 91, no. 5, pp. 1199–1211, 2001.
  32. D. M. Boore, C. D. Stephens, and W. B. Joyner, “Comments on baseline correction of digital strong-motion data: examples from the 1999 Hector Mine, California, earthquake,” Bulletin of the Seismological Society of America, vol. 92, no. 4, pp. 1543–1560, 2002.
  33. D. M. Boore, “Analog-to-digital conversion as a source of drifts in displacements derived from digital recordings of ground acceleration,” Bulletin of the Seismological Society of America, vol. 93, no. 5, pp. 2017–2024, 2003.
  34. I. Constandache, R. R. Choudhury, and I. Rhee, “Towards mobile phone localization without war-driving,” in Proceedings of the IEEE INFOCOM, pp. 1–9, March 2010.
  35. J. Paek, J. Kim, and R. Govindan, “Energy-efficient rate-adaptive GPS-based positioning for smartphones,” in Proceedings of the 8th Annual International Conference on Mobile Systems, Applications and Services (MobiSys ’10), pp. 299–314, June 2010.
  36. D. H. Kim, Y. Kim, D. Estrin, and M. B. Srivastava, “SensLoc: sensing everyday places and paths using less energy,” in Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems (SenSys ’10), pp. 43–56, November 2010.
  37. K. Lee, I. Rhee, J. Lee, S. Chong, and Y. Yi, “Mobile data offloading: how much can WiFi deliver?” in Proceedings of the 6th International Conference on Emerging Networking Experiments and Technologies (Co-NEXT ’10), p. 26, December 2010.
  38. A. Thiagarajan, J. Biagioni, T. Gerlich, and J. Eriksson, “Cooperative transit tracking using smart-phones,” in Proceedings of the 8th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’10), pp. 85–98, November 2010.
  39. A. Schulman, V. Navda, R. Ramjee et al., “Bartendr: a practical approach to energy-aware cellular data scheduling,” in Proceedings of the 16th Annual Conference on Mobile Computing and Networking (MobiCom ’10), pp. 85–96, September 2010.
  40. S. P. Tarzia, P. A. Dinda, R. P. Dick, and G. Memik, “Indoor localization without infrastructure using the acoustic background spectrum,” in Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services (MobiSys ’11), pp. 155–168, July 2011.
  41. D. A. Johnson and M. M. Trivedi, “Driving style recognition using a smartphone as a sensor platform,” in Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (ITSC ’11), pp. 1609–1615, 2011.
  42. D. Mitrović, “Reliable method for driving events recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 2, pp. 198–205, 2005.
  43. J. Dai, J. Teng, X. Bai, Z. Shen, and D. Xuan, “Mobile phone based drunk driving detection,” in Proceedings of the 4th International Conference on Pervasive Computing Technologies for Healthcare, pp. 1–8, March 2010.
  44. K. C. Baldwin, D. D. Duncan, and S. K. West, “The driver monitor system: a means of assessing driver performance,” Johns Hopkins APL Technical Digest, vol. 25, no. 3, pp. 269–277, 2004.
  45. G. Ten Holt, M. Reinders, and E. Hendriks, “Multi-dimensional dynamic time warping for gesture recognition,” in Proceedings of the Conference of the Advanced School for Computing and Imaging (ASCI ’07), 2007.
  46. R. Muscillo, S. Conforto, M. Schmid, P. Caselli, and T. D’Alessio, “Classification of motor activities through derivative dynamic time warping applied on accelerometer data,” in Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS ’07), pp. 4930–4933, 2007.
  47. J. Liu, L. Zhong, J. Wickramasuriya, and V. Vasudevan, “uWave: accelerometer-based personalized gesture recognition and its applications,” Pervasive and Mobile Computing, vol. 5, no. 6, pp. 657–675, 2009.
  48. J. Kela, P. Korpipää, and J. Mäntyjärvi, “Accelerometer-based gesture control for a design environment,” Personal and Ubiquitous Computing, vol. 10, no. 5, pp. 285–299, 2006.
  49. E. Miluzzo, A. Varshavsky, S. Balakrishnan et al., “Tapprints: your finger taps have fingerprints,” in Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pp. 323–336, Low Wood Bay, Lake District, UK, 2012.
  50. L. Cai and H. H. Chen, “TouchLogger: inferring keystrokes on touch screen from smartphone motion,” in Proceedings of the 6th USENIX Conference on Hot Topics in Security, p. 9, San Francisco, Calif, USA, 2011.
  51. E. Owusu, J. Han, S. Das et al., “ACCessory: password inference using accelerometers on smartphones,” in Proceedings of the 12th Workshop on Mobile Computing Systems & Applications, pp. 1–6, San Diego, Calif, USA, 2012.
  52. L. Cai, S. Machiraju, and H. Chen, “Defending against sensor-sniffing attacks on mobile phones,” in Proceedings of the 1st ACM Workshop on Networking, Systems, and Applications for Mobile Handhelds, pp. 31–36, Barcelona, Spain, 2009.
  53. S. Agrawal, I. Constandache, and S. Gaonkar, “PhonePoint pen: using mobile phones to write in air,” in Proceedings of the 1st ACM Workshop on Networking, Systems, and Applications for Mobile Handhelds, pp. 1–6, Barcelona, Spain, 2009.
  54. B. Clarkson, K. Mase, and A. Pentland, “Recognizing user context via wearable sensors,” in Proceedings of the 4th International Symposium on Wearable Computers, pp. 69–75, October 2000.
  55. S. Gaonkar, J. Li, and R. R. Choudhury, “Micro-Blog: sharing and querying content through mobile phones and social participation,” in Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pp. 174–186, Breckenridge, Colo, USA, 2008.
  56. E. Miluzzo, N. D. Lane, and K. Fodor, “Sensing meets mobile social networks: the design, implementation and evaluation of the CenceMe application,” in Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems, pp. 337–350, Raleigh, NC, USA, 2008.
  57. M. Azizyan and R. R. Choudhury, “SurroundSense: mobile phone localization using ambient sound and light,” SIGMOBILE Mobile Computing and Communications Review, vol. 13, no. 1, pp. 69–72, 2009.
  58. C. Peng, G. Shen, Y. Zhang, Y. Li, and K. Tan, “BeepBeep: a high accuracy acoustic ranging system using COTS mobile devices,” in Proceedings of the 5th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’07), pp. 1–14, Sydney, Australia, November 2007.
  59. A. Mandal, C. V. Lopes, T. Givargis, A. Haghighat, R. Jurdak, and P. Baldi, “Beep: 3D indoor positioning using audible sound,” in Proceedings of the 2nd IEEE Consumer Communications and Networking Conference (CCNC ’05), pp. 348–353, January 2005.
  60. C. Peng, G. Shen, Y. Zhang, Y. Li, and K. Tan, “BeepBeep: a high accuracy acoustic ranging system using COTS mobile devices,” in Proceedings of the 5th ACM International Conference on Embedded Networked Sensor Systems (SenSys ’07), pp. 397–398, Sydney, Australia, November 2007.

Windows Phone is as good as dead

 


Image: Mashable composite, Microsoft

Analysis

With Wednesday’s layoffs, Microsoft, saddled with the losing mobile hand that is Windows Phone, has essentially folded. The bulk of the 7,800 people let go are from the company’s phone division, a tacit admission that its big plans for Windows Phone haven’t exactly worked out.

The company’s not leaving the casino, though: Windows Phone, the platform, isn’t going anywhere, even as Microsoft greatly scales back its hardware ambitions. The company has labored for years to create both a full-featured mobile operating system as well as an ecosystem of devices — PC, phone, tablet and more — that all use the same code base. It would be silly to just abandon its mobile platform, especially as people spend more and more of their time on smartphones.

In fact, if you’re not one of the 7,800 people losing their jobs, there’s actually a lot to like in Satya Nadella’s explanation: Microsoft will continue to build Windows Lumia handsets, but only three types: flagships, business-focused enterprise phones and low-end budget devices.

 

They’re retreating from being a mainstream player

“They’re retreating from being a mainstream player,” says Martin Reynolds, vice president at Gartner Research. “They’ll continue to bring products to market, but not particularly aggressively.” The move represents a clear refocusing, putting Microsoft’s phones in the arenas where they might actually score a few punches before Android and the iPhone walk away with all the market share. It also rightly ditches the current strategy of offering several different Lumias, each with region-specific models, which led to a muddled brand and a confusing marketing message for consumers.

Retreating forward

Even without the model shake-up, trimming the fat on the handset business Microsoft acquired from Nokia was probably inevitable. With a few exceptions (hello, curved screens), smartphone design and technology have more or less plateaued; it’s no coincidence that both iOS and Android have essentially taken a “bye” in 2015, with few feature updates. Big hardware teams aren’t really needed to build good smartphones in 2015, as illustrated by upstart Chinese companies like Xiaomi and OnePlus.

“Things have changed in the last few years,” says Reynolds. “You don’t have [to be a] big company to run a small phone business. They certainly don’t need the design teams and manufacturing people going forward.”

Still, there are new Lumias — and certainly a new flagship — coming soon. A couple of months after Windows 10 debuts, Windows 10 for phones will arrive, and, as Nadella suggested in his letter to employees, those phones will emphasize the big differentiators in the Windows ecosystem. Commentators like Daniel Rubino at Windows Central almost have you believing that a leaner-and-meaner Microsoft mobile division will be poised to succeed, albeit with lowered expectations, once Windows 10 is fully formed.

Almost.

That point of view overlooks the crux of the matter: Windows Phone’s fate was never in the hands of Microsoft. What the company does in mobile at this point is virtually irrelevant. It designed a beautiful (and influential!) user interface, offered sweeter deals to developers than competitors, and helped engineer some of the most sophisticated cameras ever seen on mobile.

None of it mattered. Developers and consumers didn’t respond, locked in a deadly catch-22: If the apps weren’t there, consumers wouldn’t buy the phones; if there weren’t enough people on the platform, developers wouldn’t bother creating apps.

“Windows Phone is not even a blip on [developers’] radar,” says Richard Hay, a longtime Microsoft observer and contributor to SuperSite for Windows. “They’re not going to start flocking to it, because what’s the draw? You’re still going to have the app gap.”

The “app gap” ultimately dug Windows Phone’s grave, and even though it’s only got one foot in it, today’s news will be widely perceived as an admission that the other will soon follow. If there were other Windows Phone manufacturers, it might be a different story, but Microsoft makes 97% of the Windows Phones being sold, according to AdDuplex. If they’re scaling back, who’s going to step up?

Windows 10 and mobile

If there’s a way Microsoft can resuscitate Windows Phone, it’s with Windows 10. Its ecosystem strategy doesn’t depend on it, but with the new OS, Windows Phones will be more connected to the platform than before, sharing all the same code and development tools.

“The universal Windows platform helps,” says Hay. “Will that persuade developers to develop for handsets and smaller tablets? Is it enough to come back from the edge? I’m just not sure it is.”

That means Windows developers will be able to create Windows Phone versions of their apps with minimal effort, and some of Windows Phone’s key differentiators, like Cortana, will get a chance to shine on PCs, which could ultimately have a positive impact on the platform.

Finally, there’s Continuum, the feature that allows Windows apps to adapt from PC to tablet to phone seamlessly and lets a Windows Phone theoretically act as your PC when it’s plugged into an external screen. And although there will always be performance concerns when trying to do PC-class work with a mobile processor, it’s a pretty cool trick.

Continuum, though, has only the slimmest chance of being the ace in the hole that wins the day — any day — for Windows Phone. Even for enterprise customers, it’s hard to picture any of Windows 10’s differentiators winning over users, especially now that we’re firmly in a BYOD and single-device world.

 

Even if Continuum ends up being an X factor, who’s left to give Windows Phones a chance?

Even if Continuum ends up being an X factor, who’s left to give Windows Phones a chance? Microsoft is certainly hoping today’s belt-tightening and the Windows 10 launch will lead to some kind of success in mobile, albeit on redefined terms. But it’s not acting in a vacuum. Today’s retreat, or rather the perception of it, may have sealed Windows Phone’s fate. Who would believe the recommendation of a Windows handset after today? Without those new users, developers will have even less incentive to create apps. And without those experiences, Windows Phone will be even more of a shell than it is now. What then?

It’s admirable that Microsoft is taking painful steps to preserve what it’s built, but it’s hard not to see its Windows Phone restructuring as delaying the inevitable. Yes, by reducing its ambitions, it’s no longer losing on big bets. But in mobile, there really isn’t a low-stakes table.

Source: http://mashable.com/2015/07/08/windows-phone-dead/