accelerometer and gyroscope sensor data can create a serious privacy breach

Source:

http://www.hindawi.com/journals/ijdsn/2013/272916/

 

International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 272916, 11 pages
http://dx.doi.org/10.1155/2013/272916
Research Article

A Study of Mobile Sensing Using Smartphones

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Received 8 December 2012; Accepted 15 January 2013

Academic Editor: Chao Song

Copyright © 2013 Ming Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Traditional mobile sensing-based applications require extra equipment, which is impractical for most users. Smartphones have developed rapidly in recent years and are becoming an indispensable part of daily life. The sensors embedded in them open up a wide range of possibilities for mobile applications, and these applications are helping and changing the way we live. In this paper, we analyze and discuss existing mobile applications and then point out future directions.

1. Introduction

The word sensing builds a bridge between the real world and the virtual world; with the help of various sensors, man-made devices are able to feel the world as God-made creatures do. The bell may be the first generation of sensor: people tie a bell to a string so that when the string vibrates, the bell rings. The bell is a powerful and effective sensor; it contains two parts, detection and processing. When a bell detects a vibration, it generates a period of ringing, and the volume of the ringing is proportional to the amplitude of the vibration. However, the bell is a kind of sensor that connects the real world to the real world. With the development of electronic devices, a new man-made world has been built. This world is called the virtual world; many complicated calculations run in this world so that people in the real world can enjoy their lives. The virtual world needs data to keep running, and feeding it by human operation alone is far from enough. A sensor is a way to sense the world and translate the sensed information into the data form of the virtual world; therefore, sensing has become an important topic in both research and industry.

Early sensing-based applications were mostly used for research purposes or in specific areas. References [1, 2] propose localization methods for finding odor sources using gas sensors and anemometric sensors. Reference [3] uses a number of sensors embedded in a cyclist's bicycle to gather quantitative data about the cyclist's rides; this information is useful for mapping the cyclist experience. Reference [4] uses body-worn sensors to build an activity recognition system, and [5] uses body-worn sensors for healthcare monitoring. Reference [6] proposes a robotic fish carrying sensors for mobile sensing. Also, in Wireless Sensor Networks (WSN), there are many sensing-based applications. References [7, 8] deploy wireless sensors to track the movement of mobile objects. References [9, 10] deploy sensors for monitoring volcanoes.

People-centric sensing, mentioned in [11], uses smartphones for mobile sensing. Smartphones are very popular and have become an indispensable carry-on in recent years; they are embedded with various sensors that can be used for many interesting applications. Unlike special-purpose sensors used in specific areas, the sensors in smartphones offer nearly unlimited possibilities for applications that help and change people's lives; also, using a smartphone instead of special equipment makes an application easier for users to accept.

In this paper, we discuss some existing, interesting sensing-based applications using smartphones and suggest possible future directions. Section 2 gives detailed descriptions of the sensors embedded in modern smartphones; Section 3 introduces some sensing-based applications; Section 4 gives a conclusion and future directions.

2. Sensors in Smartphones

As Figure 1 shows, modern smartphones have several kinds of sensors. The most popular sensors which most smartphones have are accelerometer, gyroscope, magnetometer, microphone, and camera. In this section, we will discuss the characteristics of the sensors.

Figure 1: Sensors inside of smartphones.
2.1. Accelerometer

An accelerometer measures proper acceleration, which is the acceleration it experiences relative to free fall and is the acceleration felt by people and objects. To put it another way, at any point in spacetime the equivalence principle guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that frame. Such accelerations are popularly measured in terms of g-force [12].

The principle of the accelerometer is inertial force. Imagine a box with six walls and a ball floating in the middle because no force acts on the ball (e.g., the box may be in outer space) [13]. When the box moves to the right, the ball hits the left wall. The left wall is pressure sensitive and can measure the force of the hit; therefore, the acceleration can be measured. Because of gravity, when the box is placed on earth, the ball keeps pressing on the bottom wall, producing a constant acceleration of about 9.8 m/s2. This gravity component corrupts any attempt to measure the speed or displacement of an object in three dimensions, so it must be subtracted before such measurements. However, gravity can be turned to advantage for detecting the rotation of a device. When a user rotates a smartphone, the content he/she is watching switches between portrait and landscape. As Figure 2 shows, when the screen of the smartphone is in portrait orientation, the y-axis senses gravity; when the screen is in landscape orientation, the x-axis senses gravity. Based on this, users can rotate their screens without disturbing their reading experience.

Figure 2: Screen rotation.
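
The portrait/landscape decision described above can be sketched in a few lines. This is an illustrative sketch, not the handset's actual firmware; the axis convention (y along the long edge of the screen, x along the short edge) and the 0.7 g dominance threshold are assumptions:

```python
def orientation_from_gravity(ax, ay, az, g=9.8, threshold=0.7):
    """Return 'portrait' or 'landscape' from accelerometer readings (m/s^2).

    If the y-axis carries most of gravity, the phone is upright (portrait);
    if the x-axis does, it is on its side (landscape). Returns None when
    neither axis dominates (e.g., phone lying flat on a table).
    """
    if abs(ay) > threshold * g:
        return "portrait"
    if abs(ax) > threshold * g:
        return "landscape"
    return None

print(orientation_from_gravity(0.1, 9.8, 0.2))   # phone held upright
print(orientation_from_gravity(9.7, 0.3, 0.1))   # phone on its side
```

Real systems additionally debounce the decision over time so the screen does not flip during brief shakes.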

In theory, the displacement can be calculated as

s = s_0 + v_0 t + (1/2) a t^2, (1)

where s: displacement, s_0: initial displacement, v_0: initial velocity, and a: acceleration.

Equation (1) is a continuous function; the acceleration a we get in a real environment is discrete due to sampling. To calculate the displacement from discrete values, (2) has to be used:

a(t) = a_n, for n Δt ≤ t < (n+1) Δt, (2)

where a(t): continuous acceleration, a_n: the nth sample, and Δt: the time increment.

Then, the velocity and displacement can be calculated as follows [14]:

v_n = v_{n-1} + a_n Δt, s_n = s_{n-1} + v_n Δt. (3)

The value the accelerometer returns is three-dimensional, as Figure 2 shows; therefore, the overall acceleration a is calculated as follows:

a = sqrt(a_x^2 + a_y^2 + a_z^2), (4)

where a_x, a_y, and a_z are the components along the three axes.
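
The discrete integration just described can be sketched as follows; the rectangle-rule update and the constant-acceleration example are illustrative, not taken from the paper:

```python
import math

def integrate_motion(samples, dt, v0=0.0, s0=0.0):
    """Discrete double integration: acceleration -> velocity -> displacement.

    samples: per-sample accelerations a_n (m/s^2) along one axis; dt: the
    sampling interval in seconds. Uses the simple rectangle rule
    v_n = v_{n-1} + a_n*dt, then s_n = s_{n-1} + v_n*dt.
    """
    v, s = v0, s0
    for a in samples:
        v += a * dt
        s += v * dt
    return v, s

def magnitude(ax, ay, az):
    """Combine the three axis readings into one scalar acceleration."""
    return math.sqrt(ax**2 + ay**2 + az**2)

# One second of constant 2 m/s^2 acceleration sampled at 100 Hz:
v, s = integrate_motion([2.0] * 100, 0.01)
print(v, s)  # velocity near 2.0 m/s, displacement near 1.0 m
```

Note the displacement overshoots the analytic 1.0 m slightly; this discretization error, on top of sensor noise, is exactly why double integration drifts in practice.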

2.2. Gyroscope

The accelerometer is good at measuring the displacement of an object; however, it is inaccurate for measuring the spin of the device, which is an easy task for a gyroscope.

A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. Mechanically, a gyroscope is a spinning wheel or disk whose axle is free to assume any orientation. Although this orientation does not remain fixed, it changes in response to an external torque much less, and in a different direction, than it would without the large angular momentum associated with the disk's high rate of spin and moment of inertia. The device's orientation remains nearly fixed, regardless of the mounting platform's motion, because mounting the device in a gimbal minimizes external torque [15].

The gyroscope is a very sensitive device and is good at detecting spin movement. Like the accelerometer, the gyroscope returns three-dimensional values; the coordinate system is as Figure 2 shows. The value the gyroscope returns is angular velocity, which indicates how fast the device rotates around the axes. The overall angular velocity can be calculated as

ω = sqrt(ω_x^2 + ω_y^2 + ω_z^2), (5)

where ω: angular velocity and ω_x, ω_y, ω_z: the angular velocities around the x-, y-, and z-axes.

2.3. Magnetometer

A magnetometer is an instrument used to measure the strength and, in some cases, the direction of magnetic fields [16]. The accelerometer and gyroscope are able to detect the direction of a movement; however, that direction is relative, obeying the coordinate system the smartphone uses. Sometimes, different smartphones need to synchronize their directions; therefore, a magnetometer is needed to get an absolute direction (a direction in the coordinate system of the earth).

The magnetometer returns three-dimensional values; if the device is placed horizontally, the orientation angle can be calculated as

θ = arctan(m_y / m_x), (6)

where m_x and m_y are the magnetic field components measured along the x- and y-axes.
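
The heading computation can be sketched as below. atan2 is used instead of a bare arctangent so that all four quadrants resolve correctly; the wrap to 0–360 degrees is an added convenience, not something the paper specifies:

```python
import math

def heading_degrees(mx, my):
    """Orientation angle of a horizontally held device from the magnetometer
    x/y components, measured counterclockwise from the x-axis in degrees.
    atan2 handles mx == 0 and resolves all four quadrants."""
    return math.degrees(math.atan2(my, mx)) % 360.0

print(heading_degrees(1.0, 0.0))    # field along +x: 0 degrees
print(heading_degrees(0.0, -1.0))   # field along -y: 270 degrees
```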

So far, we have introduced three types of sensors: accelerometer, gyroscope, and magnetometer. With the help of these three sensor types, a smartphone can estimate all kinds of its own movements. However, in real environments, measurement errors happen all the time; we will describe a way to correct the offset error of the magnetometer, and the other two sensors can be corrected in a similar way.

First, place the magnetometer horizontally and rotate it at a uniform speed, measuring the values of m_x and m_y; then place the device so that the remaining axis lies in the horizontal plane and rotate again to measure m_z. The offset on each axis is then taken as the midpoint of the extremes observed:

offset_i = (max(m_i) + min(m_i)) / 2, i ∈ {x, y, z}. (7)
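
A minimal sketch of this midpoint-of-extremes offset estimate, one axis at a time; the sample values below are invented for illustration:

```python
def axis_offset(readings):
    """Offset (bias) of one magnetometer axis, estimated as the midpoint of
    the extremes seen while slowly rotating the device through a full turn:
    offset = (max + min) / 2. Applied independently to each axis."""
    return (max(readings) + min(readings)) / 2.0

# A field of amplitude 30 uT riding on a +5 uT bias (sparse samples):
samples = [5 + 30, 5 - 30, 5 + 21, 5 - 12]
print(axis_offset(samples))  # 5.0
```

The estimate is only as good as the rotation's coverage: if the sweep misses the true extremes, the recovered offset is biased.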

2.4. Microphones

The microphone is a very common sensor; it is usually used for recording sound. The problem is how to process the recorded sound. The most common approach is to find a known period of sound within a recording. Cross-correlation is a method for searching for a piece of signal inside a mixed signal [17]. In signal processing, cross-correlation is a measure of the similarity of two waveforms as a function of a time lag applied to one of them. This is also known as a sliding dot product or sliding inner product. It is commonly used for searching a long signal for a shorter, known feature. It also has applications in pattern recognition, single-particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology [18].

Cross-correlation can be calculated as (8) shows. Suppose that the known symbol pattern of the sound wave of the turn signal is s[0], …, s[L-1] of length L, and r[n] is the complex number representing the nth received symbol; then the cross-correlation at a shift position d is

C(d) = Σ_{n=0}^{L-1} s[n]* r[n+d], (8)

where * denotes the complex conjugate.

If we cross-correlate a sound wave with itself, for example, the sound wave shown in Figure 3, the result is as shown in Figure 4. The spike indicates the existence of the sought piece of signal.

Figure 3: A sound record of turn signal of automobile.
Figure 4: Cross-correlation.
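
The sliding cross-correlation can be sketched with NumPy as below. The synthetic chirp, noise level, and burial position are all invented for illustration; real-valued signals are used, so the conjugation in (8) is a no-op:

```python
import numpy as np

def find_pattern(received, pattern):
    """Locate a known waveform inside a longer recording by sliding
    cross-correlation: C(d) = sum_n pattern[n] * received[n + d].
    Returns the shift with the largest correlation (the 'spike')."""
    corr = np.correlate(received, pattern, mode="valid")
    return int(np.argmax(corr)), corr

# Bury a short tone in noise and recover its position:
rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0.0, 20.0, 50))
received = rng.normal(0.0, 0.2, 300)
received[120:170] += pattern
shift, corr = find_pattern(received, pattern)
print(shift)  # expected near 120, where the tone was inserted
```

The spike in `corr` at the recovered shift corresponds to the spike seen in Figure 4.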
2.5. Camera

The camera captures visual information from the real world. From the human perspective, vision carries most of the information we receive. However, pattern recognition by computers is not yet mature enough to match human performance. In this section, we briefly introduce the principle of pattern recognition.

A photo the camera records can be expressed as a matrix of light intensities, one per pixel (here we take a grey-scale photo as an example). Suppose that the source matrices (or, as we call them, the dictionary) are M_1, …, M_k and the matrix to be recognized is R; then the pattern recognition proceeds as (9) shows, and the dictionary matrix that minimizes the difference is the result:

i* = argmin_i ||M_i − R||. (9)

Pattern recognition is far more complicated than (9); there are many good algorithms in the pattern recognition area, like SIFT [19–23] and SVM [24–29]. But the recognition rate is still not good enough for practical applications.
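
A toy version of nearest-template matching in the sense of (9); the 2x2 grey-level "images" are invented, and real recognizers are far more sophisticated, as the text notes:

```python
import numpy as np

def recognize(dictionary, target):
    """Nearest-template pattern recognition: return the index of the
    dictionary matrix closest to the target in Frobenius norm,
    i.e. argmin_i || M_i - R ||."""
    dists = [np.linalg.norm(m - target) for m in dictionary]
    return int(np.argmin(dists))

# Two 2x2 grey-level templates and a noisy copy of the second one:
templates = [np.array([[0.0, 255.0], [0.0, 255.0]]),
             np.array([[255.0, 0.0], [255.0, 0.0]])]
query = np.array([[250.0, 5.0], [248.0, 3.0]])
print(recognize(templates, query))  # 1
```

This brute-force matcher fails under shift, scale, or lighting changes, which is exactly what methods like SIFT are designed to tolerate.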

3. Applications

In this section, we introduce a few interesting sensing-based applications using smartphones. We divide the applications into two categories: those based on the accelerometer, gyroscope, and magnetometer; and those based on the microphone and camera.

3.1. Accelerometer, Gyroscope, and Magnetometer
3.1.1. Trace Track

Searching for a person in a public place is difficult; for example, when a person is in a conference hall, a library, or a shopping mall, with crowds all around, it is very hard to find the target person. Even if the person tells you where he/she is, it is frustrating to find the place in an unfamiliar environment. Maps may be helpful but are not always handy. Smartphones make it possible to develop an electronic escort service using opportunistic user intersections. By periodically learning the walking trails of different individuals, as well as how they encounter each other in space-time, a route can be computed between any pair of persons [30]. The escort system can guide a user to the vicinity of a desired person in a public place. Escort does not rely on GPS, WiFi, or war driving to locate a person; the escort user only needs to follow an arrow displayed on the phone [30].

The escort system presents an interesting idea: the smartphone is an escort that tells you how many steps you have walked and which direction you are heading [30]; see Figure 5(a). GPS is one way to achieve this; however, GPS does not work indoors. WiFi localization is a good indoor method, but it cannot guarantee that there are enough WiFi fingerprints for localization. The escort system instead uses the accelerometer and gyroscope. However, the displacement calculated from the accelerometer is inaccurate; the reasons are the jerky movement of the smartphone in people's pockets and the inherent measurement error [31–33]; the displacement error may reach 100 m after a 30 m walk; see Figure 5(b). Escort avoids this problem by identifying an acceleration signature in human walking patterns. This signature arises from the natural up-and-down bounces of the human body while walking and can be used to count the number of steps walked [30]. The physical displacement can then be computed by multiplying the step count by the user's step size, which is a function of the user's weight and height [30]; see Figure 5(c). Escort varies the step size with an error factor drawn from a Gaussian distribution centered on 0 with standard deviation 0.15 m [30]. This better accommodates the varying human step size [30].

Figure 5: (a) Accelerometer readings (smoothened) from a walking user. (b) Displacement error with double integration for two users. (c) Error with step count method.
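
The step-count approach can be sketched as a threshold crossing on the acceleration magnitude; the 10.5 m/s^2 threshold and the 0.7 m default step size below are illustrative assumptions, not escort's calibrated values:

```python
def count_steps(accel_magnitudes, threshold=10.5):
    """Count steps from the vertical-bounce signature in accelerometer
    magnitudes (m/s^2): one step per upward crossing of a threshold set
    just above gravity."""
    steps = 0
    above = False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps

def walked_distance(steps, step_size_m=0.7):
    """Displacement = step count x step size (in escort, the step size
    is derived from the user's weight and height)."""
    return steps * step_size_m

# Three synthetic bounces around gravity (~9.8 m/s^2):
trace = [9.8, 11.0, 9.5, 9.8, 11.2, 9.6, 9.7, 10.9, 9.8]
print(count_steps(trace))   # 3
print(walked_distance(3))   # about 2.1 m
```

Unlike double integration, this estimate does not accumulate drift, which is why it stays usable over long walks.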

The compass (magnetometer) is also used in the escort system. Just as the accelerometer has measurement errors, the compass is noisy; the noise is caused by several factors, including user sway, movement irregularities, magnetic fields in the surroundings, and the sensor's internal bias. Because these factors depend on the user, the surroundings, and the sensor, the noise is difficult to predict and compensate. To characterize the compass noise, escort ran 100 experiments using 2 Nokia 6210 Navigator phones and observed an average bias of 8 degrees and a standard deviation of 9 degrees [30]. In addition to this large noise range, escort made two consistent observations: when the user is walking in a constant direction, the compass readings stabilize quickly on a biased value and exhibit only small random oscillations, and after each turn, a new bias is imposed [30]. Based on these two observations, escort identifies two states of the user: walking in a constant direction and turning. Turns are identified when the compass headings change more significantly than random oscillation alone would explain. The turn identification algorithm uses the following condition [30]:

|μ_t − μ_{t−1}| > β σ, (10)

where μ_t denotes the average compass reading over a time period t (e.g., 1 second), σ is the standard deviation of the compass readings during t, and β is a guard factor. While on a constant direction, escort compensates the stabilized compass reading with the average bias and reports the resulting value as the direction of the user. During turns, escort considers the sequence of readings reported by the compass [30].
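
Escort's turn test can be sketched as a comparison of windowed means against a guard band; the heading values and guard factor below are invented for illustration:

```python
import statistics

def is_turn(window_prev, window_cur, k=3.0):
    """Flag a turn when the change in mean compass heading between two
    consecutive windows exceeds k times the heading's standard deviation
    (k plays the role of the guard factor)."""
    mu_prev = statistics.mean(window_prev)
    mu_cur = statistics.mean(window_cur)
    sigma = statistics.pstdev(window_prev) or 1e-6  # avoid a zero guard band
    return abs(mu_cur - mu_prev) > k * sigma

straight = [90.0, 91.0, 89.5, 90.5]          # stable, slightly noisy heading
after_turn = [179.0, 181.0, 180.5, 179.5]    # heading after a 90-degree turn
print(is_turn(straight, after_turn))                   # True
print(is_turn(straight, [90.2, 90.8, 89.9, 90.1]))     # False
```

A production version would also unwrap headings across the 0/360 boundary, which this sketch ignores.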

A similar use of the accelerometer and magnetometer appears in [34]. It uses the accelerometer to estimate a person's walking trace and corrects the trace periodically with GPS. Using the accelerometer as a supplement to GPS localization is popular, and much research has focused on this [35–40].

The smartphone can be used not only by people walking but also by people in vehicles.

3.1.2. Dangerous Drive

When drivers are sitting in a vehicle, their smartphones are able to measure the acceleration, velocity, and turns through the embedded sensors. Because smartphones are so popular, they are an easy way to implement mobile sensing.

Driving style can be divided into two categories: nonaggressive and aggressive. To study vehicle safety systems, we need to understand and recognize driving events. Potentially aggressive driving behavior is currently a leading cause of traffic fatalities in the United States, and drivers are often unaware that they commit potentially aggressive actions daily [41]. To increase awareness and promote driver safety, a novel system was proposed that uses Dynamic Time Warping (DTW) and smartphone-based sensor fusion (accelerometer, gyroscope, magnetometer, GPS, and video) to detect, recognize, and record these actions without external processing [41].

The system continuously collects motion data from the accelerometer and gyroscope in the smartphone at a rate of 25 Hz in order to detect specific dangerous driving behaviors. Typical dangerous driving behaviors are hard left and right turns, swerves, and sudden braking and acceleration patterns. These behaviors indicate potentially aggressive driving that could endanger both pedestrians and other drivers. The system first determines when a dangerous behavior starts and ends using endpoint detection. Once the system has a signal representing a maneuver, it can compare it to stored maneuvers (templates) to determine whether or not it matches an aggressive event [41].
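
The template comparison rests on the textbook DTW distance, sketched below with the standard dynamic-programming recurrence; the maneuver traces are invented, and the real system works on multi-axis 25 Hz data rather than these toy sequences:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) Dynamic Time Warping distance between two
    1-D sequences. A recorded maneuver is matched against each stored
    template; the template with the smallest distance wins."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

hard_brake = [0, -2, -6, -8, -3, 0]
recorded = [0, 0, -2, -6, -8, -3, 0]   # same maneuver, slightly stretched
gentle = [0, -1, -2, -1, 0, 0]
print(dtw_distance(recorded, hard_brake) < dtw_distance(recorded, gentle))  # True
```

The warping is what lets a stretched-out braking event still match its template, which a plain Euclidean comparison would miss.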

References [42–44] use the accelerometer and gyroscope for driving event recognition. Reference [42] includes two single-axis accelerometers, two single-axis gyroscopes, and a GPS unit (for velocity) attached to a PC for processing. While the system included gyroscopes for inertial measurements, they were not used in the project. Hidden Markov Models (HMM) were trained and used only on the acceleration data for the recognition of simple driving patterns. Reference [43] used accelerometer data from a mobile phone to detect drunk driving patterns through windowing and variation thresholds. A Driver Monitor System was created in [44] to monitor the driving patterns of the elderly. This system involved three cameras, a two-axis accelerometer, and a GPS receiver attached to a PC. The authors collected large volumes of data for 647 drivers. The system had many components, one of them being the detection of erratic driving using accelerometers. Braking and acceleration patterns were detected, as well as high-speed turns, via thresholding. Additionally, the data could be used to determine the driving environment (freeway versus city) based on acceleration patterns.

References [45–47] use the accelerometer and gyroscope for gesture recognition. The uWave paper [47] and the gesture control work of Kela et al. [48] explore gesture recognition using DTW and HMM algorithms, respectively. They both used the same set of eight simple gestures, which included up, down, left, right, two opposing-direction circles, square, and slanted 90-degree angle movements, for their final accuracy reporting. Four of these eight gestures are one-dimensional in nature. The results showed that using DTW with one training sample was just as effective as HMMs.

3.1.3. Phone Hack

The accelerometer is so sensitive that it can even detect finger presses on the screen of the smartphone. Reference [49] shows that the locations of screen taps on modern smartphones and tablets can be identified from accelerometer and gyroscope readings. The findings have serious implications, as the authors demonstrate that an attacker can launch a background process on commodity smartphones and tablets and silently monitor the user's inputs, such as keyboard presses and icon taps [49]. While precise tap detection is nontrivial, requiring machine-learning algorithms to identify fingerprints of closely spaced keys, the sensitive sensors on modern devices aid the process [49]. Reference [49] presents TapPrints, a framework for inferring the location of taps on mobile device touchscreens using motion sensor data combined with machine-learning analysis. In tests on two different off-the-shelf smartphones and a tablet computer, identifying tap locations on the screen and inferring English letters could be done with up to 90% and 80% accuracy, respectively [49]. By optimizing the core tap detection capability with additional information, such as contextual priors, the threat may be magnified further [49].

On both Android and iOS, a location sensor like GPS is the only one that requires explicit user permission; the reason is probably that people are not willing to be tracked and consider their location information private. But the accelerometer and gyroscope do not require explicit user permission on either of the two operating systems. Because these sensors are mostly used for gaming, Android does not restrict access to the accelerometer and gyroscope; background services using the two sensors are allowed. Moreover, there is work aimed at the standardization of JavaScript access to a device's accelerometer and gyroscope sensors so that any web application can perform, for example, website layout adaptation [49]. Reference [49] shows that accelerometer and gyroscope sensor data can create a serious privacy breach. In particular, it demonstrates that it is possible to infer where people tap on the screen and what people type by applying machine-learning analysis to the stream of data from these two motion sensors. The work focuses on the accelerometer and gyroscope because they are able to capture tiny device vibrations and angle rotations, respectively, with good precision [49].

Figure 6 shows a sample of the raw accelerometer data after a user has tapped on the screen; the timing of each tap is marked by the digital pulses on the top line. As we can see, each tap generates perturbations in the accelerometer readings on all three axes, particularly visible along the z-axis, which is perpendicular to the phone's display. Gyroscope data exhibits similar behavior and is not shown [49]. Some related works [50–52] also use the accelerometer and gyroscope for phone hacks.

Figure 6: The square wave in the top line identifies the occurrence of a tap. Two particular taps have also been highlighted by marking their boundaries with dashed vertical lines. Notice that the accelerometer sensor readings (on the z-axis in particular) show very distinct patterns during taps. Similar patterns can also be observed in the gyroscope.
3.1.4. Phone Gesture

The previous two sections covered using the accelerometer, gyroscope, and magnetometer to detect long-distance movement; if the sensors are instead used to detect gestures, some interesting applications appear.

The ability to note down small pieces of information, quickly and ubiquitously, can be useful. Reference [53] proposes a system called PhonePoint Pen that uses the in-built accelerometer in mobile phones to recognize human writing. By holding the phone like a pen, a user should be able to write short messages or even draw simple diagrams in the air. The acceleration due to hand gestures can be converted into an image and sent to the user’s Internet email address for future reference [53].

The work is done without a gyroscope, because the smartphone used lacks one. There are two main issues to be solved: coping with background vibration (noise) and computing the displacement of the phone.

Accelerometers are sensitive to small vibrations. Figure 7(a) reports acceleration readings as the user draws a rectangle using 4 strokes (the constant offset of around −350 units on one axis is due to earth's gravity). A significant amount of jitter is caused by natural hand vibrations. Furthermore, the accelerometer itself has measurement errors. It is necessary to smooth this background vibration (noise) in order to extract jitter-free pen gestures. To cope with vibrational noise, the system smooths the accelerometer readings by applying a moving average over the last n readings. The results are presented in Figure 7(b) (the relevant movements happen in a single plane, so the third axis is removed from the subsequent figures for better visual representation) [53].

Figure 7: (a) Raw accelerometer data while drawing a rectangle (note gravity on the -axis). (b) Accelerometer noise smoothing.
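
The moving-average smoothing can be sketched as follows; the window length n here is illustrative, since the paper chooses it empirically:

```python
def moving_average(readings, n=4):
    """Smooth jitter by replacing each sample with the mean of it and
    the up to n-1 samples before it (a trailing moving average)."""
    out = []
    for i in range(len(readings)):
        win = readings[max(0, i - n + 1): i + 1]
        out.append(sum(win) / len(win))
    return out

# Alternating jitter collapses toward its local mean:
print(moving_average([0, 4, 0, 4, 0, 4], n=2))  # [0.0, 2.0, 2.0, 2.0, 2.0, 2.0]
```

The trade-off is latency: a larger n removes more jitter but also blurs the sharp onsets that mark stroke boundaries.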

The phone’s displacement determines the size of the character. The displacement is computed as a double integral of acceleration, that is, s = ∫∫ a(t) dt dt, where a(t) is the instantaneous acceleration. In other words, the algorithm first computes the velocity (the integral of acceleration) followed by the displacement (the integral of velocity). However, due to errors in the accelerometer, the cumulative acceleration and deceleration values may not sum to zero even after the phone has come to rest. This offset translates into some residual constant velocity. When this velocity is integrated, the displacement and movement direction become erroneous. In order to reduce velocity-drift errors, the velocity needs to be reset to zero at some identifiable points. The stroke mechanism described earlier is therefore used. Characters are drawn using a set of strokes separated by short pauses. Each pause is an opportunity to reset velocity to zero and thus correct displacement. Pauses are detected by using a moving window over consecutive accelerometer readings and checking whether the standard deviation within the window is smaller than some threshold. This threshold is chosen empirically based on the average vibration caused when the phone is held stationary. All acceleration values during a pause are suppressed to zero. Figure 8(a) shows the combined effect of noise smoothing and suppression. Further, velocity is set to zero at the beginning of each pause interval. Figure 8(b) shows the effect of resetting the velocity. Even if small velocity drifts are still present, they have a small impact on the displacement of the phone [53].

Figure 8: (a) Accelerometer readings after noise smoothing and suppression. (b) Resetting velocity to zero in order to avoid velocity drifts.
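
The pause-detection and zero-velocity reset can be sketched together. The window length and standard-deviation threshold below are invented, not the paper's empirically chosen values, and only one axis is integrated:

```python
import statistics

def integrate_with_resets(samples, dt, window=5, pause_std=0.05):
    """Single-axis integration with zero-velocity updates: when the standard
    deviation inside a sliding window of accelerometer readings drops below
    a threshold, the phone is assumed paused, velocity is reset to zero, and
    the residual acceleration in the pause is suppressed."""
    v, s = 0.0, 0.0
    for i, a in enumerate(samples):
        win = samples[max(0, i - window + 1): i + 1]
        if len(win) == window and statistics.pstdev(win) < pause_std:
            v = 0.0      # pause detected: kill accumulated velocity drift
            a = 0.0      # suppress residual acceleration during the pause
        v += a * dt
        s += v * dt
    return v, s

# A stroke followed by a noisy-but-flat pause; drift is cut at the pause:
trace = [1.0, 1.0, -1.0, -1.0, 0.02, 0.01, 0.02, 0.01, 0.02, 0.01]
v_end, s_end = integrate_with_resets(trace, 0.1)
print(v_end)  # 0.0: the pause zeroed the residual velocity
```

Without the reset, the small residual acceleration at the end would leave a nonzero velocity that keeps inflating the displacement.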
3.2. Microphone and Camera
3.2.1. Surround Sense

There are some research works [54–57] about using smartphones to sense the context of the surroundings. A growing number of mobile computing applications are centered around the user's location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks or McDonalds). While extensive research has been performed in physical localization, there have been few attempts to recognize logical locations. Reference [57] argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. Reference [57] postulates that the ambient sound, light, and color in a place convey a photoacoustic signature that can be sensed by the phone's camera and microphone. The in-built accelerometers in some phones may also be useful in inferring broad classes of user motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close [57]; see Figures 9 and 10.

Figure 9: Sound fingerprints from 3 adjacent stores.
Figure 10: Color/light fingerprint in the HSL format from the Bean Traders’ coffee shop. Each cluster is represented by a different symbol.

Reference [57] takes advantage of the microphone and camera to collect the surrounding fingerprint so that it can provide logical localization.

3.2.2. Localization

References [58–60] use audio for localization, and the accuracy has been improved to a high level. Reference [58] operates in a spontaneous, ad hoc, and device-to-device context without leveraging any preplanned infrastructure. It is a pure software-based solution and uses only the most basic set of commodity hardware (a speaker, a microphone, and some form of device-to-device communication) so that it is readily applicable to many low-cost sensor platforms and to most commercial off-the-shelf mobile devices like cell phones and PDAs [58]. High accuracy is achieved through a combination of three techniques: two-way sensing, self-recording, and sample counting. To estimate the range between two devices, each will emit a specially designed sound signal ("Beep") and collect a simultaneous recording from its microphone [58]. Each recording should contain two such beeps, one from its own speaker and the other from its peer [58]. By counting the number of samples between these two beeps and exchanging the time duration information with its peer, each device can derive the two-way time of flight of the beeps at the granularity of the sound sampling rate [58]. This technique cleverly avoids many sources of inaccuracy found in other typical time-of-arrival schemes, such as clock synchronization, non-real-time handling, and software delays [58].

Reference [58] sends a beep sound to calculate the distance between two objects; each microphone distinguishes its own beep from its peer's to get the time interval and, therefore, the distance; see Figure 11.

Figure 11: Illustration of event sequences in BeepBeep ranging procedure. The time points are marked for easy explanation and no timestamping is required in the proposed ranging mechanism.
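
The sample-counting arithmetic can be sketched as follows. Because each device hears the other's beep one flight time late, the difference between the two measured intervals carries the round trip, giving d = c (n_a − n_b) / (2 f_s). The numbers below (44.1 kHz rate, 10 m separation, 0.5 s reply delay) are invented for illustration, and the internal speaker/microphone delays that the real system handles are ignored:

```python
def beep_distance(n_a, n_b, rate_hz=44100, c=343.0):
    """BeepBeep-style two-way ranging from sample counts.

    Device A beeps first, then device B. Each records both beeps and counts
    the samples between them: n_a on A (its own beep to B's beep), n_b on B
    (A's beep to its own beep). A hears B's beep one flight time late and B
    hears A's beep one flight time late, so n_a - n_b equals two flight
    times in samples:
        distance = c * (n_a - n_b) / (2 * rate_hz)
    with c the speed of sound in m/s.
    """
    return c * (n_a - n_b) / (2.0 * rate_hz)

# 10 m apart: sound takes ~0.0292 s, about 1286 samples at 44.1 kHz.
# If B answers 0.5 s (22050 samples) after hearing A:
n_b = 22050 - 1286   # B: from hearing A's beep to emitting its own
n_a = 22050 + 1286   # A: from emitting its beep to hearing B's
print(round(beep_distance(n_a, n_b), 2))  # about 10.0
```

Note that the reply delay cancels out of the difference, which is why no clock synchronization between the devices is needed.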

4. Discussions and Conclusion

Sensors are the key factor in developing more and more interesting applications on smartphones, and sensors are what make the smartphone different from traditional computing devices like the desktop computer. Most applications use the accelerometer and gyroscope because they are, on the whole, the most accurate sensors. However, vision contains a huge amount of information. We believe that the camera and pattern recognition will be used more and more in the future.

Acknowledgments

This work is supported by the National Science Foundation under Grant nos. 61103226, 60903158, 61170256, 61173172, and 61103227 and by the Fundamental Research Funds for the Central Universities under Grant nos. ZYGX2010J074 and ZYGX2011J102.


Elon Musk: How I Became The Real ‘Iron Man’ by transforming 3 industries

This is how Elon Musk transformed three industries and became the real Tony Stark.

Elon Musk, the entrepreneur who helped create PayPal, built America’s first viable fully electric car company, started the nation’s biggest solar energy supplier, and may make commercial space travel a reality in our lifetime. (Documentary, 2014)

Conversation with Elon Musk in Detroit on electric cars and rock bottom oil prices (2015)

Conversation with Elon Musk on SpaceX, Tesla and his personal life

This is the new Tesla Model X (2015)

 

Elon Musk playing video games in his 7-bedroom Bel Air mansion (2014)

Elon Musk launches Tesla Model X

Elon Musk debuts the long-awaited Tesla Model X at a special launch event held at Tesla’s Fremont factory in front of lucky owners and members of the media. Elon unveils more details about the falcon-wing doors’ opening mechanism, which includes an innovative sonar sensor that works through metal.

Elon Musk: “We try to be a leader in apocalyptic defense scenarios”

Lessons from Taylor Swift and her ‘raving fan culture’

http://humanelevation.tonyrobbins.com/blog/career/taylor-swift-pop-star-power-broker

Forget everything you think you know about Taylor Swift. She’s no longer a performer, singer, pop star or celebrity — strictly speaking. She is CEO and executive manager of the brand Taylor Swift. And she’s building it in a brilliant way.

No CEO under 30 since Mark Zuckerberg has launched and maintained a consistent aura of excitement, engagement and growth more effectively than Swift. What’s her secret? She instinctively keeps her fan base suspended in a constant state of hysteria through many of Tony Robbins’ 7 Strategies for Creating Raving Fans.

Give more than you promote

As profiled by the Los Angeles Times, Taylor Swift was born into a generation that came of age with social media and seamlessly integrates online communication into the full narrative of its relationships. But Swift is more than just adept at social trends and platforms; she makes direct and authentic connections with fans by casually replying to their posts, liking and favoriting their pictures, and reposting their own selfies holding her album cover. As Matt Britton, CEO of youth marketing agency MRY, told the New York Times, “When you do that, you generate a kind of advocacy and excitement that no level of advertising could.”

She doesn’t use social for marketing, she uses it for mentioning. And it means everything to her fans.

Once these simple but lasting connections are made, fans feel more invested in her everyday life and activities, the way they would with any friend. Swift’s tweets and posts act as a delightful flip side to the infamously brutal single-line, late-night email replies Steve Jobs would fire back at Apple fans.

Create unexpected surprises

Swift’s mastery of social media as a tool to add value for her fans goes much further than conversational replies and likes. In the weeks leading up to the launch of her now multi-platinum “1989” album, she personally “Tay-lurked” popular channels like Instagram, Tumblr and Twitter to read raw fan feedback, rewarding supporters with personal invitations to a “special Taylor Swift opportunity”: one of her several “Secret Sessions” pre-release listening events, held at her own homes in Beverly Hills, Nashville, New York and Rhode Island.

Any company can throw a product pre-release bash, but Swift personally appeared at each of them, turning what could have been a publicity calculation into a casual hangout — sitting on the floor and informally talking through her inspirations and influences, as naturally as any 25-year-old would. The magnified social media outpouring from these events helped rocket her album debut sales to over one million on release.

“She has been able to take one person and spread herself out into millions of itty-bitty pieces of Taylor Swift and touch as many people as possible,” Mr. Britton said. “When you do that, you generate a kind of advocacy and excitement that no level of advertising could.”

Run your business with open transparency

Swift’s intuitive promotional style makes headlines, but it’s her business moves that send waves through the industry and spin headlines across international media. Having built a powerful public platform, she’s become renowned for her ability to spotlight issues and create change for professional artists everywhere, regardless of fame or audience size. Many fans jokingly ask her over Twitter to help save a canceled television show or to add features to Instagram. But this only underlines her ability to create real movement through messaging.

Most notably, she removed her entire catalog from Spotify’s streaming service after issuing a public statement clearly written by her personally rather than drafted by a PR firm.

“Music is art, and art is important and rare. Important, rare things are valuable. Valuable things should be paid for. It’s my opinion that music should not be free, and my prediction is that individual artists and their labels will someday decide what an album’s price point is. I hope they don’t underestimate themselves or undervalue their art.”

In all issues of communication, from casual social media to press releases, Swift’s brand messages are entirely consistent in their openness and earnestness. This naturally became international news, fueling debates around royalty payments for artists everywhere. As reported by Time, artists on Spotify earn on average less than one cent per play, between $0.006 and $0.0084.
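Using the per-play range quoted above, a quick back-of-the-envelope sketch shows why artists object to these rates. The play count here is a made-up illustration, not a figure from the article:

```python
# Spotify's reported per-play royalty range, as quoted above.
LOW_RATE = 0.006    # dollars per play
HIGH_RATE = 0.0084  # dollars per play

def royalty_range(plays):
    """Return the (low, high) payout in dollars for a given play count."""
    return plays * LOW_RATE, plays * HIGH_RATE

# Hypothetical example: one million streams.
low, high = royalty_range(1_000_000)
print(f"1M plays earns between ${low:,.0f} and ${high:,.0f}")
```

Even a million streams pays out only a few thousand dollars at these rates, which is the economics behind Swift's statement.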

She was only getting started. In what’s been heralded as the first significant sign of the post-Jobs Apple, Swift then released this statement challenging Apple Music’s launch policy to withhold payment to artists in exchange for the value of promotion:

“I’m sure you are aware that Apple Music will be offering a free 3-month trial to anyone who signs up for the service. I’m not sure you know that Apple Music will not be paying writers, producers, or artists for those three months. I find it to be shocking, disappointing, and completely unlike this historically progressive and generous company.”

What’s shocking is how quickly Apple, under Tim Cook’s leadership, conceded. Within hours, one of the most profitable, successful and globally admired companies in history changed direction and agreed to pay streaming royalties. Her short showdown with Apple Music’s policies is an example of the leadership any CEO, owner or entrepreneur can incorporate: by thinking beyond self-interest to how you can improve the experience for others, you give your customers a reason to believe.

And when you make them believe in you they’ll always buy from you.

Always reward your best clients and give back

Her most powerful, surprising and genius move was also her most simple and heartfelt.

On December 31st, 2014, Swift released a lo-fi, homemade video of herself in her own home, wrapping gifts she’d personally chosen for a selection of her fans. Intercut with the footage were real, raw fan reactions as they unwrapped the presents and read the handwritten notes, most hardly able to speak through tears of joy; receiving personal presents from Swift naturally caused the merriest of meltdowns. The video reaches an unexpected pinnacle as Swift drives from New York to Connecticut to personally deliver gifts to one special fan and her young son. Rather than simply dropping the gifts off for a photo op, she spends time playing and visiting, all the action underscored by her newly released single.

The video, likely produced for less than the cost of the gifts, rebranded the holiday as “Swiftmas” and naturally went viral. It immediately climbed the media chain into a feel-good headline story, creating an extravaganza of free PR and marketing time as clips were played endlessly on mainstream news channels and broadcasts, with not a penny spent on ad placement, PR or traditional marketing pushes.

For Swift, this move was effortless, entirely congruent with her personality: heartfelt, earnest and generous. From a more calculated personality, a stunt like this could easily backfire and create cynicism or hostility. Instead, her fans celebrated, non-fans took notice and marketers reveled in her show of gentle force, all connecting with her ability to contribute to her fans’ experience and create joy.

The obvious question becomes — what will she do next?

A celebratory culture empowers employees and inspires a long-term audience that vocalizes your message, recruits new customers and helps you generate greater sales. If you can create a culture of raving fans, congruently with your image, you will provide more value than anyone else in your field.

Swift falls in love with her fans, not her fame

The most expensive and consuming challenge for any business is acquiring new customers — so, conversely, the simplest solution is to continually serve that same customer while compelling them to recruit, rave and spread your story.

Any organization, leader or brand that can maintain a high level of customer loyalty will always overwhelm the competition through its consistent ability to anticipate, surprise and fulfill its audience’s needs at the most fundamental and personal levels. The most successful organizations remain so because consumers and clients adopt the brand identity into their own, through purchase, use, external signaling (logos, merchandise) and social events.

Discover new and entirely authentic ways to delight, empower and surprise your customers and watch as they become an audience of raving fans — like the thousands of stadiums of smiling faces cheering for Taylor Swift.

The wireless carriers will no longer be gatekeepers to their Customers

http://www.wired.com/2015/10/google-apple-bid-best-cell-service/

In tackling public safety networks, Rivada has created technology that lets companies freely bid on available wireless infrastructure, and it hopes to bring this model to the wireless networks the rest of us use.

In other words, it wants to create a truly free market for wireless service, one in which companies like Apple and Google bid for the use of services from Verizon, AT&T, T-Mobile, Sprint, and beyond.

Such a market would work much like the electricity market works today. “Basically, supply and demand meet and a price is set,” says Ganley. The end result: consumers can move freely among networks. “Apple and Google and others are already experimenting with this kind of thing, and you will see this continue,” Ganley says. “The wireless carriers will no longer be gatekeepers to others. They will be something else. And those others will be able to come in and compete.”

That would be an enormous change. Yes, phones can “roam” between networks. If you use AT&T at home in San Francisco, you can hop on another network when you visit London. But AT&T controls where you can roam and how much you’re charged. Plus, you can’t roam on networks that don’t use the same type of wireless network AT&T uses. Simply put, Rivada aims to provide unfettered roaming. All wireless services would become one thing, much like the power grid is one thing.

Better Coverage, Lower Prices

Such an “open access” network would significantly reduce the power and the profile of the big-name wireless carriers, but it would provide better coverage and lower prices for consumers, says Peter Cramton, a University of Maryland economics professor who recently co-authored a paper exploring this very idea. (Although Rivada sponsored the paper, Cramton says it only reflects his opinions and those of his co-author, Linda Doyle, of the University of Dublin).

“This idea has legs, and in fact, it’s inevitable,” he says. “This can take advantage of the existing infrastructure of the current incumbent carriers and allow things to happen in a much more economic and faster way. It can transform an oligopoly market into an Internet ecosystem, where anyone with a good idea can enter.”

Under this model, wireless carriers auction off access to their infrastructure, and companies bid according to what they think they’ll need in the months or years to come. If those needs change, companies could even bid in near real time. Apple could bid. Google could bid. And so could the wireless carriers themselves. Verizon, for instance, could not only offer its own infrastructure in the auction but also bid on the infrastructure of other carriers. “This is very much a two-sided market,” Cramton explains.
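The two-sided market Cramton describes can be sketched as a simple call auction: carriers offer units of capacity at ask prices, bidders submit bids, and a single uniform clearing price is set where supply meets demand. The names, prices and matching rule below are a hypothetical illustration, not Rivada's actual mechanism:

```python
def clear_market(asks, bids):
    """Match capacity offers (asks) against bids in a uniform-price call auction.

    asks: list of (seller, price) -- one unit of capacity offered at `price`.
    bids: list of (buyer, price)  -- one unit of capacity wanted at up to `price`.
    Returns (trades, clearing_price); price is None if no trade clears.
    """
    asks = sorted(asks, key=lambda a: a[1])                # cheapest supply first
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # highest demand first

    trades = []
    for (seller, ask), (buyer, bid) in zip(asks, bids):
        if bid < ask:  # demand no longer meets supply: stop matching
            break
        trades.append((seller, buyer))
    if not trades:
        return [], None
    k = len(trades)
    # Uniform price: midpoint of the last matched ask and bid.
    price = (asks[k - 1][1] + bids[k - 1][1]) / 2
    return trades, price

# Hypothetical example: two carriers offering capacity, two companies bidding.
asks = [("Verizon", 4.0), ("T-Mobile", 2.0)]
bids = [("Google", 5.0), ("Apple", 3.0)]
trades, price = clear_market(asks, bids)
```

In this toy run, T-Mobile's cheap capacity matches Google's high bid while Verizon's ask exceeds Apple's bid, so one unit trades at the midpoint price. Real spectrum markets would clear many units, time slots and locations at once.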

Feeling the Effects

Of course, the biggest carriers will likely resist such a model. But in the long run, they’d acquiesce. “AT&T and Verizon, the dominant incumbents in the US, will not have incentives to encourage an open access market initially,” Cramton says. “But the market effects are going to come from T-Mobile and Sprint and smaller carriers.”

We’re moving toward one giant network and a truly free market for wireless.

Think of it this way: Just a few years ago, carriers never would have backed Project Fi or the iPad SIM. But Google and Apple make the phones people want to use, and in an effort to compete with AT&T and Verizon (which together control about 70 percent of the market and 90 percent of the revenue), T-Mobile and Sprint have agreed to work with Google on Project Fi. Meanwhile, AT&T as well as T-Mobile and Sprint are working with Apple on the iPad SIM. As time goes on and more people use these technologies, AT&T and Verizon will have no choice but to play along.

And so it will be with the “open access” network Cramton describes. “The weakness of the small carriers is coverage. They can’t economically build the kind of coverage that AT&T and Verizon can. The open access model allows them to build coverage and compete much more effectively.”

Congestion Pricing

Mexico, where a single carrier dominates the market, is already eyeing this setup. And here in the US, Rivada is pushing to build a public access network with its technology. Seven years ago, the government set aside a swath of wireless spectrum for such a network, but the carriers balked at the opportunity. Now, the Department of Commerce is working to finally realize this network, called FirstNet, and Rivada believes it can help. If the network is built with Rivada’s model, carriers could provide spectrum to first responders as needed, and then offer access for other uses when the system isn’t being used in emergencies.

“With the open access model, the price is determined by congestion,” Cramton says, meaning the price goes up for infrastructure that can serve areas where users are particularly hungry for wireless access. “That creates revenue, especially in the major, congested markets, like New York, LA, Chicago, Washington.”
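Congestion pricing of the kind Cramton describes can be sketched as a price that climbs with utilization, so infrastructure in saturated markets like New York earns more than idle rural capacity. The formula, base price and exponent here are made-up illustration values, not anything from the article:

```python
def congestion_price(base_price, utilization, steepness=3):
    """Price one unit of capacity as a function of network utilization (0..1).

    At low utilization the price stays near base_price; as the network
    approaches saturation the price climbs sharply, steering demand away
    from already-congested infrastructure. Illustrative formula only.
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return base_price / (1 - utilization) ** steepness

# A quiet rural cell vs. a congested downtown cell (hypothetical numbers).
quiet = congestion_price(1.0, 0.10)  # close to the base price
busy = congestion_price(1.0, 0.90)   # orders of magnitude higher
```

The design point is simply that price is a function of load, which is what generates the extra revenue in congested markets that Cramton mentions.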

The hope, however, is that the same basic model will be applied to other domestic wireless networks. “This can, and should, apply not just to public safety networks but to a wider tract of the wireless spectrum,” Ganley says. “The ways of allocating spectrum that we have been using are the equivalent of granting royal charters to the East India Companies of old. It’s an oligopoly over pretty vast tracts of the radio spectrum.”

Political Movement

That will take some doing, not only politically but technically. But the market already is shifting in this direction. In the US, every major carrier is moving toward the LTE wireless network standard, which will make it much easier for phones to switch among carriers. And Apple and Google are building phones with wireless antennae that work with practically any wireless spectrum.

Meanwhile, Rivada says it already offers the technology needed to provide an open access market—and that it will work with existing wireless infrastructure. In short, we’re laying the groundwork for one giant wireless network. “This,” Cramton says, “can provide universal service.”

ECJ declares Safe Harbor invalid

Source: http://www.heise.de/newsticker/meldung/Datenschutz-bei-Facebook-Co-EuGH-erklaert-Safe-Harbor-fuer-ungueltig-2838025.html

US companies are obliged to disregard the protective rules applicable in Europe whenever US authorities demand access to personal data on grounds of national security or public interest. At the same time, EU citizens have no legal remedy by which to demand the deletion of their data. This violates “the essence of the fundamental right to effective judicial protection,” which is inherent in the nature of a state governed by the rule of law. Safe Harbor is therefore invalid, and Ireland’s data protection authority must now examine whether Facebook’s transfer of European users’ data to the USA should be suspended.

Predicting the Future

Source: http://www.wired.com/2015/10/googles-lame-demo-shows-us-far-robo-car-come/

Killing the Driver

Google has been developing this technology for six years, and is taking a distinctly different approach than everyone else. Conventional automakers are rolling out features piecemeal, over the course of many years, starting with active safety features like automatic braking and lane departure warnings.

Google doesn’t give a hoot about anything less than a completely autonomous vehicle, one that reduces “driving” to little more than getting in, typing in a destination, and enjoying the ride. It wants a consumer-ready product ready in four years.

The Silicon Valley juggernaut is making rapid progress. Its fleet of modified Lexus SUVs and prototypes has racked up 1.2 million autonomous miles on public roads, and covers 10,000 more each week. Most of that has been done in Mountain View, and Google expanded its testing to Austin last summer.

It’s unclear how this technology will reach consumers, but Google is more likely to sell its software than manufacture its own cars. At the very least, it won’t sell this dinky prototype to the public.

Predicting the Future

As the Google car moves, its laser, camera, and radar systems constantly scan the environment around it, 360 degrees and up to 200 yards away.

“We look at the world around us, and we detect objects in the scene, we categorize them as different types,” says Dmitri Dolgov, the project’s chief engineer. The car knows the difference between people, cyclists, cars, trucks, ambulances, cones, and more. Based on those categories and its surroundings, it anticipates what they’re likely to do.

Making those predictions is likely the most crucial work the team is doing, and it’s based on the huge amount of time the cars have spent dealing with the real world. Anything one car sees is shared with every other car, and nothing is forgotten. From that data, the team builds probabilistic models for the cars to follow.

“All the miles we’ve driven and all the data that we’ve collected allowed us to build very accurate models of how different types of objects behave,” Dolgov says. “We know what to expect from pedestrians, from cyclists, from cars.”

Those are the key learnings the test drive on the roof parking lot was meant to show off. If I may anthropomorphize: The car spotted a person on foot walking near its route and figured, “You’re probably going to jaywalk.” It saw a car coming up quickly from the left and thought, “There’s a good chance you’re going to keep going and cut me off.” When the cyclist in front put his left arm out, the car understood that as a turn signal.

This is how good human drivers think. And the cars have the added advantage of better vision, quicker processing times, and the inability to get distracted, or tired, or drunk, or angry.

Detecting Anomalies

The great challenge of making a car without a steering wheel a human can grab is that the car must be able to handle every situation it encounters. Google acknowledges there’s no way to anticipate and model for every situation. So the team created what it calls “anomaly detection.”

If the cars see behavior or an object they can’t categorize, “they understand their own limitations,” Dolgov says. “They understand that there’s something really crazy going on and they might not be able to make really good, confident predictions about the future. So they take a very conservative approach.”

One of Google’s cars once encountered a woman in a wheelchair, armed with a broom, chasing a turkey. Seriously. Unsurprisingly, this was a first for the car. So the car did what a good human driver would have done. It slowed down, Dolgov says, and let the situation play out. Then it went along its way. Unlike a human, though, it did not make a video and post it on Instagram.
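The confidence-gated fallback Dolgov describes can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about Google's actual software: the object categories, confidence threshold, and speed values are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    category: str       # e.g. "pedestrian", "cyclist", "car", or "unknown"
    confidence: float   # classifier confidence in [0, 1]

def plan_speed(detections, cruise_speed=15.0, cautious_speed=5.0, threshold=0.8):
    """Pick a target speed (m/s): cruise normally, but slow down whenever any
    detection cannot be categorized with high confidence ("anomaly detection")."""
    for d in detections:
        if d.category == "unknown" or d.confidence < threshold:
            return cautious_speed   # conservative fallback: let the scene play out
    return cruise_speed
```

A turkey-chasing wheelchair user would land in the "unknown" branch: the planner does not need to model the situation, only to recognize that it can't, and behave conservatively until the scene resolves.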

Austria: Styria wants to become a test track for self-driving cars

Source: http://futurezone.at/science/steiermark-will-teststrecke-fuer-selbstfahrende-autos-werden/155.431.535

Autonomous vehicles are to be tested in Styria – Photo: Audi

Styria wants to become a model region for autonomous driving, one where manufacturers can test their self-driving vehicles. The region's 220 automotive supplier companies are also expected to benefit.

At the Alpbach Technology Talks in August, Infrastructure Minister Alois Stöger announced “test tracks” for autonomously driving cars; on Monday, Styria officially declared its intention to become this test region. “We have the perfect conditions in Styria to become the Austrian model region,” says Franz Lückler, CEO of the ACstyria auto cluster. Together with politicians and industry, “Project Z” was officially launched, through which Styria aims to become the test track. “There are 220 companies gathered in the AutoCluster, from AT&S, Magna, AVL-List, NXP, and ams to Infineon. All of them are already making a valuable contribution to the future of mobility.” Autonomous driving, he says, could become a revenue booster for Styrian suppliers.

Gerd Brusius, head of sales and marketing at Magna (standing), ACstyria CEO Franz Lückler, and State Economics Minister Christian Buchmann (crouching, from left) – Photo: ACstyria/Kanizaj

The ACstyria auto cluster has received backing from State Economics Minister Christian Buchmann, who has already approached the Ministry of Infrastructure: “Styria recognized early on that mobility is an exciting topic,” says Buchmann. “50,000 people here are employed in the mobility sector alone, with added value of 14.5 billion euros.” In addition, there is close cooperation with university and non-university institutes, above all TU Graz.

Public acceptance demanded

Industry, of course, stands behind Project Z. “We have been active in this field for years,” says AT&S CEO Andreas Gerstenmayer, who also chairs Styria's research council. “We work with the most important suppliers and are involved in developing assistance systems, vehicle-recognition systems, and car-to-car communication.” Besides test tracks, however, Gerstenmayer demands one thing above all: “public acceptance. The technical solutions already exist, but the fears among the population must be dispelled.” Autonomous driving brings more safety, he argues, and that must be made clear to people, since humans are responsible for 90 percent of all traffic accidents. People must be convinced of the technology's positive sides, though sensitive issues such as data use and data security must not be forgotten.

“Europe must also be at the forefront on this topic,” says Magna vice president Gerd Brusius, who hopes for a quick start to Project Z. “We need the opportunity to test autonomous driving within a legal framework here in order to remain competitive.” There are still many issues to be clarified in this context, from legislation to insurance questions. Brusius: “The fact is that this technology will drastically change the future of the automobile.”

Intelligent machines manipulate on behalf of politics and the advertising industry

Source: http://futurezone.at/digital-life/mit-intelligenten-maschinen-gegen-auslaenderfeindlichkeit/153.456.202

Oliviero Stock researches systems that can influence people. In the future, such systems could be used to change attitudes and opinions among the population.

Oliviero Stock develops software that influences people. It works not through arguments but by appealing to emotions. Currently this is done, for example, by modifying existing texts so that they convey a particular message, such as by inserting or swapping adjectives. In the future, intelligent algorithms are expected to pursue agendas on their own, using humor and creativity. The advertising industry and politics could try to use such software to manipulate people in a targeted way. Protection against automated influence could likewise come from intelligent software. Oliviero Stock, who currently does research at the Center for Information and Communication Technology in Trento, is in the Austrian capital for the ninth ACM Conference on Recommender Systems, hosted by TU Wien. futurezone interviewed him.
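The adjective-swapping technique Stock describes can be illustrated with a toy example. Everything below is invented for illustration: the swap table, the sample sentence, and the simple whole-word substitution bear no relation to the actual research systems, which are far more sophisticated.

```python
import re

# Hypothetical table mapping negatively tinged adjectives to flattering ones.
POSITIVE_SWAPS = {
    "controversial": "bold",
    "costly": "ambitious",
    "slow": "deliberate",
}

def slant_text(text, swaps):
    """Replace whole-word adjectives according to a swap table,
    shifting the tone of a text without touching its facts."""
    def repl(match):
        word = match.group(0)
        new = swaps.get(word.lower(), word)
        # Preserve the capitalization of the original word.
        return new.capitalize() if word[0].isupper() else new

    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, swaps)) + r")\b", re.IGNORECASE
    )
    return pattern.sub(repl, text)

print(slant_text("A controversial and costly plan.", POSITIVE_SWAPS))
# -> A bold and ambitious plan.
```

Even this crude substitution changes how a reader perceives the sentence, which is exactly why Stock's group also builds detectors for such wording-level influence.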

Oliviero Stock – Photo: YouTube screenshot

Has a machine ever made you laugh?
I have only really laughed out loud at a machine when it behaved unexpectedly because of a bug, but that doesn't count. I have smiled, though, at software that was deliberately humorous. Years ago there was a program that made fun of acronyms, called “Hahacronym”. For example, it turned “MIT, the Massachusetts Institute of Technology” into “Mythical Institute of Theology”. Examples like that made me smile.

Can machines develop qualities such as humor or creativity, which are usually associated with human intelligence?
The world's best chess player is a machine. Chess may demand only a narrowly limited set of skills, but I believe it requires a certain intelligence, not just raw computing power. I used to organize chess matches between human professionals and the computer chess world champion. The commentators and the human players always agreed that what impressed them most was the creativity of the machines' play.

But you work in a completely different field, namely language, which sits closer to the artistic domain. Can machines succeed here too?
I believe that in the long run machines will be able to behave exactly like humans. Whether they will ever attain something like consciousness, I cannot say.

Whether a machine emulates consciousness or actually develops one is also hard to tell.
Exactly. We have recently experienced an upheaval in computational linguistics. Until 20 years ago, the goal was for machines to understand language, with researchers gaining insights along the way into the language-related processes in the human brain. Thanks to the internet, practically everything ever written is now available for machine analysis. Current systems learn through data analysis. That was a paradigm shift. In certain narrowly defined linguistic applications, machines are now almost as skilled as humans. In other areas, such as rhetoric, machines still lag far behind.

So the statistical learning approach has limits?
Yes, there are things that are hard to achieve with learning algorithms. This kind of learning has nothing to do with human language acquisition. In the future there will be a mixture of the two approaches, a combination of rule-based and corpus-based methods (i.e., learning from existing texts). Knowledge about human language processing will become important again.

If not consciousness, will machines develop something like personality?
Yes. But we are not there yet.

Will we soon have to fear an army of advertising bots that hunt us on the internet and try to sell us things by appealing to our emotions?
That is a concern that exists. There is room for abuse, as everywhere. We concentrate on advancing the technology. In any case, such systems could also be used for social causes, for example to change the population's attitude toward foreigners or to promote a healthy lifestyle.

That is ethically difficult.
Yes, which is why my colleagues and I have been dealing with ethical questions since the beginning of our work. It is not only about whether things are good or bad for society, but also about the systems themselves adapting their decisions to what is morally acceptable. In the future, a smart system will understand what it may and may not do. In some situations it is acceptable, for instance, for a persuasive system to push people toward an action they resist, say when a house is on fire and the occupants will not leave in time. Then even a machine lie would be justifiable.

Still, the machine manipulation of humans is problematic.
Yes, even with a good purpose the question remains of who makes the decisions. In the short term, the most important issue will be who controls the systems; those responsible must be decent. But the systems' capabilities will improve, and in the long term the programs will be social actors that pursue their tasks autonomously within society. To do that, however, they must be able to tell good from evil. That will happen whether we want it or not.

Couldn't that backfire?
There is a very small risk that machines will take control to the detriment of humans, but I consider that unlikely, and we are still very, very far from such technology. I am more afraid of humans. There is almost nothing humans would not do to each other. Machines will help us keep such excesses in check.

Does it make a difference for advertising whether it was created by a human or a machine?
According to our surveys, people are very sensitive when it comes to machines trying to influence them. If you ask them in general terms, without naming a specific kind of influence, they almost always say it is unacceptable. But if you name a specific purpose, they often find such influence quite acceptable (for example, when it helps them reach their own goals). It is remarkable that people rarely have a problem with traditional, human-made advertising.

Can people be protected against unwanted attempts at influence?
We have developed a system that pokes fun at advertisements, the Subvertizer. We are also working on software intended to protect people from influence by detecting it. This is not about banners tailored to users. We work on the wording of texts. That way we can detect attempts at influence even in newspaper articles. Journalists, too, try to exert influence, for instance through their choice of adjectives. Our system is meant to detect this, warn the user, and even offer a cleaned-up version of the text. That is one way to combat unwanted attempts at influence.

The ability to generate advertising slogans automatically would be perfect for the advertising industry.
The advertising industry will be among the first sectors to adopt such systems.

Is the technology already in use today?
No. Today everything still runs on static banners.

When will that change?
I believe that technologies like the ones we have developed are not far from market launch. Above all, they will allow messages to be tailored precisely to individuals and situations. That is attractive to the advertising industry.

In the longer term, would bots that try to persuade people in conversation also be conceivable?
We are not currently developing bots. We work on persuasion strategies that function in monologue, in the form of text. There is work on language processing in dialogue systems, but that is a completely different field with its own difficulties. Today's chatbots are rule-based; a lot of work is still needed there. The current systems are trivial. In the future, though, we will see more interesting systems based on the combination of rules and learning algorithms mentioned earlier.

How hard is it to teach software ethical principles?
With self-driving cars, ethical systems have recently become a topic of discussion. With persuasion software it is more difficult, because two components must be considered: the action or attitude that is the target of the influence, and the nature of the influence itself.

Can a machine really tell good from evil?
In our work we do not investigate what is good or evil, but what people accept, based on their current moral judgments. There are individual differences.

Where do we stand on this topic?
Today's research is still in its infancy. Persuasion systems must become subtler; there is still some work to do, and personality must also be taken into account.

You have also built systems that generate creative headlines in the style of the British tabloids. Do I need to look for a new job?
In the near future, such systems will mainly be aids for creative professionals. We can already produce quite good, creative results, but we are not yet good at filtering out the bad ones. Machines will help creatives, making them faster and more effective. So you don't need to look for another job yet; instead, the machines may make you a much more effective journalist. You can even keep your machine partner secret, if you like.

Are jobs in other industries at risk from intelligent systems?
I am familiar with the debate about possible job losses due to intelligent systems. I agree that there will be jobs that cease to exist, just as in the Industrial Revolution. Except that this time certain intellectual jobs will also be affected. But as back then, boycott will not be a good answer. Society should be wise enough to find ways to use machines to create a balanced society. The structure of work may have to change.

So, loosely following Marx, today's workers will have time for poetry and wine?
I have no specific solution. But the problem must be taken seriously and studied. Modern theories and experiments will hopefully lead to a good outcome. In any case, it would be wrong to limit the further development of intelligent systems. This development cannot be stopped. Humanity must work out new conceptions of society and experiment with them.


Mission Electro: Porsche takes on Tesla

Source: http://futurezone.at/produkte/mission-e-porsche-greift-tesla-an/152.791.507

At the International Motor Show (IAA), Porsche presented its concept vehicle “Mission E”. The electric Porsche is expected to achieve a range of 500 kilometers.

Electric driving has now reached the sports-car brand Porsche as well. Matthias Müller, head of the VW subsidiary, had still been reticent about an electric Porsche at the beginning of the year; on Monday, the company showed its concept vehicle “Mission E” at the International Motor Show (IAA) in Frankfurt.


500 kilometers of range

Like its US rival, the Tesla S, the electric sports car is expected to achieve a range of 500 kilometers. According to a press release from the VW subsidiary, the “Mission E” will have over 600 hp (440 kW) and accelerate from zero to 100 km/h in under 3.5 seconds.

Porsche also advertises that the sports car's battery can be charged to 80 percent within 15 minutes. “Porsche Turbo Charging” via an 800-volt connection is supposed to make this possible. The e-car's instruments are to be operated by gaze and gesture control, “in some cases even via holograms,” according to the Porsche press release.
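A back-of-the-envelope check shows why the 800-volt figure matters. The press release does not state the pack capacity, so the 90 kWh used below is purely an assumed round number; the sketch just computes the average power and current such a 15-minute, 80 percent charge would require.

```python
def charging_power_kw(capacity_kwh, fraction, minutes):
    """Average charging power needed to add `fraction` of the pack in `minutes`."""
    return capacity_kwh * fraction / (minutes / 60)

def current_amps(power_kw, volts):
    """Current drawn from the charger at a given pack voltage."""
    return power_kw * 1000 / volts

p = charging_power_kw(90, 0.8, 15)   # assumed 90 kWh pack -> 288.0 kW average
print(current_amps(p, 800))          # 360.0 A at 800 V
print(current_amps(p, 400))          # 720.0 A at a conventional 400 V
```

Doubling the pack voltage halves the current needed for the same power, which keeps cables and connectors manageable and is what makes such short charging times practical.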