Using Gesture Recognition as Differentiation Feature on Android

13 Jul 2015 · CPOL · 6 min read
In this article, we introduce how to extract useful information from sensor data, and then use an Intel Context Sensing SDK example to demonstrate flick, shake, and glyph detection.

This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for Android, Internet of Things, Intel® RealSense™ Technology and Windows to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

Overview

Sensors found in mobile devices typically include the accelerometer, gyroscope, magnetometer, pressure sensor, and ambient light sensor. Users generate motion events when they move, shake, or tilt the device, and we can use a sensor's raw data to recognize those motions. For example, you can mute your phone by flipping it face down when a call comes in, or launch the camera application when you lift the device. Using sensors to build such convenience features helps deliver a better user experience.
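To give a concrete picture of how an application receives this raw data, the sketch below registers an accelerometer listener through Android's SensorManager and reacts when the readings suggest the device has been flipped face down. The threshold and the mute action are illustrative assumptions, not values taken from this article.

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class FlipToMuteActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // values[2] is acceleration along the z axis; it is roughly -9.8 m/s^2
        // when the device lies face down (see Figure 3). The -8.0 threshold is
        // an illustrative assumption.
        if (event.values[2] < -8.0f) {
            // Device is face down: a real app could mute the ringer here.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}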

Intel® Context Sensing SDK for Android* v1.6.7 introduces several new context types, such as device position, ear touch, flick gesture, and glyph gesture. In this article, we introduce how to extract useful information from sensor data, and then use an Intel Context Sensing SDK example to demonstrate flick, shake, and glyph detection.

Introduction

A common question is how sensors are connected to the application processor (AP) at the hardware level. Figure 1 shows three ways to connect sensors to the AP: direct attach, discrete sensor hub, and ISH (integrated sensor hub).

Image 1

Figure 1. Comparing different sensor solutions

When sensors are wired straight to the AP, the design is called direct attach. The drawback is that direct attach consumes AP power to detect data changes. The next evolution is a discrete sensor hub, which overcomes the power-consumption problem and lets sensors operate in an always-on mode: even when the AP enters the S3[1] state, the sensor hub can use an interrupt signal to wake it up. The next evolution is the integrated sensor hub, where the sensor hub is built into the AP, which holds down the cost of the whole device BOM (bill of materials).

A sensor hub is an MCU (microcontroller unit); you can implement your algorithm in C/C++, compile it, and download the binary to the MCU. In 2015, Intel will release the Cherry Trail-T platform for tablets and the Skylake platform for 2-in-1 devices, both of which employ sensor hubs. See [2] for more information about the use of integrated sensor hubs.

Figure 2 illustrates the sensor coordinate system: the accelerometer measures acceleration along the x, y, and z axes, and the gyroscope measures rotation around the x, y, and z axes.

Image 2

Figure 2. Accelerometer and gyroscope sensor coordinate system

Image 3

Figure 3. Acceleration values on each axis for different positions[3]
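As a small illustration of Figure 3, the helper below maps a raw accelerometer sample to one of the basic device positions by checking which axis carries most of the ~9.81 m/s² gravity component. The 7.0 m/s² threshold and the labels are arbitrary assumptions for the sketch.

// Maps raw accelerometer readings to the device positions illustrated in
// Figure 3. Gravity shows up on whichever axis currently points up or down.
public final class DevicePosition {
    private static final float THRESHOLD = 7.0f; // m/s^2, illustrative value

    public static String classify(float x, float y, float z) {
        if (z > THRESHOLD)  return "flat, face up";
        if (z < -THRESHOLD) return "flat, face down";
        if (y > THRESHOLD)  return "portrait, upright";
        if (y < -THRESHOLD) return "portrait, upside down";
        if (Math.abs(x) > THRESHOLD) return "landscape";
        return "tilted or in motion";
    }
}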

Table 1 shows new gestures included in the Android Lollipop release.

Table 1: Android* Lollipop’s new gestures

SENSOR_STRING_TYPE_PICK_UP_GESTURE: Triggers when the device is picked up, regardless of where it was resting before (desk, pocket, bag).
SENSOR_STRING_TYPE_GLANCE_GESTURE: Enables briefly turning on the screen so the user can glance at on-screen content in response to a specific motion.
SENSOR_STRING_TYPE_WAKE_GESTURE: Enables waking up the device based on a device-specific motion.
SENSOR_STRING_TYPE_TILT_DETECTOR: Generates an event each time a tilt event is detected.

These gestures are defined in the Android Lollipop source code at hardware/libhardware/include/hardware/sensors.h.
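On Lollipop and later, an application can check whether the device's HAL actually exposes any of these gesture sensors by matching Sensor.getStringType() (available since API 21) against the string values defined in sensors.h. The sketch below is a minimal probe; the string constants are taken from the Lollipop header and whether each sensor exists depends entirely on the device.

import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.util.Log;
import java.util.List;

// Logs which of the Table 1 gesture sensors the device's HAL exposes.
public final class GestureSensorProbe {
    private static final String[] GESTURE_TYPES = {
            "android.sensor.pick_up_gesture",
            "android.sensor.glance_gesture",
            "android.sensor.wake_gesture",
            "android.sensor.tilt_detector",
    };

    public static void logAvailableGestures(SensorManager sensorManager) {
        List<Sensor> all = sensorManager.getSensorList(Sensor.TYPE_ALL);
        for (Sensor sensor : all) {
            for (String type : GESTURE_TYPES) {
                if (type.equals(sensor.getStringType())) {
                    Log.i("GestureProbe", "Found gesture sensor: " + type);
                }
            }
        }
    }
}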

Gesture recognition process

The gesture recognition process consists of preprocessing, feature extraction, and template matching stages. Figure 4 shows the process.

Image 4

Figure 4. Gesture recognition process

In the following sections, we analyze each stage of the process.

Preprocessing

Data preprocessing starts after the raw data is collected. Figure 5 shows the gyroscope data when the device is flicked right once, and Figure 6 shows the corresponding accelerometer data.

Image 5

Figure 5. Gyroscope sensor data change graph (RIGHT FLICK ONCE)

Image 6

Figure 6. Accelerometer sensor data change graph (RIGHT FLICK ONCE)

We can write an Android application that sends sensor data over a network interface and a Python* script that runs on a PC to receive it, so that we can view the sensor graphs from the device dynamically.

This step contains the following items:

  • A PC running a Python script to receive the sensor data.
  • An application running on the DUT (device under test) that collects sensor data and sends it over the network.
  • An adb command to forward the port between the PC and the device (adb forward tcp:<port> tcp:<port>).

Image 7

Figure 7. How to dynamically show sensor data graphs
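A minimal sketch of the device-side sender is shown below. It assumes the DUT application listens on an arbitrary local port (38300 here) and writes accelerometer samples as CSV lines, so that after adb forward the PC-side Python script only needs to connect to localhost and plot whatever arrives.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Streams accelerometer samples as CSV lines over a local TCP socket.
// After "adb forward tcp:38300 tcp:38300", the PC-side script can connect
// to localhost:38300 and plot the incoming values in real time.
public class SensorStreamer implements SensorEventListener {
    private static final int PORT = 38300; // arbitrary choice of port
    private final BlockingQueue<String> samples = new LinkedBlockingQueue<String>();

    public void start(SensorManager sensorManager) {
        // Socket I/O runs on its own thread; the sensor callback only queues samples.
        new Thread(new Runnable() {
            @Override
            public void run() {
                try (ServerSocket server = new ServerSocket(PORT);
                     Socket client = server.accept();
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    while (!Thread.currentThread().isInterrupted()) {
                        out.println(samples.take());
                    }
                } catch (IOException | InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // timestamp (ns), x, y, z in m/s^2
        samples.offer(event.timestamp + "," + event.values[0] + ","
                + event.values[1] + "," + event.values[2]);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}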

In this stage we remove singularities and, as is common, apply a filter to cancel noise. The graph in Figure 8 shows the device being turned 90 degrees and then returned to its initial position.

Image 8

Figure 8. Remove drift and noise singularity[4]
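One common way to cancel this kind of noise is a simple exponential low-pass filter applied to each axis before further processing. The sketch below uses an arbitrary smoothing factor and is only meant to illustrate the idea.

// Simple exponential low-pass filter for a multi-axis sensor sample.
// ALPHA controls smoothing: smaller values track slow drift and gravity,
// larger values keep more of the quick motion. 0.2 is an illustrative value.
public final class LowPassFilter {
    private static final float ALPHA = 0.2f;
    private float[] filtered;

    public float[] apply(float[] input) {
        if (filtered == null) {
            filtered = input.clone();
            return filtered.clone();
        }
        for (int i = 0; i < input.length; i++) {
            filtered[i] += ALPHA * (input[i] - filtered[i]);
        }
        return filtered.clone();
    }
}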

Feature extraction

Sensor signals contain noise that can affect recognition results, which shows up in metrics such as the FAR (false acceptance rate) and FRR (false rejection rate). By fusing data from different sensors we can obtain more accurate recognition results. Sensor fusion[5] has been applied in many mobile devices; Figure 9 shows an example of using the accelerometer, magnetometer, and gyroscope to derive device orientation. Feature extraction commonly uses FFT and zero-crossing methods to obtain feature values. Because the accelerometer and magnetometer are easily disturbed by EMI, they usually need to be calibrated.

Image 9

Figure 9. Get device orientations using sensor fusion [4]

Features include maximum/minimum values and peaks and valleys; we extract these values and pass them to the next stage.
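As an illustration of this step, the sketch below scans one filtered accelerometer axis and records the indices of local peaks and valleys. The threshold is an arbitrary assumption that would need tuning per device.

import java.util.ArrayList;
import java.util.List;

// Finds indices of local peaks and valleys in one filtered axis of sensor
// data. A sample counts only if it stands out from both neighbors by more
// than THRESHOLD (an illustrative value in m/s^2).
public final class PeakValleyExtractor {
    private static final float THRESHOLD = 2.0f;

    public static List<Integer> findPeaks(float[] signal) {
        List<Integer> peaks = new ArrayList<Integer>();
        for (int i = 1; i < signal.length - 1; i++) {
            if (signal[i] - signal[i - 1] > THRESHOLD
                    && signal[i] - signal[i + 1] > THRESHOLD) {
                peaks.add(i);
            }
        }
        return peaks;
    }

    public static List<Integer> findValleys(float[] signal) {
        List<Integer> valleys = new ArrayList<Integer>();
        for (int i = 1; i < signal.length - 1; i++) {
            if (signal[i - 1] - signal[i] > THRESHOLD
                    && signal[i + 1] - signal[i] > THRESHOLD) {
                valleys.add(i);
            }
        }
        return valleys;
    }
}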

Template Matching

By simply analyzing the graph of the accelerometer sensor, we find that:

  • A typical left flick gesture contains two valleys and one peak
  • A typical left flick performed twice contains three valleys and two peaks

This implies that we can design a very simple state machine-based flick gesture recognizer. Compared to HMM[6]-based gesture recognition, it is more robust and achieves higher precision.
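A minimal sketch of such a state machine is shown below. It recognizes only the valley-peak-valley shape described above for a single flick on one accelerometer axis; the thresholds are illustrative assumptions, and a production version would also reset the state if too much time elapses between stages.

// Minimal state machine for the valley-peak-valley pattern of a single
// left flick on one accelerometer axis. Thresholds are illustrative.
public final class FlickStateMachine {
    private enum State { IDLE, FIRST_VALLEY, PEAK }

    private static final float VALLEY_THRESHOLD = -3.0f; // m/s^2 below rest
    private static final float PEAK_THRESHOLD = 3.0f;    // m/s^2 above rest

    private State state = State.IDLE;

    // Feed one filtered x-axis sample; returns true when a flick completes.
    public boolean onSample(float x) {
        switch (state) {
            case IDLE:
                if (x < VALLEY_THRESHOLD) state = State.FIRST_VALLEY;
                break;
            case FIRST_VALLEY:
                if (x > PEAK_THRESHOLD) state = State.PEAK;
                break;
            case PEAK:
                if (x < VALLEY_THRESHOLD) {
                    state = State.IDLE;   // second valley: gesture complete
                    return true;
                }
                break;
        }
        return false;
    }
}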

Image 10

Figure 10. Accelerometer and gyroscope graphs for a single and a double left flick

Case study: Intel® Context Sensing SDK

The Intel Context Sensing SDK[7] uses providers to transfer sensor data to the context sensing service. Figure 11 shows the architecture in detail.

Image 11

Figure 11. Intel® Context Sensing SDK and Legacy Android* architecture

Currently the SDK supports glyph, flick, and ear-touch gesture recognition. You can get more information from the latest release notes[8], and refer to the documentation to learn how to develop applications. The following figures show a device running the ContextSensingApiFlowSample sample application.

Image 12

Figure 12. Intel® Context SDK support for the flick gesture [7]

The flick directions supported by the Intel® Context Sensing SDK lie along the accelerometer's x and y axes; flicks along the z axis are not supported.

Image 13

Figure 13. Intel® Context SDK support for an ear-touch gesture [7]

Image 14

Figure 14. Intel® Context SDK support for the glyph gesture [7]

Image 15

Figure 15. Intel® Context SDK sample application (ContextSensingApiFlowSample)

Summary

Sensors are widely used in modern computing devices, and motion recognition is a significant differentiation feature that mobile devices can use to attract users and improve the user experience. The currently released Intel Context Sensing SDK v1.6.7 makes this kind of sensor usage simple to add to applications.

About the Author

Li Liang earned a Master's degree in signal and information processing from Changchun University of Technology. He joined Intel in 2013 as an application engineer working on client computing enabling. He focuses on differentiation enabling on the Android platform, for example, multi-window support.

References

[1] http://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface

[2] http://ishfdk.iil.intel.com/download

[3] http://cache.freescale.com/files/sensors/doc/app_note/AN4317.pdf

[4] http://www.codeproject.com/Articles/729759/Android-Sensor-Fusion-Tutorial

[5] http://en.wikipedia.org/wiki/Sensor_fusion

[6] http://en.wikipedia.org/wiki/Hidden_Markov_model

[7] https://software.intel.com/sites/default/files/managed/01/37/Context-Sensing-SDK_ReleaseNotes_v1.6.7.pdf

[8] https://software.intel.com/en-us/context-sensing-sdk

Useful links

https://source.android.com/devices/sensors/sensor-stack.html

https://graphics.ethz.ch/teaching/former/scivis_07/Notes/Slides/07-featureExtraction.pdf

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

