
Wearable Foot Gesture Based Text Entry System for People with Upper Body Disability

7 Aug 2016, CPOL, 27 min read
An assistive technology initiative for patients with upper body disability

Abstract

Upper body paralysis is one of the less common categories of paralysis. Central cord syndrome, cruciate paralysis and several other spinal cord or brain related abnormalities may result in such paralysis. We don't hear or talk about it much because it is not as widely observed as, for instance, paraplegia. Because cases are rare, it has drawn comparatively little research and development in the medical field, and you hardly see many papers on the subject. Paralysis aside, there are also other forms of abnormality that make the hands less usable. Numbness in the hands is one such medical problem, and it is not so uncommon. Many people also lose their hands in accidents. In short, there are people in this world who, due to their medical conditions, depend on their feet more than their hands.

The question that we need to ask here is: have we done enough as technologists for this group of people? In an age of IoT and wearables, when people talk about recording heartbeats in the cloud, fitness bands and sleep analysis, we must also try to adapt technology to help the group of people that genuinely needs technological support and assistance.

The image below of an Indian artist called Sheela Sharma was the main inspiration behind this work.

Image 1

Let's be frank, there has been a lot of work with various gesture technologies including head gesture, hand gesture, eye gesture, lip gesture. But have we done enough with foot gestures? Have our efforts to assist people with hand disability been sufficient?

We feel not. We feel we can do better, coming up with new technologies and creating solutions that can be used by the people who need them badly.

This project "Wearable Foot Gesture Based Text Entry System", is our effort to highlight the need for developing a feasible solution for people with upper body (especially hand) disability by providing them with a solution for typing text just by four movements of their feet: UP, DOWN, LEFT and RIGHT gestures.

The article has two components: an Android app, where we use a common smartphone for gesture recognition, and an Intel Edison based solution. We feel the Intel Edison based "pure embedded IoT" solution still needs a more practical wearable implementation, whereas the Android phone, being widely available, can be used immediately. We have built a framework for foot gesture recognition.

Looks simple? Technically speaking, it may look like quite an easy task, but try strapping an Arduino or a phone to your foot and then generating gestures with accuracy and minimum latency. Because foot movement is far more limited than hand movement, tracking, manipulating, filtering and using foot gestures becomes quite a challenging task.

In this article, we will not only try to cover the technical aspects, but also the physiological aspects of the system.

1. Introduction

Problem Statement

According to a WHO report, around 15% of the world's population currently lives with some form of disability. Assistive and rehabilitation solutions have been developed and used successfully to make patients' lives easier.

As upper body disability accounts for an extremely small fraction of all disability cases, not many dedicated solutions exist for these patients.

People with hand numbness or reverse paraplegia are dependent on their feet to carry out many of their routine tasks.

In order to develop assistive technology for this small group of disabled people, recognizing foot gestures becomes an important direction. People with upper body or hand disability face a lot of difficulty in their day to day activities, and typing or text entry is one area that definitely needs support.

Watch this amazing YouTube video of typing with the feet. One may well ask: why can't disabled patients simply use a regular keyboard with their feet, as in that video?

Because even such typing needs a specific keyboard position and sitting posture, which may not be viable in many cases.

So we introduce a foot gesture based text entry system, where foot gestures are recognized and converted into text. Here is a summary of the objective of our work.

Objective

The main objective of the project is to recognize foot gestures and convert them into typed text. The gestures are generated with a smartphone worn on the foot, and each of the four basic foot gestures (up, down, left, right) triggers a different action; two of these actions enter the dot and dash symbols. The sequence of these symbols is converted into an English character with the help of a Morse code converter module. For example, to type 'A' (dot-dash), the user makes a down gesture, then an up gesture, and then a right gesture to commit the character.

A Note on Foot Gesture Recognition

Gestures can be recognized broadly in two ways:

  1. A Computer Vision solution for gesture recognition and
  2. Wearable solutions

A computer vision solution needs the user to stay right in front of a camera all the time, though it is a relatively low cost and affordable option. A wearable, on the other hand, is an electronic item worn on the body: it may be a simple locket, a piece of clothing, a bracelet, a microcontroller device or a smartphone. Wearable solutions are typically devices meant for one specific purpose; the hearing aid is one such wearable medical device that almost all of us are familiar with.

Creating a dedicated medical solution at the production level needs a huge investment and a long timeline. Researchers therefore prefer a proof of concept, or prototype, to establish that the concept works. We keep our focus here on a wearable solution for a foot gesture based text entry system. Specifically, we present the proof of concept with smartphones, which are commonly available and used, and we also cover an early prototype of a pure IoT based solution developed with Intel Edison.

The simplest wearable is the smartphone (don't you carry it in your pocket or bag all the time?). Smartphones have various kinds of sensors such as the accelerometer, gyroscope, vibration sensor, ambient light sensor and so on. Out of these, the accelerometer is the prominent one: it measures the acceleration of hand or foot movement, which is exactly what games like Temple Run use.

With an understanding of the problem, our objective and the technical aspects of the system, let us describe our proposed framework in short.

The proposed system is, to our knowledge, the first of its type; we have not come across previous assistive work that uses foot gestures in this way. A smartphone worn on the foot acts as the wearable input device, and foot gestures are used to enter text and display messages. The result is a typewriter for people with upper body paralysis who can still use their feet: with only basic movements, they can type text. The system works with four gesture movements, i.e., up, down, left and right, and text is entered and displayed through these gestures.

Technical Concept of the Work

Hand gestures on smartphones are extremely popular, more so in mobile gaming. Aren't we all accustomed to Temple Run and tilting the phone to change the direction of the runner?

However, recognizing leg gestures needs a little more effort than just coding. As phones were never designed to be used with the legs, figuring out how to do it is a bit of a challenge.

There are various ways one can think of using the phone as a foot wearable. A common first idea is that putting it inside a sock might be optimal. But when we started working on this project, we realized that leg movements are much more restricted than hand movements. Putting the phone inside a sock eliminates the possibility of detecting any movement of the instep, so a sock based wearable would require the user to move the entire leg. Not all of us are that flexible with our legs (unless you are Messi or Cristiano Ronaldo). People with upper body disability often have stronger and more flexible legs; even then, asking them to move the entire leg in the air in all directions to generate gestures makes little sense.

After a lot of trial and error and experiments, we realized that strapping the phone on top of the instep with a band gives us better data. So we went ahead with a wearable solution for foot based typing, with the phone attached to the instep with a band.

The next challenge was to classify the accelerometer data from the mobile into gestures. We could have gone with techniques like fuzzy logic or machine learning, but we thought we could classify the data with simple linear logic (plain if-else rules). Because we wanted not only a mobile based solution but also to port it to an IoT device app, we logged accelerometer data in a measured way, performed MATLAB based analysis and finally created the simple logic for the desired gesture recognition.

This framework required about a couple of months of work. Once done, we could port the logic to Intel Edison in a matter of two days!

When a person moves his leg up or down, the accelerometer's z value varies: moving the leg up, the gravity component decreases; moving the leg down, it increases. Similarly, when the leg moves to the left the x value decreases, and when it moves to the right the x value increases.

The acceleration is measured and averaged. If the change in z dominates x and y, it is an up/down movement: decreasing z means up and increasing z means down. Similarly, if the change in x dominates z and y, it is a left/right movement: decreasing x means left and increasing x means right. Based on this simple principle, we developed a system that can recognize the four primary gestures.
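
To make this rule concrete, here is a minimal, illustrative C# sketch. The class and method names, the threshold value and the axis sign conventions are our assumptions for illustration only; the actual Android implementation shown later works on the differences between consecutive readings, with thresholds tuned by experiment and by phone placement.

C#
using System;

static class GestureRuleSketch
{
    // Hypothetical minimum change (in m/s^2) before a movement counts as a gesture.
    const float Threshold = 2.0f;

    // dx, dy and dz are the averaged changes in the accelerometer readings;
    // this simplified rule ignores dy and only compares dx and dz.
    public static string Classify(float dx, float dy, float dz)
    {
        // Change in z dominates: the foot moved up or down.
        if (Math.Abs(dz) >= Math.Abs(dx) && Math.Abs(dz) > Threshold)
            return dz < 0 ? "UP" : "DOWN";      // z decreasing => up, z increasing => down

        // Otherwise a sufficiently large change in x means a left/right tilt.
        if (Math.Abs(dx) > Threshold)
            return dx < 0 ? "LEFT" : "RIGHT";   // x decreasing => left, x increasing => right

        return "NONE";                          // change too small to be a deliberate gesture
    }

    static void Main()
    {
        Console.WriteLine(Classify(0.1f, 0.0f, -3.2f)); // UP
        Console.WriteLine(Classify(2.8f, 0.0f, 0.3f));  // RIGHT
    }
}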

Then we mapped the gestures to a Morse code based text entry system. As we had already worked on a similar project a couple of years back called Head Texter, this part wasn't too difficult. The next issue was to map the detected gestures, in real time, to the Morse code system.

Morse code was used extensively during the Second World War and forms the fundamental principle of the telegraph system. It relies on only two symbols, dot (.) and dash (-): every letter and digit has its own sequence, for example '.-' for A and '-...' for B.

We take advantage of this wartime scheme and map the primary UP and DOWN motions to DASH (-) and DOT (.): an up gesture of the foot enters a dash and a down gesture enters a dot. A left gesture deletes the current sequence, and a right gesture confirms it (OK). Remember the Interstellar tesseract scene, where complex quantum data encoded into Morse code finally saves the world? (As our real time Morse code based text entry system video and article were out at least three years before Interstellar, we can safely assume that Mr. Nolan must have been reading CodeProject articles!)

We used the C# interface developed in the course of the Head Texter project to convert gestures into text. The last piece of the puzzle was sending the gestures from the wearable device to the PC. We use MQTT, the most popular IoT protocol and almost a de facto standard in M2M communication. As most IoT devices, operating systems and programming languages support MQTT, we went ahead with it for sending the detected gestures from the phone to the PC.

Well, that's the story of this project: why we started, what we did and the technologies we used. But we want to elaborate a bit more on the technical and other aspects of the project before we finally shift to coding.

Scope of the Work

Even though the system is primarily developed as a text entry assistant for patients with upper body disability, it can be used in other assistive use cases, both for patients and, in many cases, for able-bodied users (how about changing the TV channel just by a leg movement while watching your favourite show from your cozy couch?).

This work can be used for controlling various kinds of applications: robotics, home automation, army vehicles, remote automobile control, and television control, where a person lying down at night and watching television can change the channel or mute the sound using foot gestures.

Limitations

There are a few limitations. Gesture classification works on raw sensor values, and foot movement differs from one user to another, so a user needs a fair bit of training before gestures are recognized accurately. In addition, because an intermediate protocol is used to transmit the data, there is some transmission latency.

2. Requirement Specifications

Hardware Requirement

  1. PC (minimum 4 GB RAM, Intel Core i3 or above processor)
  2. Android mobile (Android 4.0 and above)
  3. Internet connectivity (Wi-Fi)
  4. Intel Edison board, Grove shield and accelerometer sensor
  5. Power bank for the Edison

Software Requirement

  1. Windows 7 and above operating system
  2. Framework: .NET Framework 4.0 and above
  3. IDE: Visual Studio 2012 and above, Android Studio
  4. Intel XDK IoT Edition for developing the device app
  5. PuTTY for communicating with the device
  6. Mosquitto broker for Windows, with Win32 OpenSSL
  7. Languages: C#, Java (Android), Node.js (IoT)

3. System Design

Some of us are never done without a reference design! In this chapter, we focus a little on the design. Even though the summary of the methodology has already been explained, it is always a good idea to follow an SRS, so we elaborate some of the concepts with design diagrams to make them easier for the reader to understand.

System Architecture

Image 2

Figure: System Architecture

The figure above shows the system architecture of our proposed foot gesture based typing mechanism. The system comprises a smartphone used as a wearable, tied to the user's foot with the help of a wrist band. The smartphone's accelerometer measures the acceleration of the foot movement; the acceleration data is collected for every movement of the foot, aggregated, and classified into a set of gestures. The gesture data is then transferred to the PC through an MQTT broker, the server/protocol that connects the PC app with the smartphone app. The PC, on the other side, runs the Morse code converter module, which converts the gesture data into dashes and dots and then into English characters. Whenever the user moves the foot, the smartphone app identifies the movement, classifies it into the gesture set and publishes the result to the MQTT broker, which delivers the data to the PC that has subscribed for it.

The PC receives the gesture data from the MQTT broker and converts it into a dash (-) or a dot (.) based on the foot movement. The dot and dash sequence is then converted into an English character with the help of the Morse code. There is a different gesture for each action: the up movement of the foot enters the dash (-) symbol, the down movement enters the dot (.) symbol, the right movement displays the character corresponding to the current sequence of dots and dashes, and the left movement deletes the sequence.

As described in the introduction, Morse code encodes every character using only two symbols, dot (.) and dash (-). We map the up gesture to dash and the down gesture to dot; a left gesture deletes the last symbol of the current sequence, and a right gesture acts as OK, committing the sequence as a character.
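
To tie the pieces together, here is a compact, illustrative sketch of the gesture-to-character pipeline. The class name and the trimmed Morse dictionary are ours for illustration only; the real converter, with the full Morse table, appears in the C# code in the Coding section.

C#
using System;
using System.Collections.Generic;

static class GestureToTextSketch
{
    // A trimmed Morse table for illustration; the real app maps all letters and digits.
    static readonly Dictionary<string, char> Morse = new Dictionary<string, char>
    {
        { ".-", 'a' }, { "-...", 'b' }, { "-..", 'd' }, { ".", 'e' }
    };

    static void Main()
    {
        // DOWN = dot, UP = dash, LEFT = delete last symbol, RIGHT = commit the character.
        string[] gestures = { "DOWN", "UP", "RIGHT", "DOWN", "RIGHT" };
        string sequence = "";

        foreach (string g in gestures)
        {
            if (g == "DOWN") sequence += ".";
            else if (g == "UP") sequence += "-";
            else if (g == "LEFT" && sequence.Length > 0)
                sequence = sequence.Remove(sequence.Length - 1);
            else if (g == "RIGHT" && Morse.ContainsKey(sequence))
            {
                Console.Write(Morse[sequence]); // prints 'a', then 'e'
                sequence = "";
            }
        }
        Console.WriteLine();                    // final output: "ae"
    }
}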

Dataflow Diagram

Level 0 DFD

Image 3

Figure: Level 0 DFD

Level 1 DFD

Image 4

Figure: Level 1 DFD

Use Case Diagram

Image 5

Figure: Use Case diagram

Sequence Diagram

Image 6

Figure: Sequence diagram

Activity Diagram

Image 7

Figure: Activity Diagram

4. Coding

Android Code

Image 8

Figure: Android layout design

The UI of the project is rather simple. There is only one EditText, named edServer, where the user enters the MQTT broker address. The Connect button is named btnConnect. When the user clicks the button, the sensor manager is activated and starts sensing data, and the classified data is published to an MQTT topic on the broker.

A TextView named tvStatus is provided for debugging purposes, so that accelerometer data, MQTT messages and classification results can be displayed.

The layout of the above UI is as given below:

XML
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent"
    android:layout_height="match_parent" 
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:paddingBottom="@dimen/activity_vertical_margin" 
    tools:context=".MainActivity">

    <TextView android:text="Foot Gesture Recognition[MqTT Topic: rupam/gesture]" 
              android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/tvHello" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textAppearance="?android:attr/textAppearanceSmall"
        android:text="MqTT Server"
        android:id="@+id/tvServer"
        android:layout_marginTop="56dp"
        android:layout_below="@+id/tvHello"
        android:layout_alignParentLeft="true"
        android:layout_alignParentStart="true" />

    <EditText
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/edServer"
        android:width="200dp"
        android:layout_alignTop="@+id/tvServer"
        android:layout_toRightOf="@+id/tvServer"
        android:layout_toEndOf="@+id/tvServer"
        android:text="192.168.1.3" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Connect"
        android:id="@+id/btnConnect"
        android:layout_below="@+id/edServer"
        android:layout_centerHorizontal="true" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textAppearance="?android:attr/textAppearanceSmall"
        android:text="Status"
        android:id="@+id/tvStatus"
        android:layout_centerVertical="true"
        android:layout_centerHorizontal="true" />

</RelativeLayout>

We use the Eclipse Paho MQTT client library jar for the MQTT implementation. Go to the physical project directory in your file explorer, create a folder called lib and paste the jar file there. Then, in Android Studio's project pane, switch to the Project view, find the jar under lib in the tree, right click it and select 'Add as Library'.

Gradle build will rebuild the project in the background.

Implement SensorEventListener from your MainActivity class.

Java
public class MainActivity extends Activity implements SensorEventListener

Declare an IMqttClient called sampleClient, which will be used for communicating with the broker.

A Sensor class object, sensor, will be used to listen for changes in accelerometer data and process them. edServer, tvStatus and btnConnect are the Java fields corresponding to the UI elements; they are initialized in the onCreate() method using findViewById(). We use an MQTT topic called "rupam/gesture", which is stored in a variable named topic.

Java
private SensorManager sensorManager;
private Sensor sensor;
EditText edServer;
Button btnConnect;
TextView tvStatus;

String sAddress = "iot.eclipse.org";
String sUserName = null;
String sPassword = null;
String sDestination = null;
String sMessage = null;

String topic        = "rupam/gesture";
String content      = "FORWARD";
int qos             = 0;
String broker       = "tcp://iot.eclipse.org:1883";
//String broker       = "tcp://192.168.1.103:1883";
String clientId     = "RUPAM_DAS";
MemoryPersistence persistence = new MemoryPersistence();
IMqttClient sampleClient;

As Android doesn't permit network calls on the main thread, we wrap the network calls (connecting to the broker and publishing messages) in AsyncTask. For connecting to the broker, we use a ConnectionClass.

Notice the Looper.prepare() call in the connection method. We realized that if you target Android 5 or above, the AsyncTask just doesn't hand control back to the main thread. After hours of searching in forums, we found on Stack Overflow that without this statement the control is simply not handed back to the UI thread. Also, if you do not guard the call with a check that a Looper is already initialized, it will keep creating new Looper instances, which will crash your app in no time.

The Paho client expects the MQTT broker address to have the tcp:// prefix. Once connected, we can publish messages through sampleClient. As we do not need to receive any data on the phone, we do not subscribe the client to any topic.

Java
////////CONNECTION CLASS/////
public class ConnectionClass extends AsyncTask<ActivityGroup, String, String>
{
    Exception exc = null;

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }
    @Override protected String doInBackground(ActivityGroup... params) {
        try
        {
            if (Looper.myLooper()==null)
                Looper.prepare();
            sampleClient = new MqttClient(broker, clientId, persistence);

            //    sampleClient=new MqttClient(broker,clientId);
            MqttConnectOptions connOpts = new MqttConnectOptions();
            connOpts.setCleanSession(true);
            IMqttToken imt=sampleClient.connectWithResult(connOpts);

            Log.d("MQTT MODULE.....","....DONE..."+
            sampleClient.getServerURI()+"--"+imt.getResponse().getPayload());

            if(sampleClient.isConnected()) {
                return "CONNECTED";
            }
            else
            {
                return "Connection Failed.....";
            }
        }
        catch(Exception ex )
        {
            Log.d("MQTT MODULE", "CONNECTION FAILED " + 
            ex.getMessage() + " broker: " + broker + " clientId " + clientId);
            //   Toast.makeText(MainActivity.this, "FAILED", Toast.LENGTH_LONG).show();
            // tv2.setText("Failed!!");
            return "FAILED";
        }
        // return null;
    }
    @Override protected void onPostExecute(String result) {
        super.onPostExecute(result);

        if(result!= null)
        {
            isConnected=true;
            tvStatus.setText(result);          
        }
    }
}

Publishing (sending messages) is a less resource intensive task, so we avoid creating a separate class for it. As there will be many messages, we also don't need to notify the user about every message sent. Therefore, we simply run the publish on a background thread through AsyncTask.execute() with an anonymous Runnable.

Java
void Send(String content)
    {
        final String data=content;
        isConnected =sampleClient.isConnected();
        AsyncTask.execute(new Runnable()
        {
            @Override
            public void run()
            {
                try
                {
                    if(isConnected)
                    {
                        MqttMessage message = new MqttMessage(data.getBytes());
                        message.setQos(qos);
                        sampleClient.publish(topic, message);
                    }
                    else
                    {
                        //  Connect();
                        broker="tcp://"+edServer.getText().toString().trim()+":1883";
                        ConnectionClass con=new ConnectionClass();
                        con.execute();
                    }
                    Log.d("MQTT MODULE",data+" SENT");
                }
                catch(Exception ex)
                {
                }
                //TODO your background code
            }
        });
    }

In the onCreate() method, we initialize the UI components and register a listener for the ACCELEROMETER sensor. We also attach a click listener to btnConnect, where we execute() a ConnectionClass object that attempts to connect to the broker in the background.

Java
@Override
   protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
       sensorManager.registerListener
       (this, sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
        sensorManager.SENSOR_DELAY_UI);
       edServer=(EditText)findViewById(R.id.edServer);
       btnConnect=(Button)findViewById(R.id.btnConnect);
       tvStatus=(TextView)findViewById(R.id.tvStatus);
       btnConnect.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               // Connect();;
               broker="tcp://"+edServer.getText().toString().trim()+":1883";
               ConnectionClass con=new ConnectionClass();
               con.execute();
           }
       });
   }

Finally, we look into the most important part of the code that transforms the accelerometer data to gestures.

Java
float prevX=-1,prevY=-1,prevZ=-1;
    float DX=0;
    long lastUpdate = System.currentTimeMillis();
    int WAIT=1000;
    boolean detected=false;
    @Override
    public void onSensorChanged(SensorEvent event) {
        String command="";
        if(!isConnected)
        {
            return;
        }
        float[] values = event.values;

        // Movement
        float x = Round(values[0], 1);
        float y = Round(values[1], 1);
        float z = Round(values[2], 1);
        long actualTime = System.currentTimeMillis();
        if ((actualTime - lastUpdate) > WAIT)
        {
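            // A gesture was recognized in the previous window; skip this window so the
            // user has time (the extended WAIT period) to bring the foot back to neutral.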
            if(detected)
            {
                detected=false;


                WAIT=2000;
                return;
            }
            else
            {
                WAIT=1000;
            }
            long diffTime = (actualTime - lastUpdate);
            lastUpdate = actualTime;

        ///// Make the Calculations Here//////
        float diffX=x-prevX;
        float diffY=y-prevY;
        float diffz=z-prevZ;
        Log.d("Sensors Change:","X="+diffX+" Y="+diffY+" Z="+diffz);
            if(diffY>5.0 )//&& Math.abs(diffz)>.9
            {
                Log.d("RECOGNIZED", "\n\n UP \n\n");
                detected=true;
                WAIT=3000;
                Send("UP");
            }
            if(diffY<-2.0 )//&& Math.abs(diffz)>.9
            {
                Log.d("RECOGNIZED","\n\n DOWN \n\n");
                detected=true;
                WAIT=3000;
                command="DOWN";
                //Send("DOWN");
            }
            if(diffX<-4.1 )//&& Math.abs(diffz)>.9
            {
                if(!detected) {
                    Log.d("RECOGNIZED", "\n\n RIGHT \n\n");
                    detected = true;
                    WAIT=3000;
                    command="RIGHT";
                    //Send("RIGHT");
                }
            }
            if(diffX>.9 && diffX<3.5 )//&& Math.abs(diffz)>.9
            {
                if(!detected)
                {
                    Log.d("RECOGNIZED", "\n\n LEFT \n\n");
                    detected = true;
                    WAIT=3000;
                    //Send("LEFT");
                    command="LEFT";
                }
            }
        //////////////////////
        /// Finally Update the past values with Current Values
        if(prevX!=-1)
        {
            DX=DX+diffX;
                prevX = x;
                prevY = y;
                prevZ = z;
        }
        else
        {
                prevX = x;
                prevY = y;
                prevZ = z;
        }
            if(detected)
            {
                if(command.length()>1) {
                    Send(command);
                    command = "";
                }
            }
        }
        ///////////////
    }

We assume that the phone is strapped to the top of the user's right instep. We use a detection technique commonly known as displacement based detection. Observe that we keep global prevX, prevY and prevZ values: the current x, y and z readings of the accelerometer are subtracted from the previous ones. When the phone is placed horizontally, a change in gravity is reflected through z, but only to a limited extent. Try to lift your instep and you will realize that the change in y is much more significant than the change in z, because you are only tilting your instep rather than lifting the entire phone. Similarly, the difference will be negative when you bring your instep down.

This changes if you are working with hand gestures: when a user holds the phone in the hand, they can easily lift the phone up and down, varying z significantly.

Now try to tilt your instep to the left and to the right (with the right leg). You will realize that tilting the right instep to the left side is difficult, and you will not be able to do it properly. The same is true for the hand: try to rotate your right wrist to the left while keeping the rest of the hand steady. You will not get far unless you are as talented as Roger Federer!

But the constraint is less limiting for hands, as you can comfortably tilt the phone to the left with the help of your elbow. We were stuck on this gesture for several days, and then we accidentally discovered a workaround.

If you keep your leg steady and, instead of twisting the instep at the ankle, rotate the leg from the thigh towards the left, the instep automatically tilts to the left. A negative change in x reflects RIGHT, whereas a positive, and rather small, variation in x reflects LEFT.

Once a gesture is detected, the user needs to bring the instep back to the neutral position, so we allow about two seconds for that. This padding time prevents repeated gestures from being sent.

Detection of a gesture triggers an MQTT publish through the Send() method.

All you need to do is launch the app, connect to the broker and strap the phone on top of your instep. Now generate the gestures.

Mqtt Broker

For testing the app, you can use iot.eclipse.org. But as that broker is remote, there is significant latency in data communication. The foot gesture based text entry system is itself a little time consuming, so you wouldn't want to add the network latency of a remote server on top of it, right?

So we will go with a local MQTT broker for Windows. The Windows build allows only 1063 connections or clients (we do not know why), but as we connect only one client to our broker, an MQTT broker running on the local machine will do just fine.

You need to install Win32 OpenSSL first.

Now download and install Mosquitto Broker for Windows

Now go to the OpenSSL installation directory, which should be C:\OpenSSL-Win32, copy the pthreadVC2.dll file and use it to replace the one in the Mosquitto installation directory. That's it.

From a command prompt, go to the Mosquitto directory and run mosquitto.exe to see the broker running on your system. To close the broker, use Ctrl+C. The IP address of the broker is your PC's local IP address, which you can obtain with the ipconfig command.

C# Based Gesture to Morse Code Converter

Like many here on CodeProject, C# is our first love. The language is robust, and nothing beats it when you want to develop a native Windows desktop app. The other reason for preferring C# is the availability of good libraries and a fast, easy development cycle, which is important for prototyping. Just like the Android app, this is a single form app with a provision for connecting to the broker. The Android app publishes to the topic; in C#, we subscribe to it. When a message is received (in binary format), we convert it to a string. The received gesture is interpreted and passed to the Morse code converter: the symbols are appended, a LEFT gesture deletes the last symbol, and a RIGHT gesture converts the current set of symbols into a character (watch the video for a better understanding).

If you look at the snapshot of our C# interpreter's UI (in the screenshots below), you can see a Start Broker button. We want to start the broker from within our app. This is simple using the System.Diagnostics.Process class of .NET: we first locate the Mosquitto installation and programmatically start the server. If the server was already running, we close it and restart it.

C#
[DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);
        private void button1_Click(object sender, EventArgs e)
        {
            string s=Environment.GetFolderPath
                     (Environment.SpecialFolder.ProgramFilesX86);
            s = s + "\\mosquitto\\mosquitto.exe";
            //System.Diagnostics.Process.Start(s);
            procMqtt = new System.Diagnostics.Process();
            procMqtt.StartInfo = new System.Diagnostics.ProcessStartInfo(s," -v");
            procMqtt.StartInfo.UseShellExecute = true;
            Process[] pname = Process.GetProcessesByName("mosquitto");
            for (int i = 0; i < pname.Length; i++)
            {
                pname[i].Kill();
            }
            procMqtt.Start();
            System.Threading.Thread.Sleep(2000);
            SetParent(procMqtt.MainWindowHandle, groupBox2.Handle);
        }

On the "Connect to Broker" button, we initialize the MQTT client object, point it to the address of the server (the local IP address of the PC where you are running the app) and subscribe to messages by adding an event handler to MqttMsgPublishReceived.

C#
MqttClient mc = null;
       System.Diagnostics.Process procMqtt;
       private void button2_Click(object sender, EventArgs e)
       {
           try
           {
               var ip = IPAddress.Parse(labIP.Text);
               mc = new MqttClient(ip);
               mc.Connect("RUPAM");
               mc.Subscribe(new string[]{topic},new byte[]{(byte)0});
               mc.MqttMsgPublishReceived += mc_MqttMsgPublishReceived;
               mc.MqttMsgSubscribed += mc_MqttMsgSubscribed;

               MessageBox.Show("Connected");
               mc.Publish(topic, GetBytes("VM Broker Started"));
           }
           catch
           {
           }
       }

In order to make it easy for the user to find out the server address (the PC's local IP address), we fetch it in the form's Load event.

C#
public string LocalIPAddress()
       {
           IPHostEntry host;
           string localIP = "";
           host = Dns.GetHostEntry(Dns.GetHostName());
           foreach (IPAddress ip in host.AddressList)
           {
               if (ip.AddressFamily == AddressFamily.InterNetwork)
               {
                   localIP = ip.ToString();
                   break;
               }
           }
           return localIP;
       }

This is done by looping through the host's DNS address list and picking the first IPv4 address.

As the MQTT client runs on a different thread, if you want to access UI elements from the MqttMsgPublishReceived event handler, you need to marshal the call back to the UI thread with Invoke and a delegate. Recall that MQTT messages arrive as bytes, which need to be converted to characters.

While trying to convert the bytes received from the Android phone in our C# app, we hit a roadblock: the Encoding class could not convert the bytes into ASCII characters. In our tests, the payload from Android carried two bytes per character (UTF style), the first byte being the character code and the second '\0'. So we tweaked the conversion code ourselves.

C#
void mc_MqttMsgPublishReceived(object sender, 
     uPLibrary.Networking.M2Mqtt.Messages.MqttMsgPublishEventArgs e)
        {
            //throw new NotImplementedException();
            //MessageBox.Show(GetString(e.Message));
            this.Invoke((MethodInvoker)delegate
            {
                if (e.Message[1] == (byte)0)
                {
                  //  listBox1.Items.Add(GetString(e.Message));
                }
                else
                {
                    try
                    {
                        string command = "";
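                        // Each byte received here is already the ASCII code of the character
                        // sent from the phone; ('A' + (code - 65)) simply casts it back to a char.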
                        for (int i = 0; i < e.Message.Length; i++)
                        {
                            //  command = command + ('A' + ((int)e.Message[i] - 64));
                            command = command + ((char)('A' + 
                                      ((int)e.Message[i] - 65))).ToString();
                        }
                        if (command.Equals("UP"))
                        {
                            txtCommands.Text = txtCommands.Text + "-";
                        }
                        if (command.Equals("DOWN"))
                        {
                            txtCommands.Text = txtCommands.Text + ".";
                        }
                        if (command.Equals("LEFT"))
                        {
                            txtCommands.Text = txtCommands.Text.Substring
                                               (0, txtCommands.Text.Length - 1);
                        }
                        if (command.Equals("RIGHT"))
                        {
                            txtTyping.Text = txtTyping.Text + 
                               ConvertMorseToText(txtCommands.Text);
                            txtCommands.Text = "";
                        }
                    }
                    catch
                    {
                    }
                }
            });
        }

Note that we append DASH (-) or DOT (.) on the UP and DOWN commands and call a method named ConvertMorseToText() to convert the set of symbols into text. This function is basically a mapping from Morse code to English characters.

C#
#region Morse code related part
       private Char[] Letters = new Char[] {'a', 'b', 'c', 'd', 'e', 'f', 'g',
       'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u',
       'v', 'w', 'x', 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8',
       '9', ' '};
       private String[] MorseCode = new String[] {".-", "-...", "-.-.",
       "-..", ".", "..-.", "--.", "....", "..", ".---", "-.-", ".-..",
       "--", "-.", "---", ".--.", "--.-", ".-.", "...", "-", "..-",
       "...-", ".--", "-..-", "-.--", "--..", "-----", ".----", "..---",
       "...--", "....-", ".....", "-....", "--...", "---..", "----.", "----"};
       public String ConvertMorseToText(String text)
       {
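           // Wrap the string in '@' delimiters (and turn spaces into "@@") so that each
           // complete Morse letter can be matched and replaced with its character below.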
           text = "@" + text.Replace(" ", "@@") + "@";
           int index = -1;
           foreach (Char c in Letters)
           {
               index = Array.IndexOf(Letters, c);
               text = text.Replace("@" + MorseCode[index] +
                      "@", "@" + c.ToString() + "@");
           }
           return text.Replace("@@@@", " ").Replace("@", "");
       }

We wrap the sequence in '@' delimiters and use simple string replacement to find each matching Morse sequence and substitute its character; for example, passing ".-" returns "a".

That's it! Now, as you generate gestures with your foot, the letters get typed on the screen.

Intel Edison Device App

You can follow the article Gesture Recognition in Intel Edison by Moumita Das to see how the concept can be implemented purely at the device level.

5. Results and Discussions

Performance Analysis

Test values in tabular form

Reading | 1 sequence | Time (sec) | 2 sequences | Time (sec) | 3 sequences | Time (sec) | 4 sequences | Time (sec)
1  | Ok | 4 | Ok   | 6  | Ok | 10 | Ok | 14
2  | Ok | 3 | Ok   | 8  | Ok | 11 | Ok | 16
3  | Ok | 5 | Ok   | 8  | Ok | 11 | Ok | 14
4  | Ok | 6 | Ok   | 8  | Ok | 11 | Ok | 17
5  | Ok | 4 | Ok   | 6  | Ok | 18 | Ok | 10
6  | Ok | 5 | Ok   | 7  | Ok | 10 | Ok | 20
7  | Ok | 5 | Ok   | 7  | Ok | 10 | Ok | 14
8  | Ok | 5 | Ok   | 7  | Ok | 11 | Ok | 16
9  | Ok | 5 | Ok   | 11 | Ok | 12 | Ok | 14
10 | Ok | 4 | Fail | -  | Ok | 11 | Ok | 14
Table: Test cases to find the accuracy of the alphabets

Average times per sequence length:

  • Average time for a 1-symbol sequence = 4.6 sec
  • Average time for a 2-symbol sequence = 7 sec
  • Average time for a 3-symbol sequence = 11.5 sec
  • Average time for a 4-symbol sequence = 15 sec
  • Total average time per character = 9.52 sec (approximately the mean of the four averages above)

The table above describes the testing of the work. We mounted the smartphone on the user's leg and tested sequences of each length ten times. For a sequence of length 1 (a single dot or dash) we used the letter 'E', whose code is a single dot: all ten attempts produced the correct output, the actual result matched the expected one, and the average time was 4.6 seconds.

Similarly, for sequences of length 2 we used the letter 'A' (dot-dash, .-) and tested ten times: nine attempts were correct and one failed, the average time was 7 seconds, and the actual output matched the expected one. Next, for sequences of length 3 we used the letter 'D' (dash-dot-dot, -..) and tested ten times: all attempts were correct, with an average time of 11.5 seconds.

Lastly, for sequences of length 4 we used the letter 'B' (dash-dot-dot-dot, -...) and tested ten times: all attempts were correct, with an average time of 15 seconds.

The total average time per character is 9.52 seconds. The number of sequences and the corresponding times are shown below:

Image 9

Figure: Performance Graph

The table values are also plotted as a graph of the number of sequences versus the average time taken; one axis shows the average time in seconds and the other the number of symbols in the sequence. One symbol takes 4.6 seconds, two symbols take 7 seconds, three symbols take 11.5 seconds and four symbols take 15 seconds.

Result: The system is usable by people with various upper body disabilities, but it has a few drawbacks. At the beginning, it keeps registering input for about two minutes: the human body is never perfectly still (heartbeat and small tremors keep it in slight motion), so the PC app receives a few inputs even when no deliberate gesture is made, since the Android application's accelerometer is continuously sensing foot movement. Apart from these drawbacks, the output is accurate whenever the Wi-Fi connection is fast, and the system is applicable in various fields. As this is the first version of the work, the remaining drawbacks, particularly the latency introduced by the protocol, can be reduced with further work.

Screenshots

a. Sensor Data Classifier

Image 10

Figure 5.2 Screenshot: Android app(Sensor data classifier)

b. Application on PC

Image 11

Figure 5.3 Screenshot : Application on PC

c. Different Foot Gestures

Image 12

Figure 5.4 Screenshot: Up gesture for entering a dash (-). Here the z axis decreases.

Image 13

Figure 5.5 Screenshot: Down gesture for entering a dot (.). Here the z axis increases.

Image 14

Figure 5.6 Screenshot: Right Gesture for printing the character from sequence. Here x axis increases.

Image 15

Figure 5.7 Screenshot: Left Gesture for deleting a sequence. Here x axis decreases.

Image 16

Figure 5.8 Screenshot: Mobile app sending gesture data to the pc

Image 17

Figure 5.9 Screenshot: Displaying the text on the screen

6. Conclusion

Generally, there are many paralyzed people in the world with upper body disability who cannot move their upper body; certain cases of conditions such as Alzheimer's disease and paralysis fall into this category. There has been an enormous amount of computing advancement for people with lower body disability, assisting them through hand gesture based systems, but there has been no significant computational work based on foot gestures. In this work, we developed an application for classifying foot gestures and extended our gesture library into a specific application called the Foot Gesture Based Text Entry System. Even though the text entry system is quite slow, it can still serve as a prototype for research into assistive technology for people whose upper body is paralysed.

Future Work

The work still has to deal with the latency introduced by the protocols. It can be further improved by moving gesture recognition into a dedicated wearable device, such as a smartwatch, which would significantly reduce both the transmission and the gesture recognition latency; under those conditions, the accuracy of the text entry system can also be improved.

  1. The work can be applied to various domains like robotics, home automation and so on.
  2. Further, the framework can be extended to classify more gestures, such as a shake gesture or moving the instep in a circular way.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


