
Dictating Text Messages from a Windows Store Application using Plantronics and Twilio

17 Jan 2014 · CPOL · 8 min read
In this article, I will show you how to combine Bing Speech Recognition, a Plantronics Voyager Legend UC Bluetooth headset and Twilio to bring text messaging capabilities to a Windows Store application.

This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.

Introduction

Text messaging is a convenient and popular form of communication. We often need to relay small bits of information to another person: a friend's phone number, an account number, a task reminder, a status update, or some other important tidbit. Because a text message is stored on the recipient's device, it also serves as a kind of unstructured database of this useful information.

We commonly send text messages from our mobile phones, but we can extend that reach even further. In this article, I will show you how to combine Bing Speech Recognition, a Plantronics Voyager Legend UC Bluetooth headset and Twilio to bring text messaging capabilities to a Windows Store application.

Application Overview 

This application receives dictation from the user through a Plantronics headset microphone. Dictation can be initiated either by a hardware button press on the headset itself or by a button click on the application's user interface. The dictation is then captured and processed into text using the Bing Speech Recognition Control and Service. Once the text is available, the user has the option of sending it as an SMS message to a mobile phone via the Twilio service, also initiated by a button click on the application's user interface.

Getting Started

This project makes use of Plantronics hardware as well as a couple of external services. To get started, download and install the Plantronics SDK from the Plantronics Developer Connection site.

Next we will need to download and install the Bing Speech Recognition Control for Windows 8.1 Visual Studio extension. In order to use this control, you must also sign up for the service through the Windows Azure Marketplace and create an account there if you don’t already have one. After you have an account and are signed up for the service, you will need to register your application. Click on "My Account", then access the "DEVELOPERS" section from the left-hand menu. Under the Registered Applications tile, click on the REGISTER button.

Image 1

Register your application with a Client ID, Name and URL of your choosing. Be sure to record the Client ID and the Client Secret for use in your source code.

Image 2

You will now see the application as registered and active in your account.

Image 3

You will also need a Twilio account if you don't already have one; a trial account will suffice for this project. Make note of your Twilio phone number, as well as the mobile phone number you used when registering if you are on a trial. With a trial account, these are the only valid phone numbers you can use for sending and receiving text messages. You will also need to record your account SID and authentication token, which are available on your Twilio Dashboard page, for use in your source code.

Image 4

Lastly, let's create a project in Visual Studio 2013. Create a JavaScript Windows Store application using the "Navigation App" template. I've named my project "PlantronicsIntegration".

Image 5

Replace the markup in pages\home\home.html with the following UI:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>home</title>
 
    <!-- WinJS references -->
    <link href="http://www.codeproject.com/Microsoft.WinJS.2.0/css/ui-light.css" rel="stylesheet" />
    <script src="http://www.codeproject.com/Microsoft.WinJS.2.0/js/base.js"></script>
    <script src="http://www.codeproject.com/Microsoft.WinJS.2.0/js/ui.js"></script>
    <link href="home.css" rel="stylesheet" />
    <script src="home.js"></script>
</head>
<body>
    <div class="home fragment">
        <header aria-label="Header content" role="banner">
            <button data-win-control="WinJS.UI.BackButton"></button>
            <h1 class="titlearea win-type-ellipsis">
                <span class="pagetitle">Hello Plantronics</span>
            </h1>
        </header>
        <section aria-label="Main content" role="main">
            <div id="indicator" style="width:50px;height:50px;" ></div>
           
            <button id="btnDictate">Dictate Message</button>
            <button id="btnSendSMS">Send SMS Message via Twilio</button>
 
          
            <div id="ResultText" style="background-color:goldenrod"></div>
 
            <div id="message"></div>
 
        </section>
    </div>
</body>
</html>

This is a quick and simple UI. It contains an indicator that shows whether a headset is currently connected to the machine, a button to initiate text dictation, and a button to send an SMS message via Twilio. There is also a "ResultText" div that displays the words of a dictation back to the user, and finally a "message" div that provides status and error messages to the user of the application.

Communicating with the Plantronics Headset

The first thing we will implement is the interaction between our application and the physical headset. This is made possible through a REST API: a self-hosted service provided by the Plantronics Unified Runtime Engine (Plantronics URE). The first step in our implementation is to ensure the runtime is in fact running on our system. If it is not, you can locate the executable where you installed the Plantronics SDK; by default it is in a path similar to the following:

C:\Program Files (x86)\Plantronics\Plantronics SDK\PlantronicsURE.exe

Our Windows Store application will interact with the RESTful services exposed through the Plantronics URE; these services, in turn, communicate with the headset hardware directly.
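
Before wiring up the full device flow, it can be helpful to confirm that the URE's REST endpoint is actually reachable. The following is a minimal sketch of such a check, not part of the article's implementation: it reuses the DeviceList endpoint and port (32001) used throughout this article, relies on the showMessage helper defined in the next section, and the failure text is simply an assumption about what you might want to tell the user.

JavaScript
function checkPlantronicsRuntime() {
    //probe the Plantronics URE REST endpoint before doing anything else
    var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/DeviceList";
    return WinJS.xhr({ url: uri }).then(
        function () {
            showMessage("Plantronics URE is reachable.");
            return true;
        },
        function () {
            //a connection failure here usually means PlantronicsURE.exe is not running
            showMessage("Could not reach the Plantronics URE. Is PlantronicsURE.exe running?");
            return false;
        });
}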

Image 6

Open "pages\home\home.js" for edit and within the root function, add the following code that we will reuse during our implementation. The first method "plantronicsGenericVerifyNoError" is a method that expects a formatted response from the Plantronics URE REST API and checks to see if an error occurred. If no error occurred, it returns true, otherwise a message with the error is displayed to the user. This message is displayed through the second method "showMessage" that adds text to the UI using the "message" div in home.html.

JavaScript
function plantronicsGenericVerifyNoError(result)
{
    var parsed = JSON.parse(result.response);
    if (!parsed.isError) {
        return true;
    }
    else {
        //error condition, display the message
        showMessage(parsed.Err);
        return false;
    }
}
function showMessage(msg) {
    message.innerHTML += msg +"<br />";
}

Now we are ready to verify whether a Plantronics headset is available on the machine. To do this, we will query the DeviceList function of the REST API and parse its result. If a headset is available, we turn the indicator green; otherwise we change it to red and display a message to the user. To accomplish this, add the following code to home.js:

JavaScript
var deviceUid = null;

function verifyDevice()
{
    indicator.style.backgroundColor = "gray";
    showMessage("Verifying device...")
    var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/DeviceList";
    WinJS.xhr({ url: uri }).then(parseDevices, function (e)
        { showMessage(e.response); });
}

function parseDevices(result)
{
    var noError = plantronicsGenericVerifyNoError(result);

    if (noError)
    {
        var parsed = JSON.parse(result.response);
        var deviceArray = parsed.Result;
        if (deviceArray.length > 0) {
            deviceUid = deviceArray[0].Uid;
            indicator.style.backgroundColor = "green";
            showMessage("Establishing session with connected device...");
            establishDeviceSession();
        }
    }
    else {
        indicator.style.backgroundColor = "red";
    }
}

Once we have determined that a device is in fact connected, we assign the device's unique id to the variable deviceUid. This variable will be used to establish a session with the REST service's Session Manager. We will now implement the establishDeviceSession method, which is called once we have the device's unique id. To do this, add the following code:

JavaScript
var sessionId = null;
var pollHardware = false;

function establishDeviceSession()
{
    var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/"
                    + deviceUid + "/Attach";
    WinJS.xhr({ url: uri }).then(getSessionId, function (e) {
            showMessage(e.response);
    });
}

function getSessionId(result)
{
    var noError = plantronicsGenericVerifyNoError(result);

    if(noError)
    {
        var parsed = JSON.parse(result.response);
        sessionId = parsed.Result;

        showMessage("Session with connected device established");

        //poll for the button pressed hardware event on the device
        showMessage("Begin Hardware Button Polling...");
        pollHardware = true;
        pollHardwareButtonPressedQueue();
    }

}

This code introduces a couple of new variables. One holds the session id used when polling the REST API for headset hardware events. The other is a Boolean that determines whether hardware polling should continue. We need this second variable because there are two ways of initiating dictation: one uses the button on the UI, the other uses the Call button on the Plantronics headset. While dictation is occurring, we want to turn off event polling to the device. As you can see from the code above, once we have a session id we are able to start polling for hardware events. We will now implement the "pollHardwareButtonPressedQueue" method as follows:

JavaScript
var noCacheHeader = { "If-Modified-Since": "Mon, 27 Mar 1972 00:00:00 GMT" };
function pollHardwareButtonPressedQueue()
{
    if (pollHardware) {
        setTimeout(function () {
            var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/"
                        + sessionId + "/Events?queue=127";
            WinJS.xhr({ url: uri, headers: noCacheHeader })
                .then(checkHardwareButtonPressedQueue,
                function (e) { showMessage(e.response); });
        }, 300);
    }
}

function checkHardwareButtonPressedQueue(result)
{
    var noError = plantronicsGenericVerifyNoError(result);
    if(noError)
    {
        var parsed = JSON.parse(result.response);
        var queueArray = parsed.Result;
        if (queueArray.length > 0) {

            if(queueArray[0].Event_Name=="Talk")
            {
                //verify audio is on
                showMessage("Hardware Button Pressed: Talk Event Received"+
                            "- Hardware Button Polling Ended");
                pollHardware = false;
                verifyAudioStateOn();
                return;
            }
        }
    }
    pollHardwareButtonPressedQueue();
}

From this code, you can see that polling for headset events continues indefinitely until a "Talk" event is received. Note that because the polling happens so frequently, we also need to include a no-cache header in our call to the Events queue of the REST service. This ensures that a physical call is made to the REST service each time. Once a Talk event is received, we are ready to make sure our microphone is turned on and ready to receive the dictation. Implement the verifyAudioStateOn function as follows:

JavaScript
function verifyAudioStateOn()
{
    var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/"
                + sessionId + "/AudioState?state=1";
    WinJS.xhr({ url: uri }).then(checkAudioStateOn,
        function (e) { showMessage(e.response); });
}

function checkAudioStateOn(result) {
    var noError = plantronicsGenericVerifyNoError(result);
    if (noError) {
        var parsed = JSON.parse(result.response);
        if (parsed.Result)
        {
            showMessage("Audio State is on - Begin Dictation");
        }
        beginDictation();
    }
}
function beginDictation() {

}

For now, we will keep the beginDictation method empty; we will fill it in when we implement Bing Speech Recognition. Lastly, to kick off the interaction with the Plantronics headset, call the verifyDevice method from within the ready function of your page. We've also gone ahead and wired up the dictation button from the UI. Replace the define function in home.js with the following:

JavaScript
WinJS.UI.Pages.define("/pages/home/home.html", {
    // This function is called whenever a user navigates to this page. It
    // populates the page elements with the app's data.
    ready: function (element, options) {

     btnDictate.addEventListener("click", dictationButtonPressed, false);

        //verify plantronics device is connected
        verifyDevice();
    }
});

function dictationButtonPressed()
{
    showMessage("On-Screen Button Pressed - End Hardware Button Polling...")
    pollHardware = false; //turn off hardware polling
    verifyAudioStateOn();
}

Implementing Bing Speech Recognition

We will need to add a couple of references to the project in order to use the Bing Speech Recognition Control. Right-click References in the Solution Explorer and ensure "Bing.Speech" and the "Microsoft Visual C++ 2013 Runtime Package for Windows" are selected.

Image 7

You will also need to change the build target from "Any CPU" to either x86 or x64 depending on your preference.

Image 8

Now that this is done, we are able to add our speech recognition control to our UI. Open "pages/home/home.html" and add the following to the head section of the document:

<link href="http://www.codeproject.com/Bing.Speech/css/voiceuicontrol.css" rel="stylesheet" />
<script src="http://www.codeproject.com/Bing.Speech/js/voiceuicontrol.js"></script>

After the message div in the same file, add the speech recognition control with the following markup:

<div id="SpeechControl"
     data-win-control="BingWinJS.SpeechRecognizerUx"></div>

Return to "pages/home/home.js" and add the following variables to contain your Bing service account information:

//Bing Service Account Info
var bingAccountInfo = new Bing.Speech.SpeechAuthorizationParameters();
bingAccountInfo.clientId = "[ENTER YOUR CLIENT ID]";
bingAccountInfo.clientSecret = "[ENTER YOUR CLIENT SECRET]";

It is useful to give users tips on getting the best results from the Speech Recognition control. To pre-populate some helpful hints, add the following code to the ready function in home.js:

SpeechControl.winControl.tips = new Array(
         "For more accurate results, try using a headset microphone.",
         "Speak with a consistent volume.",
         "Speak in a natural rhythm with clear consonants.",
         "Speak with a slow to moderate tempo.",
         "Background noise may interfere with accurate speech recognition."
         );

We are now ready to implement the beginDictation method. Replace our empty function stub with the following code:

JavaScript
function beginDictation() {
    var sr = new Bing.Speech.SpeechRecognizer("en-us", bingAccountInfo);
    SpeechControl.winControl.speechRecognizer = sr;

    //dictation
    sr.recognizeSpeechToTextAsync()
        .then(
            function (result) {
                if (typeof (result.text) == "string") {
                    ResultText.innerHTML = result.text;
                    showMessage("Dictation Ended - "+
                        "Resuming Hardware Button Polling");
                    pollHardware = true;
                    pollHardwareButtonPressedQueue();
                }
                else {
                    // Handle quiet or unclear speech here.
                }
            },
            function (error) {
                showMessage(error);
            })
}

In this code, we put the Bing Speech Recognition Control to work. It interprets the words spoken by the user and displays them on the UI using the "ResultText" div. Once dictation has ended, we resume polling for headset hardware events.
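
The else branch above is left empty. One simple option, sketched below under the assumption that you just want to tell the user and let them try again, is to show a message and resume hardware polling so the headset Talk button keeps working; it uses only the showMessage and pollHardwareButtonPressedQueue helpers defined earlier in this article, and the message wording is illustrative.

JavaScript
function handleUnclearSpeech() {
    //illustrative sketch: one way to fill in the empty "quiet or unclear speech" branch
    showMessage("No speech was recognized - please try dictating again.");
    pollHardware = true; //allow the headset Talk button to restart dictation
    pollHardwareButtonPressedQueue();
}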

Text Messaging with Twilio

As with the Plantronics headset, we will use a REST API to interact with Twilio. The text currently displayed in the ResultText div (the latest text dictated by the user) will be the content of the text message. To implement the text messaging functionality, add the following code to home.js:

//Twilio Account Information
var twilioAccountSid = "[ENTER YOUR TWILIO ACCOUNT SID]";
var twilioAuthToken = "[ENTER YOUR TWILIO AUTH TOKEN]";
//phone number in the format "+1##########" in the U.S.
var twilioPhoneNumber = "[ENTER YOUR TWILIO PHONE NUMBER]";
var textToPhoneNumber = "[ENTER YOUR REGISTRATION MOBILE #]";

function sendTwilioSms()
{
    var messageBody = ResultText.innerText.trim();
    if (messageBody.length > 0)
    {
        var paramsString = "To=" + textToPhoneNumber + "&From="
            + twilioPhoneNumber + "&Body=" + messageBody;

        var postData = {
            type: "post",
            user: twilioAccountSid,
            password: twilioAuthToken,
            url: "https://api.twilio.com/2010-04-01/Accounts/"
                    + twilioAccountSid + "/SMS/Messages",
            headers: { "Content-type": "application/x-www-form-urlencoded" },
            data: paramsString
        };
        showMessage("Sending SMS Message...");
        WinJS.xhr(postData).then(verifySMSStatus, smsError);
    }
    else
    {
        showMessage("No message to send via Twilio");
    }
}

function verifySMSStatus(result) {
    showMessage("Twilio SMS sent successfully");
}

function smsError(result) {
    showMessage("Error sending Twilio SMS message");
}
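
The verifySMSStatus callback above only reports that the HTTP request succeeded. If you would rather confirm what Twilio actually queued, the following is a sketch of one alternative; it assumes the POST url ends in "/SMS/Messages.json" so that Twilio returns JSON rather than XML, and it surfaces the message SID and status from that response.

JavaScript
function verifySMSStatus(result) {
    //sketch only: parse the JSON body returned when posting to the .json resource
    try {
        var sms = JSON.parse(result.response);
        showMessage("Twilio SMS " + sms.sid + " accepted with status: " + sms.status);
    } catch (e) {
        showMessage("Twilio SMS sent, but the response could not be parsed");
    }
}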

Now we can hook up our UI button so that the user can initiate a text message. To do this, add the following code to the ready function of home.js:

btnSendSMS.addEventListener("click", sendTwilioSms, false);

Run the application and try it out!

Image 9

Summary

In this article, we combined some great technologies to bring text messaging functionality to a Windows Store application. We showed how to interact with Plantronics headset hardware to initiate dictation through a button press on the headset itself. We also showed how to use the Bing Speech Recognition Control and Service to interpret the words spoken by the user. Lastly, we used the Twilio service to send the text message to our phone.

As a convenience, here is a full listing of home.html and home.js.

home.html

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>home</title>

    <!-- WinJS references -->
    <link href="http://www.codeproject.com/Microsoft.WinJS.2.0/css/ui-light.css" rel="stylesheet" />
    <script src="http://www.codeproject.com/Microsoft.WinJS.2.0/js/base.js"></script>
    <script src="http://www.codeproject.com/Microsoft.WinJS.2.0/js/ui.js"></script>
    <link href="http://www.codeproject.com/Bing.Speech/css/voiceuicontrol.css" rel="stylesheet" />
    <script src="http://www.codeproject.com/Bing.Speech/js/voiceuicontrol.js"></script>
    <link href="home.css" rel="stylesheet" />
    <script src="home.js"></script>
</head>
<body>
    <div class="home fragment">
        <header aria-label="Header content" role="banner">
            <button data-win-control="WinJS.UI.BackButton"></button>
            <h1 class="titlearea win-type-ellipsis">
                <span class="pagetitle">Hello Plantronics</span>
            </h1>
        </header>
        <section aria-label="Main content" role="main">
            <div id="indicator" style="width:50px;height:50px;" ></div>
            
            <button id="btnDictate">Dictate Message</button>
            <button id="btnSendSMS">Send SMS Message via Twilio</button>

           
            <div id="ResultText" style="background-color:goldenrod"></div>

            <div id="message"></div>


            <div id="SpeechControl"
                 data-win-control="BingWinJS.SpeechRecognizerUx"></div>
        </section>
    </div>
</body>
</html>

home.js

(function () {
    "use strict";

    var deviceUid = null;
    var sessionId = null;
    var pollHardware = false;

    //Bing Service Account Info
    var bingAccountInfo = new Bing.Speech.SpeechAuthorizationParameters();
    bingAccountInfo.clientId = "[ENTER CLIENT ID]";
    bingAccountInfo.clientSecret = "[ENTER CLIENT SECRET]";

    //Twilio Account Information 
    var twilioAccountSid = "[ENTER ACCOUNT SID]";
    var twilioAuthToken = "[ENTER AUTH TOKEN]";
    //phone number in the format "+1##########" in the U.S.
    var twilioPhoneNumber = "[ENTER TWILIO PHONE #]";
    var textToPhoneNumber = "[ENTER REGISTRATION MOBILE PHONE #]";

    //WinJS xhr no-cache header
    var noCacheHeader = { "If-Modified-Since": "Mon, 27 Mar 1972 00:00:00 GMT" };
   
    WinJS.UI.Pages.define("/pages/home/home.html", {
        // This function is called whenever a user navigates to this page. It
        // populates the page elements with the app's data.
        ready: function (element, options) {

            btnDictate.addEventListener("click", dictationButtonPressed, false);
            btnSendSMS.addEventListener("click", sendTwilioSms, false);

           SpeechControl.winControl.tips = new Array(
                    "For more accurate results, try using a headset microphone.",
                    "Speak with a consistent volume.",
                    "Speak in a natural rhythm with clear consonants.",
                    "Speak with a slow to moderate tempo.",
                    "Background noise may interfere with accurate speech recognition."
                    );

            //verify plantronics device is connected
            verifyDevice();
        }
    });

    function beginDictation() {
        var sr = new Bing.Speech.SpeechRecognizer("en-us", bingAccountInfo);
        SpeechControl.winControl.speechRecognizer = sr;

        //dictation
        sr.recognizeSpeechToTextAsync()
            .then(
                function (result) {
                    if (typeof (result.text) == "string") {
                        ResultText.innerHTML = result.text;
                        showMessage("Dictation Ended - "+
                            "Resuming Hardware Button Polling");
                        pollHardware = true;
                        pollHardwareButtonPressedQueue();
                    }
                    else {
                        // Handle quiet or unclear speech here.
                    }
                },
                function (error) {
                    // Put error handling here.
                    showMessage(error);
                })
    }

    function dictationButtonPressed()
    {
        showMessage("On-Screen Button Pressed - End Hardware Button Polling...")
        pollHardware = false; //turn off hardware polling
        verifyAudioStateOn();
    }

    function showMessage(msg) {
        message.innerHTML = message.innerHTML + msg +"<br />";
    }

    /* PLANTRONICS HARDWARE SPECIFIC FUNCTIONS */
    function verifyDevice()
    {
        indicator.style.backgroundColor = "gray";
        showMessage("Verifying device...")
        var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/DeviceList";
        WinJS.xhr({ url: uri }).then(parseDevices, function (e)
            { showMessage(e.response); });
    }

    function parseDevices(result)
    {
        var noError = plantronicsGenericVerifyNoError(result);
        
        if (noError)
        {
            var parsed = JSON.parse(result.response);
            var deviceArray = parsed.Result;
            if (deviceArray.length > 0) {
                deviceUid = deviceArray[0].Uid;
                indicator.style.backgroundColor = "green";
                showMessage("Establishing session with connected device...");
                establishDeviceSession();
            }
        }
        else {
            indicator.style.backgroundColor = "red";
        }
    }

    function establishDeviceSession()
    {
        var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/"
                        + deviceUid + "/Attach";
        WinJS.xhr({ url: uri }).then(getSessionId, function (e) {
                showMessage(e.response);
        });
    }

    function getSessionId(result)
    {
        var noError = plantronicsGenericVerifyNoError(result);
        
        if(noError)
        {
            var parsed = JSON.parse(result.response);
            sessionId = parsed.Result;
            
            showMessage("Session with connected device established");

            //poll for the button pressed hardware event on the device
            showMessage("Begin Hardware Button Polling...");
            pollHardware = true;
            pollHardwareButtonPressedQueue();
        }
      
    }
          
    function pollHardwareButtonPressedQueue()
    {
        if (pollHardware) {
            setTimeout(function () {
                var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/"
                            + sessionId + "/Events?queue=127";
                WinJS.xhr({ url: uri, headers: noCacheHeader })
                    .then(checkHardwareButtonPressedQueue,
                    function (e) { showMessage(e.response); });
            }, 300);
        }
    }

    function checkHardwareButtonPressedQueue(result)
    {
        var noError = plantronicsGenericVerifyNoError(result);
        if(noError)
        {
            var parsed = JSON.parse(result.response);
            var queueArray = parsed.Result;
            if (queueArray.length > 0) {
                                
                if(queueArray[0].Event_Name=="Talk")
                {
                    //verify audio is on
                    showMessage("Hardware Button Pressed: Talk Event Received"+
                                "- Hardware Button Polling Ended");
                    pollHardware = false;
                    verifyAudioStateOn();
                    return;
                }
            }
        }
        pollHardwareButtonPressedQueue();
    }

    function verifyAudioStateOn()
    {
        var uri = "http://127.0.0.1:32001/Spokes/DeviceServices/"
                    + sessionId + "/AudioState?state=1";
        WinJS.xhr({ url: uri }).then(checkAudioStateOn,
            function (e) { showMessage(e.response); });
    }

    function checkAudioStateOn(result) {
        var noError = plantronicsGenericVerifyNoError(result);
        if (noError) {
            var parsed = JSON.parse(result.response);
            if (parsed.Result)
            {
                showMessage("Audio State is on - Begin Dictation");
            }
            beginDictation();
        }
    }
 
    function plantronicsGenericVerifyNoError(result)
    {
        var parsed = JSON.parse(result.response);
        if (!parsed.isError) {
            return true;
        }
        else {
            //error condition, display the message
            showMessage(parsed.Err);
            return false;
        }
    }

    /* Twilio SMS Functions */
    function sendTwilioSms()
    {
        var messageBody = ResultText.innerText.trim();
        if (messageBody.length > 0)
        {            
            var paramsString = "To=" + textToPhoneNumber + "&From="
                + twilioPhoneNumber + "&Body=" + messageBody;

            var postData = {
                type: "post",
                user: twilioAccountSid,
                password: twilioAuthToken,
                url: "https://api.twilio.com/2010-04-01/Accounts/"
                        + twilioAccountSid + "/SMS/Messages",
                headers: { "Content-type": "application/x-www-form-urlencoded" },
                data: paramsString
            };
            showMessage("Sending SMS Message...");
            WinJS.xhr(postData).then(verifySMSStatus, smsError);
        }
        else
        {
            showMessage("No message to send via Twilio");
        }
    }

    function verifySMSStatus(result) {
        showMessage("Twilio SMS sent successfully");
    }

    function smsError(result) {
        showMessage("Error sending Twilio SMS message");
    }

})();

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Program Manager
United States
Jeff Fritz is a senior program manager in Microsoft’s Developer Division working on the .NET Community Team. As a long time web developer and application architect with experience in large and small applications across a variety of verticals, he knows how to build for performance and practicality. Four days a week, you can catch Jeff hosting a live video stream called 'Fritz and Friends' at twitch.tv/csharpfritz. You can also learn from Jeff on WintellectNow and Pluralsight, follow him on twitter @csharpfritz, and read his blog at jeffreyfritz.com

Written By
Software Developer (Senior)
United States
Carey Payette is a Senior Software Engineer with Trillium Innovations, a Progress Developer Expert, as well as an ASPInsider. She has interests in IoT and is a member of the Maker community. Carey is also a wife, and mom to 3 fabulous boys. She is a 2nd degree black belt in TaeKwonDo and enjoys coding for fun!
