
Developing with Desktop Natural User Interface APIs for Developers

23 Oct 2012 · CPOL · 14 min read


While the evolution of computers and technology has progressed at a breakneck pace, user experience has been largely constrained by the available input devices. Since the introduction of the personal computer, we've seen the progression from the Command Line Interface, which depended on the keyboard, to the Graphical User Interface (GUI), which resulted from the introduction of the mouse, to the newest, the Natural User Interface (NUI), now that touch is available.

Image 1

Touch- and pen-based input has been available since the early 1990s (my first pen-based computer was a Windows* 3.1 machine in 1993), but it was cost-prohibitive and inconsistently available on the devices typically targeted by developers. Today that is no longer true. Smartphones, tablets, and now laptop computers, like Ultrabook™ devices, come standard with touch built in.

Image 2

Today, developers and designers must take these new features into account and build applications that are at least touch-aware, and ideally designed for a touch-first or touch-only experience. In this article we will explore how designing for touch and NUI impacts the design and build decisions involved in creating great software, and look at what it takes to migrate existing applications to a touch-enabled environment.

Considerations of multi-touch

When we consider touch, we need to understand its limitations as an input device. The mouse is a pixel-perfect selection device that lets a user move a pointer on the screen, with buttons that let the user click on those pixels, resulting in some specific action.

Touch has limitations of its own, such as the size of fingers and the type of screen technology that captures the touch action. To function effectively in this environment, we need to provide larger targets and more space between them. This necessarily limits how much we can put on a screen, and understanding the ergonomics of the device helps determine where to place the most commonly used controls.

For the Ultrabook, which includes a keyboard and a trackpad (similar to a mouse) as well as touch, we need to be aware of how users will typically perform their work. While a tablet might be completely detached from a keyboard, the Ultrabook uses touch alongside these devices: pinch and stretch gestures zoom and resize content, a simple swipe or sliding action pans content around, and a simple tap selects an action.

Image 3

The placement of controls on the page needs to reflect the expected usage patterns for the device. When the user needs to reach over the keyboard, the easiest touch targets are going to be near the edges of the screen, on the top and sides—good things to keep in mind when laying out controls.

Touch

Fortunately, in Windows the operating system takes care of responding to touch events and provides a translation to mouse activity that applications get for free. This includes translating taps to clicks, manipulations to click-and-drag, and so on. To make your application touch aware, you need to subscribe to the touch and manipulation events and handle them in your code.
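For example, in WPF you can subscribe to a touch event directly in code-behind. Below is a minimal sketch (the element name myCanvas is hypothetical); the manipulation events are covered in detail later in this article.

C#
// A minimal sketch: subscribing to a WPF touch event in code-behind.
// "myCanvas" is a hypothetical element name.
public MainWindow()
{
    InitializeComponent();

    myCanvas.TouchDown += (sender, e) =>
    {
        // Position of the finger relative to the canvas
        TouchPoint point = e.GetTouchPoint(myCanvas);
        // ...respond to the touch at point.Position...
        e.Handled = true;
    };
}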

One challenge in designing for touch is that controls typical of previous versions of Windows, such as ribbons and menus, are tuned for mouse and keyboard and can be difficult to use with touch due to their size and placement. For instance, if a menu relies on hover, the event that fires when the user moves the mouse pointer over a control, there is no corresponding touch event, rendering the menu useless in a touch-only environment.

While applications built for previous releases had varying levels of awareness of the possibility of touch, now that touch is a reality we need to adjust and work with the technologies at hand. Here are some of them, along with approaches for adapting them to a touch-enabled environment.

Buttons

Buttons respond to click events, so there is not much the developer needs to do to make them work with touch. However, hover is not supported, so rendering effects like highlighting a button on mouse-over will not be reflected with touch.

Image 4

The main consideration is to ensure that buttons are large enough to support touch: make them at least 23 x 23 px in size, and larger is better. Secondly, include enough margin between controls so that touching one isn't mistaken for an action on another.

Menus

Menus have been around since the beginning of GUIs, and before that in DOS, when a list of choices would be displayed for users to select from. The advantage of menus is that they provide a simple hierarchical way to organize the commands a user may need. The challenge is that as applications add functionality, the number of commands buried in menus becomes cumbersome.

In a traditional non-touch Windows application the standard distance between menu items is fairly small, but with touch you need to ensure there is more margin in order to minimize accidental selection of the wrong command.

In markup-based frameworks like Windows Presentation Foundation (WPF), you can override the menu's ControlTemplate to specify how you want it to look and behave.
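As a simpler illustration than a full ControlTemplate override, an implicit Style can add touch-friendly padding to every menu item. The sketch below assumes a Menu named mainMenu (a hypothetical name) and could run in the window's constructor:

C#
// A minimal sketch: an implicit Style that adds touch-friendly padding
// to every MenuItem inside a menu ("mainMenu" is a hypothetical name).
Style touchStyle = new Style(typeof(MenuItem));
touchStyle.Setters.Add(new Setter(MenuItem.PaddingProperty, new Thickness(12, 8, 12, 8)));
mainMenu.Resources.Add(typeof(MenuItem), touchStyle);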

Toolstrip

This control was introduced in WinForms as a way to add icons to represent actions in a program. To use it, the developer adds a ToolStrip control to their form and then adds controls to the container. Control types include buttons, combo boxes, text commands, and more. Typically the size of the icons is pretty small, and there’s not much margin between controls.

Image 5

In WinForms, as well as in WPF, you can increase the default size and margin between controls on the toolstrip by setting the ImageScalingSize of the ToolStrip, and then altering the margins between items. In our example later in this document, we’ll show you how to fix these issues.

Ribbon

The ribbon has been regarded as both a boon and a boondoggle since its introduction in Windows Vista* and Office 2007. The migration away from a menu-based environment to an all-graphical one with tabs was a large shift for many users when it was first introduced. The advantage of the ribbon is that it provides a rich organizational structure for how and where controls can be added, without being limited to a single line of icons. The size of the icons varies between small and large, and the margin between them is much more fluid and flexible. Today the ribbon is part of many core applications, including Windows Explorer, Paint, and more.

Image 6

Drop-Down Lists and ComboBoxes

Fortunately there's not much that needs to be done to support drop-down lists, as they respond to a tap by showing the selection list. As with menus, you should pay attention to the space between items in the list.

Platform Choices

Building an application involves technology decisions that should take the lifecycle of software into account. There are several options in Windows, each with pros and cons, that determine the languages and frameworks used to build apps.

These generally fall into three classes: markup-based technologies, which rely on HTML or XAML to describe an interface that is not proprietary to the tool used to create it; forms-based technologies, where developers use tools that provide a design surface for the interface and code files for the functionality; and native applications, where the code builds the interface at runtime.

Decisions on which technology to use are typically driven by the available talent pool. If you have Visual Basic* developers, you are likely developing with WinForms. If the application requires speed and performance, you will typically see native code such as C or C++ (although this is changing with the new Windows Runtime, or WinRT).

WinForms

Since the release of Visual Basic 3.0, Microsoft tools have supported the forms-based programming paradigm, where the developer begins with a form, adds controls to it, and then writes code to support the required behaviors. This approach, known in .NET as WinForms, uses a binary representation of the user interface but supports productivity features that make rapid development possible.

WinForms continues to be supported in Windows 8, and from the perspective of touch and the Ultrabook, the key step in making an application touch-enabled is to ensure that controls meet the minimum sizes and have margin between them.

The less-is-more maxim applies here, and it is best if the design/development team takes the time to re-address the user interface to reduce and remove complexity from the design.

An example – migrating a LOB application from WinForms to a touch-optimized UI

The proposed scenario is to take a typical line of business (LOB) application built for legacy versions of Windows and bring it to the Windows 8 touch-aware desktop. Using the Northwind scenario (download the database schema from http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=23654), we have an HR application that provides information about employees, including a picture. We will migrate the application from .NET 2.0 to the current framework and address issues along the way.

Start – WinForms Application in .NET 2.0

Below is a screen shot of the starting point of our journey. What issues do we find in this interface? At a minimum we need to make the application touchable, i.e., functional in a touch-only environment. Issues at this point include:

  • Toolbar buttons are too small
  • Not enough margin between controls
  • The spin control for the 2012 bonus
  • Hovering over the picture to get the current photo path
  • Drag and drop to update the employee picture

Image 7

Step 1 - Touchable

In this iteration of the project we are going to do what is minimally necessary to make the application work in a touch-only environment. This will include some code changes and some tweaking of the UI. First, we upgrade the application to the latest framework, .NET 4.5, to opt into the touch-aware and gesture-enabled features available from the OS. In doing this we not only get gestures and manipulations translated into mouse events, we also gain access to programmatic features added since the application was last updated, including new data access patterns, LINQ, support for the new asynchronous coding features, better threading, and much more that we will not go into here.

Next, we look at the controls that fall below the touchable guideline of 23 x 23 pixels (6 mm) and adjust properties as needed. This includes adjusting the rendered size of the toolbar images and adding larger margins between buttons.

To do this, we set the ToolStrip's ImageScalingSize to 23, 23 and then iterate through its collection of items, adding a Padding of 2, 2, 2, 2 to each one.
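In code, that looks something like the sketch below (the ToolStrip name mainToolStrip is hypothetical):

C#
// A minimal sketch: scale toolbar images up to the 23 x 23 px guideline
// and pad each item ("mainToolStrip" is a hypothetical name).
mainToolStrip.ImageScalingSize = new Size(23, 23);
foreach (ToolStripItem item in mainToolStrip.Items)
{
    item.Padding = new Padding(2, 2, 2, 2);
}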

Image 8

Next we replace the spin control with something that works for touch. There are some third-party controls you could look at, but standard buttons work just as well with a little bit of code.
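For example, the sketch below wires up two standard buttons in place of the spin control; the names bonusTextBox, upButton, and downButton, and the step of 100, are hypothetical:

C#
// A minimal sketch: standard buttons replacing the spin control
// (names and step size are hypothetical).
private void upButton_Click(object sender, EventArgs e)
{
    decimal bonus;
    decimal.TryParse(bonusTextBox.Text, out bonus);
    bonusTextBox.Text = (bonus + 100).ToString();
}

private void downButton_Click(object sender, EventArgs e)
{
    decimal bonus;
    decimal.TryParse(bonusTextBox.Text, out bonus);
    bonusTextBox.Text = (bonus - 100).ToString();
}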

Finally, we address the hover over the image so the image path is visible with touch. Again we have a couple of approaches, but both involve adding some code to a mouse down event (a touch or tap qualifies, as it is translated to a mouse down). Probably the simplest is to add a MouseDown event handler to the image control with code that makes photoPathTextBox visible. One line of code is all it takes.

C#
private void photoPictureBox_MouseDown(object sender, MouseEventArgs e)
{
    photoPathTextBox.Visible = true;
}

Now that our application is touchable, confirm it by running it on your Ultrabook. Notice that the icons on the toolbar are larger, that the Award button is highlighted in green, and that touching the image displays the image path.

At this point, the toolbar icons are larger with more margin, and we've replaced the spin control with something that is easily touchable. We would still likely need to consider the amount of information on the screen, but at least it is usable in a touch environment.

Image 9

The next steps include revisiting the content of the screen to prioritize and remove less relevant fields and see where we can simplify and reduce. That is the goal of the next phase of the application migration where we add the ribbon and move to a markup-based solution, in this case, WPF using XAML.

Touch Enabled

Next we look at what is involved to make the application “touch enabled.” As we defined earlier, this level of awareness includes making commonly used controls at least 40 x 40 pixels (10 mm) to handle standard gestures and basic multi-touch manipulations.

For our scenario we’ve chosen to integrate the ribbon to support the touch environment. So we’ll migrate the application to WPF and use the markup language features to make it easier to support touch-enabled requirements. For example, we can use Styles to target control types to ensure they are large enough and have sufficient margin.
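As a sketch of that idea, an implicit application-wide Style (registered during startup, for example in App's OnStartup) can enforce the 40 x 40 pixel minimum and add margin for every Button; the exact values shown are illustrative:

C#
// A minimal sketch: an application-wide implicit Style that enforces the
// 40 x 40 px touch-enabled minimum and adds margin for every Button.
Style touchButtons = new Style(typeof(Button));
touchButtons.Setters.Add(new Setter(Button.MinWidthProperty, 40.0));
touchButtons.Setters.Add(new Setter(Button.MinHeightProperty, 40.0));
touchButtons.Setters.Add(new Setter(Button.MarginProperty, new Thickness(5)));
Application.Current.Resources.Add(typeof(Button), touchButtons);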

After adding a dataset to connect to the database and laying out the controls on the Employee page we end up with something that looks like this:

Image 10

Windows 8 and WPF 4.5 include the ribbon as a first-class control, but you have to add it to the toolbox by using "Choose Items" and selecting the relevant ribbon items.

Markup-based applications – XAML

Since the introduction of .NET 4.0, touch-aware controls and events that translate touch into mouse behaviors have made enabling touch in applications much easier. For instance, ListBoxes with scrollable regions will receive and handle a panning gesture.

To make your application light up, you can make controls aware of manipulations by setting the IsManipulationEnabled property to True. Then, on the window itself, you add event handlers for ManipulationStarting and ManipulationDelta. For even greater realism you can enable inertial effects by adding an event handler for ManipulationInertiaStarting and, in the handler, adding code to simulate friction or gravity.

For example in a basic WPF application you might have the following XAML markup:

<Window x:Class="WPFSimpleTouch.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
        ManipulationStarting="Window_ManipulationStarting_1"
        ManipulationDelta="Window_ManipulationDelta_1"
        ManipulationInertiaStarting="Window_ManipulationInertiaStarting_1"
        >
    <Grid>
        <Image IsManipulationEnabled="True" Source="myImage.png" Height="100" />
    </Grid>
</Window>

This simplified markup for a WPF application includes a simple grid control with an image placed within it. A rendering of the resulting application is below (assuming myImage.png is the Intel logo and has been included in the root of the application source code).

Image 11

In the declaration of the window, we used XAML to wire up the three manipulation and inertia events: ManipulationStarting, where we specify the container element for the manipulation; ManipulationDelta, which fires repeatedly during a gesture; and ManipulationInertiaStarting, where we can add logic to simulate friction and gravity for a dampening effect when the user ends a gesture. A basic implementation is below:

C#
private void Window_ManipulationStarting_1(object sender, ManipulationStartingEventArgs e)
{
    e.ManipulationContainer = this;
    e.Handled = true;
}

private void Window_ManipulationDelta_1(object sender, ManipulationDeltaEventArgs e)
{
    Image imgToMove = e.OriginalSource as Image;
    Matrix myMatrix = ((MatrixTransform)imgToMove.RenderTransform).Matrix;

    myMatrix.RotateAt(e.DeltaManipulation.Rotation,
        e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
    myMatrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
        e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
    myMatrix.Translate(e.DeltaManipulation.Translation.X,
        e.DeltaManipulation.Translation.Y);

    imgToMove.RenderTransform = new MatrixTransform(myMatrix);

    Rect myRec = new Rect(((FrameworkElement)e.ManipulationContainer).RenderSize);
    Rect myBounds = imgToMove.RenderTransform.TransformBounds(new Rect(imgToMove.RenderSize));
    if (e.IsInertial && !myRec.Contains(myBounds))
        e.Complete();

    e.Handled = true;
}

private void Window_ManipulationInertiaStarting_1(object sender, ManipulationInertiaStartingEventArgs e)
{
    e.TranslationBehavior.DesiredDeceleration = 1.0 * 96.0 / (1000.0 * 1000.0);
    e.ExpansionBehavior.DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0);
    e.RotationBehavior.DesiredDeceleration = 720 / (1000.0 * 1000.0);

    e.Handled = true;
}

The resulting application lets the user manipulate the image on the screen with touch gestures like touch-and-drag and pinch-and-zoom. You can download the sample code from http://bit.ly/bqtwpfst.

Image 12

Native Applications

In the native application class of software, the developer is typically working with a low-level language like C or C++ and crafting code that needs to be aware of the events firing at that level. For an application to be touch aware, you register the application to receive Windows Touch messages and then add code to handle those messages. By default your application will receive WM_GESTURE messages, but if you call RegisterTouchWindow, you will receive the lower-level WM_TOUCH input instead.

C++
BOOL InitInstance(HINSTANCE hInstance, int nCmdShow)
{
    HWND hWnd;

    ...
    RegisterTouchWindow(hWnd, 0);
    ...
}

Your application handles the WM_TOUCH messages directly, but unlike the mouse, touch generates only a single WM_TOUCH message; there is no WM_TOUCH_DOWN, WM_TOUCH_MOVE, or WM_TOUCH_UP. This differs from managed code, where the raw touch events are combined to simulate mouse events. You handle WM_TOUCH messages in the WndProc method, where you can call GetTouchInputInfo to get the status of the touched points. This returns an array of TOUCHINPUT structures, each of which represents one finger's touch point.

Gestures are handled similarly, in that you handle the WM_GESTURENOTIFY message to tell the operating system which gestures you want to enable in your program. By default, all gestures other than rotation and simple single-finger panning are enabled. After registering the gestures you want to support, you handle the WM_GESTURE messages.

The MSDN* sample http://msdn.microsoft.com/en-us/library/dd562174(VS.85).aspx demonstrates how to register for touch and gestures and shows how inertia can add to the behavior of the touch environment.

Summary

Building touch-aware applications is a straightforward task, and the amount the developer needs to learn to support it is not that great. At a minimum, upgrade your application to the new frameworks where possible, pay attention to the sizing and spacing of controls, and light up new features available on the Ultrabook by registering for and handling the touch events.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

