Critical success factors in determining the ideal human-machine interaction (HMI) include context, convenience and enhancement of the overall user experience (UX). But not all current gesture use cases meet these basic needs.
A new report from the User Experience Strategies Service (UXS) at Strategy Analytics has evaluated both broad and specific developments in gesture control. Advances in technologies such as AI and IoT are creating future use cases for gestures beyond consumer applications; by enabling even the simplest gestures to be recognized reliably, they promise greater user engagement and immersion. Key report findings include:
• The adoption of gesture technology is increasing, and it is now being applied to a range of personal devices such as smartphones and tablets, smart home devices and in-vehicle systems. But barriers to adoption remain: steep learning curves, the need to recall specific gestures at specific times, and frustration with the accuracy of gesture recognition.
• Gesture controls cannot replace all other HMIs – text entry being one example – but instead work to complement them. Use cases for gesture control must enhance the user experience alongside other HMI features, such as touch or voice, available on the same device.
• Advancements in AI and IoT are creating future use cases for gesture control, which will enable more natural user interfaces. Broader use cases include: digital signage in retail, complex robotic systems and drones, smart appliances, medical applications and mixed-reality environments.
“Ultimately, gesture control needs to be valued as more than a gimmick. Any use cases developed for this HMI need to demonstrate value in how they enhance the user journey while still being intuitive,” commented Diane O’Neill, Director, UXIP, and report author. “At present, the greatest strength of gesture is to complement other HMIs. Working in conjunction with touch or voice control, gesture has the ability to provide the missing piece in the quest for a seamless user experience.”