In recent years, advances in sensing, computer graphics, and display technologies have been transforming our living and working environments into windows connecting the physical and the virtual world. In this context, alternative input modalities have emerged as an attractive route to more natural and intuitive human-computer interaction in less constrained environments.
Most research in this area focuses on gesture recognition and explores two main issues: acquiring an optimal set of gesture features and developing a gesture model to recognize the type of input gesture. In this workpackage, we take a more holistic approach to the development of gesture interfaces by combining different sensors and by placing emphasis on the extensibility of the framework.
The Interaction Toolbox is a novel framework for the design of gesture interfaces that supports both gesture recognition and the convenient addition of new gestures. For gesture acquisition, we combine visual sensors and body sensors to derive robust features for a wide range of gestures. The toolbox provides novel input devices (mWire, mCube, and mRing) and uses machine learning techniques for pattern matching and reasoning. To demonstrate the versatility of the system, we have developed several prototype applications: digital photo sorting on a large-screen display, a motion training system, and gestural interactions in a museum environment.
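The text does not specify which machine learning technique the toolbox uses, so the following is only an illustrative sketch of the general idea: features from the visual and body sensors are fused into one descriptor, and an input gesture is matched against stored templates. The function names (fuse_features, classify) and the nearest-neighbour matching scheme are assumptions for illustration, not the toolbox's actual implementation. Note how the extensibility goal is reflected in the design: adding a new gesture is just registering another labelled template.

```python
import math

def fuse_features(visual, inertial):
    # Concatenate visual-sensor and body-sensor feature vectors into one
    # descriptor. (Hypothetical features: e.g. hand-trajectory statistics
    # from a camera plus accelerometer readings from a wearable like mRing.)
    return list(visual) + list(inertial)

def classify(sample, templates):
    # Nearest-neighbour template matching: return the label of the stored
    # gesture template with the smallest Euclidean distance to the sample.
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        d = math.dist(sample, template)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Registering gestures is just adding labelled templates (toy values):
templates = {
    "swipe":  fuse_features([0.9, 0.1], [0.2, 0.0]),
    "circle": fuse_features([0.1, 0.8], [0.7, 0.3]),
}
print(classify(fuse_features([0.85, 0.15], [0.25, 0.05]), templates))  # → swipe
```

A real system would replace the toy feature vectors with learned or hand-crafted descriptors and the 1-nearest-neighbour rule with a trained classifier, but the fusion-then-match structure stays the same.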
Application: Large-screen Interactions
Application: Motion Training System
Application: Intelligent Environment
March 31, 2012 04:21:40