This is the unedited abstract from the official EC work programme, published 16th Dec 2002 (page 17, paragraph 2.3.1.6). The color highlighting of keywords is NOT in the original document. The full text is here.
Objective: To develop natural and adaptive multimodal interfaces that respond intelligently to speech and language, vision, gesture, haptics and other senses.

Focus is on:
- Interaction between and among humans and the virtual and physical environment, through intuitive multimodal interfaces that are autonomous and capable of learning and adapting to the user environment in dynamically changing contexts. They should recognise emotive user reaction and feature robust dialogue capability with unconstrained speech and language input.
- Multilingual systems facilitating translation for unrestricted domains, especially for spontaneous or ill-formed (speech) inputs, in task-oriented settings.

Work can span from basic research in areas such as machine learning and accurate vision and gesture tracking, to system-level integration with proof of concept in challenging application domains, including wearable interfaces and smart clothes, intelligent rooms and interfaces for collaborative working tools, and cross-cultural communications.

IPs (Integrated Projects) are expected to address the objectives within a holistic approach, enabling, where justified, competition within and across projects. NoEs (Networks of Excellence) should aim at lowering barriers between hitherto split communities and disciplines and at advancing knowledge in the field. They should help establish and reinforce shared infrastructures, including for training and evaluation, annotation standards, and appropriate usability metrics and benchmarks. STREPs (Specific Targeted Research Projects) are expected to bootstrap research in identifiable or emerging sub-domains and to prepare the associated communities.