SIMILAR Presents
Call for Papers
February 23, 2004

IEEE Signal Processing Magazine Special Issue on Speech Technology and Systems in Human-Machine Communication

Over the past two decades, significant progress has been made in advancing speech technologies for multimodal/multimedia and human-machine communication. As the information age continues, research in speech technologies is further accelerated by the advance of powerful computing devices, data-driven pattern recognition methods, and the need to generate machine-understandable metadata for web contents and other information sources. Although various speech systems have been built and applied in numerous applications, the full potential of speech technologies in multimodal/multimedia communication still remains to be uncovered. This special issue aims to fill the need for a comprehensive review of new approaches and advances in speech technologies under the broad perspective of intelligent human-machine communication. Speech technologies and systems touch upon many essential signal processing techniques, and they are at the core of multimodal/multimedia communication research. It is hoped that such a systematic and up-to-date overview of the field, including tutorials on well-established and new techniques, will bring the awareness and application of speech technologies closer to the general signal processing community. Review papers are solicited from the following non-exhaustive list of topics. The emphasis is on recent advances in current technologies and directions for future research. Other related work is also welcome.

Scope of Topics:

+ Novel speech recognition techniques and systems

+ Novel speech understanding techniques and approaches in language modeling

+ Dialogue system design and architecture

+ Spoken document retrieval systems and their underlying technologies

+ Speech technologies in multimodal interaction and multimedia communication

+ Novel signal processing techniques for robust speech recognition (e.g., distant talker speech recognition)

+ Novel techniques for audio-visual and multi-sensor, multimodal speech recognition

+ Facial animation synthesis and recognition

Submission Procedure:

Prospective authors should submit their white papers to the web submission system at http://www.cspl.umd.edu/spm/, according to the following timetable. The desired length of white papers is 5 to 10 pages.

White paper due: May 1, 2004
Invitation notification: June 1, 2004
Manuscript due: September 1, 2004
Acceptance notification: December 1, 2004
Final manuscript due: January 15, 2005
Publication date: May 2005

Guest Editors:

Li Deng, Microsoft Research, Redmond, WA 98052, USA, deng@microsoft.com

Kuansan Wang, Microsoft Research, Redmond, WA 98052, USA, kuansanw@microsoft.com

Wu Chou, Avaya Labs Research, Rm. 2D34, 233 Mt. Airy Rd, Basking Ridge, NJ 07920, wuchou@avaya.com