Stan Kurkovsky, PhD

Multimodality in Mobile Computing

Published in 2009

Multimodality in Mobile Computing and Mobile Devices:
Methods for Adaptable Usability

A book edited by Dr. Stan Kurkovsky
Central Connecticut State University, USA

Introduction

Software applications or computing systems that combine multiple modalities of input and output are referred to as multimodal. For example, the Apple iPhone combines the capabilities of a traditional screen-and-keyboard interface, a touch interface, and a speech interface; software running on the iPhone should be able to take advantage of all three modalities of input/output. The objectives of multimodal systems are two-pronged: to achieve a kind of interaction that is closer to natural interpersonal human communication, and to improve the dependability of the interaction by employing complementary or redundant information. Generally, multimodal applications are more adaptable to the needs of different users in varying contexts. Multimodal applications also have a stronger acceptance potential because they can typically be accessed in more than one manner (e.g., via speech or a web interface) and by a broader range of users in a wider set of circumstances.

Objective of the book

Recognizing that mobile computing is one of the most rapidly growing areas of the software market, this book explores the role of multimodality and multimodal interfaces in mobile computing. Mobile computing has very strong potential due to the extremely high market penetration of mobile and smart phones, the high degree of user interest in and engagement with mobile applications, and an emerging trend of integrating traditional desktop and online systems with their mobile counterparts. Multimodal interfaces play a very important role in improving the accessibility of these applications, thereby leading to their increased acceptance by users.

Target audience

The target audience of this book comprises professionals and researchers in industry and academia working in the areas of mobile computing, human-computer interaction, and interface usability, as well as graduate and undergraduate students, engineers, and anyone interested in mobile computing and human-computer interaction.

Currently accepted chapter proposals

  • Multimodal and Multichannel Issues in Pervasive and Ubiquitous Computing
  • Multimodal Cues: Exploring Pause Intervals between Haptic/Audio Cues and Subsequent Speech Information
  • Human-Computer Interaction Container
  • A Formal Approach to the Verification of Adaptability Properties for Mobile Multimodal User Interfaces
  • Exploiting Multimodality for Intelligent Mobile Access to Pervasive Services in Cultural Heritage Sites
  • Designing Multimodal Mobile Applications
  • Platform Support for Multimodality on Mobile Devices
  • Towards Universal Access - Multimodal Mobile GIS for the Elderly
  • Simplifying the Multimodal Mobile User Experience
  • An Adapted User Interface on Mobile Phones Dedicated for Partially Sighted and Blind People
  • Intention-In-Action: The Role of Bodily Engagement in Multimodal Interaction Design
  • Adaptive Multimodal Presentation of Information: Report from two Studies
  • Multimodal Search on Mobile Devices
  • Automatic Signature Verification on Handheld Devices
  • Ubiquitous User Interfaces: Multimodal Adaptive Interaction for Smart Environments
  • Multilingual, Multimodal and Mobile Information Service Systems
  • Sustainable, Intelligent Space for Communal Interaction with Art and Culture
  • Usability Framework for the Design and Evaluation of Multimodal Interaction: Application to a Multimodal Mobile Phone

Editorial board

  • Yevgen Borodin, Stony Brook University (SUNY), USA
  • Matthieu Boussard, Alcatel-Lucent Bell Labs, France
  • Nick Bryan-Kinns, Queen Mary, University of London, UK
  • Maria Chiara Caschera, Institute of Research on Population and Social Policies, National Research Council, Italy
  • Pablo Cesar, National Research Institute for Mathematics and Computer Science, the Netherlands
  • Antonio Coronato, Institute for High Performance Computing and Networking, National Research Council, Italy
  • Giuseppe De Pietro, Institute for High Performance Computing and Networking, National Research Council, Italy
  • Daniel Doolan, Robert Gordon University, UK
  • Patrizia Grifoni, Institute of Research on Population and Social Policies, National Research Council, Italy
  • Giovanni Frattini, Engineering.IT S.p.A., Italy
  • Jaakko Hakulinen, University of Tampere, Finland
  • Michael Hellenschmidt, Fraunhofer Institute for Computer Graphics Research, Germany
  • Cristian Hesselman, Telematica Instituut, the Netherlands
  • Anthony Jameson, Fondazione Bruno Kessler, Italy
  • Samuel Joseph, University of Hawaii, USA
  • Ralf Kernchen, University of Surrey, UK
  • Marilyn McGee-Lennon, University of Glasgow, UK
  • Stefan Meissner, University of Surrey, UK
  • Nuria Oliver, Telefonica R&D, Spain
  • Shimei Pan, IBM Watson Research Center, USA
  • Thomas Pederson, Umeå University, Sweden
  • Markku Turunen, University of Tampere, Finland
  • Maria Uther, Brunel University, UK
  • Vladimir Zanev, Columbus State University, USA

Contact

Inquiries and submissions may be sent by e-mail to:
Dr. Stan Kurkovsky
Department of Computer Science
Central Connecticut State University
E-mail: kurkovskysta@ccsu.edu
Web: http://www.cs.ccsu.edu/~stan/research/multimodality/
Tel: 1-860-832-2720
Fax: 1-860-832-2712

This book is scheduled to be published by IGI Global (formerly Idea Group Inc.), publisher of the Information Science Reference (formerly Idea Group Reference) and Medical Information Science Reference imprints. For additional information regarding the publisher, please visit www.igi-global.com.