Affiliation:
1. University of Lisbon, Portugal
Abstract
Current technology has made it possible for natural input systems to reach our homes, businesses, and learning sites. However, even though some of these systems are already commercially available, there is still a pressing need to better understand how people interact with them, given the wide array of contextual factors involved. This chapter presents two studies of how people interact with systems supporting gesture and speech on different interaction surfaces: one supporting touch, the other pointing. These studies identified the naturally occurring commands for both modalities and both surfaces. Furthermore, the studies show how the surfaces are used, and which modalities are employed, depending on factors such as the number of people collaborating on a task and the placement of objects appearing in the system, thus contributing to the future design of such systems.