sOc-EUSAI'2005 conference

Regular session - Smart environments and multimodal interaction
O3-1 Levels of Interaction Allowing Humans to Command, Interrogate and Teach a Communicating Object: Lessons Learned From Two Robotic Platforms
Dominey, Peter Ford - Weitzenfeld, Alfredo
As formatted for the printed proceedings - 55.ps - 55.pdf - pages 135-140
As delivered by the authors - 55_pdf_file.pdf
Abstract :
As robotic systems become increasingly capable of complex sensory, motor and information processing functions, the ability to interact with them in an ergonomic, real-time and adaptive manner becomes an increasingly pressing concern. In this context, the physical characteristics of the robotic device should become less of a direct concern, with the device being treated as a system that receives information, acts on that information, and produces information. Once the input and output protocols for a given system are well established, humans should be able to interact with these systems via a standardized spoken language interface that can be tailored if necessary to the specific system.
The objective of this research is to develop a generalized approach for human-machine interaction via spoken language that allows interaction at three levels. The first level is that of commanding or directing the behavior of the system. The second level is that of interrogating or requesting an explanation from the system. The third and most advanced level is that of teaching the machine a new form of behavior. The mapping between sentences and meanings in these interactions is guided by a neuropsychologically inspired model of grammatical construction processing. We explore these three levels of communication on two distinct robotic platforms. The novelty of this work lies in the use of the construction grammar formalism for binding language to meaning extracted from video in a generative and productive manner, and in thus allowing the human to use language to command, interrogate and modify the behavior of the robotic systems.


O3-2 Augmenting Everyday Life with Sentient Artefacts
Kawsar, Fahim - Fujinami, Kaori - Nakajima, Tatsuo
As formatted for the printed proceedings - 40.ps - 40.pdf - pages 141-146
As delivered by the authors - 40_pdf_file.pdf
Abstract :
The paper introduces sentient artefacts: everyday objects augmented with sensors to provide value-added services. Such artefacts can capture users' context in an intuitive way, as they do not require any explicit interaction. They enable us to develop context-aware applications by capturing everyday scenarios effectively. In the paper we present a daily-life scenario and then demonstrate how such scenarios can be implemented effectively using applications that integrate multiple sentient artefacts.


O3-3 Interacting with the Ubiquitous Computer - Towards Embedding Interaction
Schmidt, Albrecht - Kranz, Matthias - Holleis, Paul
As formatted for the printed proceedings - 64.ps - 64.pdf - pages 147-152
As delivered by the authors - 64_pdf_file.pdf
Abstract :
Computing and communication technology is widely used and integrated in devices, environments, and everyday objects. Even with major advances in technology, the vision of ubiquitous computing – from a user perspective – is not yet achieved. In this paper we look at new forms of interaction that will help users interact with the ubiquitous computer. In particular we introduce the concept of embedded interaction and implicit use. The focus of the research is on embedding information into people’s environments. Currently, massive amounts of information are available. However, delivering it to the user in a way that is pleasant and not annoying is still a challenge. Observing mobile phone information push services, it appears that endless information is available; however, much of the information is interesting only in a very specific context of use. We investigate how information can be provided to users – exactly when and where it is needed. Our approach is based on a variety of information displays unobtrusively embedded into the user’s everyday environment. We place the information displays in context. In contrast to the traditional approach to context-awareness, where a context is recognized and then the appropriate information is delivered, we look at providing information already in context. It is up to the user to make use of the provided information or not.


O3-4 Orchestrating Output Devices - Planning Multimedia Presentations for Home Entertainment with Ambient Intelligence
Elting, Christian
As formatted for the printed proceedings - 11.ps - 11.pdf - pages 153-158
As delivered by the authors - 11_pdf_file.pdf
Abstract :
In this paper we motivate the use of personalized multimedia presentations involving multiple output devices in a home entertainment environment. First we illustrate our vision and analyze the requirements. Afterwards we present the architecture of our system focusing on the output coordination strategy, which achieves a coordination of multiple output devices by means of an AI planning approach.
Then we present our prototype implementation, which generates movie-related multimedia presentations. The implementation consists of a TV set displaying an animated character, a PDA that acts as a remote control, and a 17” digital picture frame that displays pictures and renders speech. We conclude with an overview of related work.