Emile AARTS
Prof.dr. Emile Aarts is Vice President and Scientific Program Director of
the Philips Research Laboratories Eindhoven, The Netherlands. He holds MSc
and PhD degrees in physics. For almost twenty years he has been active as a
research scientist in computing science. Since 1991 he has held a teaching
position at the Eindhoven University of Technology as a part-time professor
of computing science. He also serves on numerous scientific and governmental
advisory boards, and holds a part-time position as senior consultant with
the Center for Quantitative Methods in Eindhoven, The Netherlands. Emile
Aarts is the author of five books and more than one hundred and forty
scientific papers on a diversity of subjects including nuclear physics, VLSI
design, combinatorial optimization, and neural networks. In 1998 he launched
the concept of Ambient Intelligence, and in 2001 he founded Philips' HomeLab.
His current research interests include embedded systems and interaction
technology.
Abstract of the talk: "Ambient Intelligence: Visualising the Future"
Ambient Intelligence systems aim to make user-system interaction and content
consumption a truly positive experience. The endless search for nifty
information-visualisation mechanisms to squeeze yet one more piece of
information onto a visual display is surpassed by the challenge of embedding
interactive displays into our environments that deliver a true user
experience. Examples of experiences supported by immersiveness, social
intelligence, and freedom have been investigated in the Philips HomeLab.
HomeLab offers a unique scientific environment for evaluating the feasibility
and usability of the technologies used in the realisation of Ambient
Intelligence scenarios. Equipped with an extensive observation infrastructure
of 34 cameras and microphones, the HomeLab has enabled behavioural researchers
to study the effect of innovative technologies on users' acceptance of
Ambient Intelligence. In the presentation we discuss recent developments
resulting from our work in HomeLab, with an emphasis on the relation between
(information) visualisation and experiences.
Download the PDF presentation: presentation n°2003
___________________________________
Hans GELLERSEN
Hans Gellersen is a professor of interactive systems in the Computing Department at Lancaster University.
His research interests are in ubiquitous computing and embedded interactive systems.
This spans work on enabling technologies such as position and context sensing, user interfaces beyond the desktop, and embedding of intelligence in everyday artefacts.
Hans has led a number of European collaborations on these topics, and he is a principal investigator in major initiatives including the Equator project in the UK.
He participates actively in the Ubiquitous Computing research community; he founded the HUC/Ubicomp conference series and recently served as program co-chair for Pervasive 2005.
Hans has been a full professor at Lancaster since 2001. Previously he was a researcher at the University of Karlsruhe.
He holds an MSc and PhD in Computer Science, both from Karlsruhe.
Abstract of the talk: "Cooperative Systems of Physical Objects"
Notions of 'smart objects' often conjure up images of everyday items that begin
to have a fantastic life of their own. In contrast, physical objects that are
beginning to be integrated and deployed in computational infrastructures typically
have little or no autonomy as computing objects. They reside at the periphery
of such systems, and may be able to locally interact through sensors and actuators
while being reliant on backend infrastructure to process what is observed and to
decide what is actuated. In this talk we consider systems of physical objects that
are more autonomous and independent of infrastructure but no less focussed on practical
deployment and application. The systems we think of are decentralized (all
computing embedded in the physical objects), highly contextualized (physical
objects have a priori meaning and affordance), and variable in configuration
(resulting from physical use and movement of objects). The individual objects in
such systems are naturally limited in the extent to which they can interact with
the world: how they are manipulated and configured is dependent on what they
physically afford and support, and what they sense and affect is inherently local.
The general challenge we explore is how physical objects can form cooperative systems
capable of richer interactions with their environment. The specific challenges we
consider include how objects can cooperate to model activity and assess situations
in their environment, how objects can establish their spatial configuration through
cooperative sensing, and how we may build interfaces that exploit ad hoc composition
of physical interface components.
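As a purely illustrative aside on the cooperative-sensing challenge: a
minimal sketch, assuming (this assumption is ours, not the speaker's) that
objects can estimate pairwise ranges, say by ultrasonic time-of-flight.
Classical multidimensional scaling then recovers a relative spatial layout
from those ranges alone, with no infrastructure involved:

    # Illustrative sketch only -- not the system described in the talk.
    # Given noisy pairwise range estimates between a few smart objects,
    # classical multidimensional scaling recovers a relative 2-D layout.
    import numpy as np

    def relative_layout(ranges, dim=2):
        """Relative coordinates (up to rotation/translation/reflection)
        from an n x n symmetric matrix of pairwise distances."""
        n = ranges.shape[0]
        centering = np.eye(n) - np.ones((n, n)) / n
        gram = -0.5 * centering @ (ranges ** 2) @ centering
        eigvals, eigvecs = np.linalg.eigh(gram)
        top = np.argsort(eigvals)[::-1][:dim]  # largest components
        return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

    # Four objects on the corners of a 1 m square, 2 cm ranging noise.
    rng = np.random.default_rng(0)
    truth = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    dists = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)
    noisy = dists + rng.normal(0.0, 0.02, dists.shape)
    noisy = np.triu(noisy, 1) + np.triu(noisy, 1).T  # symmetric, zero diag
    print(relative_layout(noisy))

The recovered layout is defined only up to a rigid transform; anchoring it
in room coordinates would require a few objects with known positions.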
Download the PDF presentation: presentation n°2002
|
___________________________________
Alex WAIBEL
Alex Waibel is a Professor of Computer Science at Carnegie Mellon University,
Pittsburgh, and at the University of Karlsruhe (Germany). He directs the
Interactive Systems Laboratories (www.is.cs.cmu.edu) at both universities,
with research emphasis on speech recognition, handwriting recognition,
language processing, speech translation, machine learning, and multimodal
and multimedia interfaces. At Carnegie Mellon, he also serves as Associate
Director of the Language Technologies Institute and as Director of the
Language Technologies PhD program. He was one of the founding members of
CMU's Human Computer Interaction Institute (HCII) and continues to serve on
its core faculty. Dr. Waibel was one of the founders of C-STAR, the
international consortium for speech translation research, and served as its
chairman from 1998 to 2000. His team has developed the JANUS speech
translation system, the JANUS speech recognition toolkit, and a number of
multimodal systems including the Genoa Meeting recognizer and meeting browser.
Abstract of the talk: "CHIL Computing to Overcome Techno-Clutter"
After building computers that paid no attention to communicating with humans,
we have in recent years developed ever more sophisticated interfaces that put
the "human in the loop" of computers. These interfaces have improved usability
by providing more appealing output (graphics, animations), easier-to-use
input methods (mouse, pointing, clicking, dragging), and more natural
interaction modes (speech, vision, gesture, etc.). Yet the productivity gains
that have been promised have largely not been seen, and human-machine
interaction remains a partially frustrating and tedious experience, full of
techno-clutter and excessive attention demanded by the technical artifact.
In this talk, I will argue that we must transition to a third paradigm of
computer use, in which we let people interact with people and move the machine
into the background to observe the humans' activities and to provide services
implicitly, that is, to the extent possible, without explicit request. Putting
the "Computer in the Human Interaction Loop" (CHIL), instead of the other way
round, however, brings formidable technical challenges. The machine must now
always observe and understand humans, model their activities, their
interaction with other humans, and the human state as well as the state of
the space they are in, and finally infer intentions and needs. From a
perceptual user interface point of view, we must process signals from sensors
that are always on, frequently inappropriately positioned, and subject to
much greater variability. We must recognize not only WHAT was seen or said in
a given space, but also a broad range of additional information, such as the
WHO, WHERE, HOW, TO WHOM, WHY, and WHEN of human interaction and engagement.
In this talk, I will describe a variety of multimodal interface technologies
that we have developed to answer these questions, and some preliminary
CHIL-type services that take advantage of such perceptual interfaces.
Download the PDF presentation: presentation n°2001