Santa Monica, CA, USA
March 19, 2013

Program

Mar 19, 2013

9:00am    Welcome
9:10am    Invited Talk: Oliver Brdiczka - Contextual Intelligence: From UbiComp to Understanding the User's Mind
9:50am    Paper Session I (each 10 min presentation + 10 min discussion)
  • Digital Pens as Smart Objects in Multimodal Medical Application Frameworks
  • Further Investigating Pen Gesture Features Sensitive to Cognitive Load
10:30am   Coffee Break
11:00am   Paper Session II
  • Patterns for HMI design of multi-modal, real-time, proactive systems
  • Modeling Socially Apt Smart Artifacts
  • A Query Refinement Mechanism for Mobile Conversational Search in Smart Environments
  • Fast and Comprehensive Extension to Intention Prediction from Gaze
12:30pm   Lunch Break
2:00pm    Paper Session III
  • A Mobile User Interface for Semi-automatic Extraction of Food Product Ingredient Lists
  • Improving Accuracy and Practicality of Accelerometer-Based Hand Gesture Recognition
2:40pm    Selection of topics to discuss
2:50pm    In-depth discussion on selected topics I
3:30pm    Coffee Break
4:00pm    In-depth discussion on selected topics II
5:00pm    Workshop Summary
5:30pm    End of workshop
Evening   Workshop dinner


List of Accepted Papers

Paper Session I

Markus Weber, Christian H. Schulz, Daniel Sonntag and Takumi Toyama - Digital Pens as Smart Objects in Multimodal Medical Application Frameworks
PDF In this paper, we present a novel mobile interaction system which combines a pen-based interface with a head-mounted display (HMD) for clinical radiology reports in the field of mammography. We consider a digital pen as an anthropocentric smart object, one that allows for a physical, tangible and embodied interaction to enhance data input in a mobile on-body HMD environment. Our system provides an intuitive way for a radiologist to write a structured report with a special pen on normal paper and receive real-time feedback using HMD technology. We focus on the combination of new interaction possibilities with smart digital pens in this multimodal scenario, enabled by a new real-time visualisation capability.

Ling Luo, Ronnie Taib, Lisa Anthony and Jianwei Lai - Further Investigating Pen Gesture Features Sensitive to Cognitive Load
PDF A person’s cognitive state and capacity at a given moment strongly impact decision making and user experience, but are still very difficult to evaluate objectively, unobtrusively, and in real-time. Focusing on smart pen or stylus input, this paper explores features capable of detecting high cognitive load in a practical set-up. A user experiment was conducted in which participants were instructed to perform a vigilance-oriented, continuous attention, visual search task, controlled by handwriting single characters on an interactive tablet. Task difficulty was manipulated through the amount and pace of both target events and distractors being displayed. Statistical analysis results indicate that both the gesture length and width over height ratio decreased significantly during the high load periods of the task. Another feature, the symmetry of the letter ‘m’, shows that participants tend to oversize the second arch under higher mental loads. Such features can be computed very efficiently, so these early results are encouraging towards the possibility of building smart pens or styluses that will be able to assess cognitive load unobtrusively and in real-time.
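
For illustration only, the two stroke-level features highlighted in this abstract (gesture length and the width-over-height ratio) could be computed from raw pen samples roughly as in the Python sketch below; the point format and function names are our assumptions, not the authors' implementation.

    import math

    def stroke_features(points):
        # points: list of (x, y) pen samples for one handwritten character.
        # Gesture length: sum of distances between consecutive samples.
        length = sum(math.dist(points[i], points[i + 1])
                     for i in range(len(points) - 1))
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        width, height = max(xs) - min(xs), max(ys) - min(ys)
        # Width-over-height ratio of the stroke's bounding box.
        ratio = width / height if height else float("inf")
        return {"gesture_length": length, "width_over_height": ratio}

    # Toy example: a zig-zag stroke.
    print(stroke_features([(0, 0), (2, 5), (4, 0), (6, 5), (8, 0)]))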

Paper Session II

Nádia Ferreira, Sabina Geldof, Tom Stevens, Tom Tourwé and Elena Tsiporkova - Patterns for HMI design of multi-modal, real-time, proactive systems
PDF The design of multi-modal, adaptive and pro-active interfaces for complex real-time applications requires a specific approach in order to guarantee that the interaction between human and computer remains natural. In order for the interface to adapt to the user and the context, the system needs to reason about her needs and proactively adapt to these while keeping the user in control. The HMI (human-machine interface) design should accommodate varying forms of interaction, depending on what is most appropriate for that particular user at that particular time. HMI design patterns are a powerful means of documenting design know-how, so that it can be re-used. We propose a formal framework to organize and annotate this know-how so that the designer (or, at runtime, the system) is supported in the selection (and instantiation) of a pattern fit to the situation at hand. In this paper, we describe our findings from collecting existing multi-modal design patterns, our approach to eliciting new ones from a diversity of real-world applications, and our work on organizing them into a meaningful pattern repository using a set of pre-defined parameters, so that they can be described in a uniform and unambiguous way, easing their identification, comprehensibility and applicability. These patterns enable designers to optimize the interaction between human operators and systems that reason about and proactively react to information captured, e.g., via sensors. We therefore think that research on interaction with smart objects could benefit from this work.
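
To make the idea of a parameterised pattern repository more concrete, the following minimal Python sketch shows one possible way to encode an annotated pattern entry and select patterns that fit a situation; the field names and matching rule are our assumptions, not the schema proposed in the paper.

    from dataclasses import dataclass, field

    @dataclass
    class HMIPattern:
        name: str
        problem: str                      # design problem the pattern addresses
        solution: str                     # documented design know-how
        modalities: list = field(default_factory=list)   # e.g. ["visual", "speech"]
        initiative: str = "system"        # who initiates: "user" or "system"
        context: dict = field(default_factory=dict)      # applicability conditions

    def select(repository, **situation):
        # Return patterns whose context conditions all hold in the current situation.
        return [p for p in repository
                if all(situation.get(k) == v for k, v in p.context.items())]

    repo = [HMIPattern("Proactive alert", "notify without disrupting",
                       "use a peripheral visual cue, escalate to speech",
                       modalities=["visual", "speech"],
                       context={"user_busy": True})]
    print([p.name for p in select(repo, user_busy=True, noise_level="low")])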

Juan Salamanca - Modeling Socially Apt Smart Artifacts
PDF Although smart artifacts could be designed as agents with whom humans interact, the resulting interaction between them is asymmetrical if the smart artifacts are designed solely to support the accomplishment of human plans and goals. The ontological asymmetry between human and non-human agents prevents designers of smart artifacts from considering them as actual social actors capable of performing a social role instead of just being tools for human action. In order to overcome such asymmetry, this research repositions smart artifacts as mediators of social interaction and introduces a triadic framework of analysis in which two interacting humans and a non-human agent are regarded as networked and symmetrical actors.

Beibei Hu and Marie-Aude Aufaure - A Query Refinement Mechanism for Mobile Conversational Search in Smart Environments
PDF A key challenge for dialogue systems in smart environments is to provide the most appropriate answers adapted to the user’s context-dependent preferences. Most current conversational search approaches are inefficient at locating the target choices when user preferences depend on multiple attributes or criteria. In this paper, we propose an architecture which incorporates a context-dependent preference model for representing weighted interests within utility functions, and a query refinement mechanism that can incrementally adapt the recommended items to the current information needs according to the user’s critiques. Our preliminary evaluation results based on a scenario of conversational search demonstrate that the query refinement mechanism supported by our architecture can enhance the accuracy of search over interactive dialogue turns.
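
As a rough illustration of the general idea (a weighted utility over item attributes, re-weighted after a user critique), the sketch below is ours and not the paper's model or evaluation setup; the attribute names, score functions and weight-update rule are assumptions.

    def utility(item, weights, score_fns):
        # Additive utility: weighted sum of normalized per-attribute scores.
        return sum(weights[a] * score_fns[a](item[a]) for a in weights)

    def refine(weights, critiqued_attr, delta=0.3):
        # Shift weight toward the critiqued attribute, then renormalize.
        new = dict(weights)
        new[critiqued_attr] += delta
        total = sum(new.values())
        return {a: w / total for a, w in new.items()}

    items = [{"name": "A", "price": 10, "distance": 1.0},
             {"name": "B", "price": 35, "distance": 0.1}]
    score_fns = {"price": lambda p: 1 - p / 50,       # cheaper is better
                 "distance": lambda d: 1 - d / 2}     # closer is better
    weights = {"price": 0.5, "distance": 0.5}

    rank = lambda w: sorted(items, key=lambda i: utility(i, w, score_fns), reverse=True)
    print([i["name"] for i in rank(weights)])         # before critique: A first
    weights = refine(weights, "distance")             # critique: "something closer"
    print([i["name"] for i in rank(weights)])         # after critique: B first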

Hana Vrzakova and Roman Bednarik - Fast and Comprehensive Extension to Intention Prediction from Gaze
PDF Every interaction starts with an intention to interact. The capability to predict user intentions is a primary challenge in building smart, intelligent interaction. We push the boundaries of the state of the art in inferential intention prediction from eye-movement data. We simplified the model training procedure and experimentally showed that removing the post-event fixation does not significantly affect the classification performance. Our extended method decreases both the response time and the computational load.

Paper Session III

Tobias Leidinger, Lübomira Spassova, Andreas Arens and Norbert Rösch - MoFIS: A Mobile User Interface for Semi-automatic Extraction of Food Product Ingredient Lists
PDF The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, corresponding digital data is rarely available for the public. In this article, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface that has been designed to enable users to semi-automatically extract ingredient lists from food product packaging. The interface provides the possibility to photograph parts of the product label with a mobile phone camera. These are subsequently analyzed combining OCR approaches with domain-specific post-processing in order to automatically extract relevant information with a high degree of accuracy. To ensure the quality of the data intended to be used in health-related applications, the interface provides methods for user-assisted cross-checking and correction of the automatically recognized results. As we aim at enhancing both the data quantity and quality of digitally available food product information, we placed special emphasis on fast handling, flexibility and simplicity of the user interface.
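
For orientation only, an OCR-plus-post-processing pipeline along these lines might look like the Python sketch below; it uses Tesseract via pytesseract and a toy ingredient vocabulary, and is not the MoFIS implementation (the vocabulary, thresholds and flagging convention are assumptions).

    import difflib
    import pytesseract                    # assumes the Tesseract OCR engine is installed
    from PIL import Image

    VOCAB = ["sugar", "wheat flour", "palm oil", "cocoa butter", "soy lecithin", "salt"]

    def extract_ingredients(image_path):
        # 1. OCR the photographed label region.
        raw = pytesseract.image_to_string(Image.open(image_path))
        # 2. Domain-specific post-processing: split into candidate tokens and
        #    snap each one to the closest known ingredient term.
        tokens = [t.strip().lower() for t in raw.replace("\n", " ").split(",") if t.strip()]
        cleaned = []
        for tok in tokens:
            match = difflib.get_close_matches(tok, VOCAB, n=1, cutoff=0.6)
            # Keep the corrected term if confident, otherwise flag it for the
            # user-assisted cross-checking step.
            cleaned.append(match[0] if match else "?" + tok)
        return cleaned

    # print(extract_ingredients("label_photo.jpg"))   # hypothetical input image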

David Mace, Wei Gao and Ayse Coskun - Improving Accuracy and Practicality of Accelerometer-Based Hand Gesture Recognition
PDF Wrist-watches are worn by a large portion of the world’s population, but their usefulness is not limited to checking the time. Watches are located in a prime position to retrieve valuable position and acceleration data from a user’s hand movements. In this paper, we explore the plausibility of using watches containing accelerometers to retrieve acceleration data from hand gesture motions for use in human-computer interaction tasks. We compare two approaches for discerning gesture motions from accelerometer data: naïve Bayesian classification with feature separability weighting and dynamic time warping. We introduce our own gravity acceleration removal and gesture start identification techniques to improve the performance of these approaches. Algorithms based on these two approaches are introduced and achieve 97% and 95% accuracy, respectively. We also propose a novel planar adjustment algorithm to correctly recognize the same gestures drawn in different planes of motion and reduce spatial motion dissimilarities.
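
As a minimal sketch of the dynamic-time-warping branch of this comparison, the Python below shows textbook DTW with nearest-template matching over 3-axis acceleration samples; it is not the authors' algorithm and omits their gravity removal, feature separability weighting and planar adjustment steps.

    import math

    def dtw_distance(a, b):
        # Classic DTW over two sequences of 3-axis acceleration samples (ax, ay, az).
        n, m = len(a), len(b)
        cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = math.dist(a[i - 1], b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[n][m]

    def classify(sample, templates):
        # Nearest-template classification over labelled example gestures.
        label, _ = min(templates, key=lambda t: dtw_distance(sample, t[1]))
        return label

    templates = [("circle", [(0, 1, 0), (1, 0, 0), (0, -1, 0), (-1, 0, 0)]),
                 ("shake",  [(1, 0, 0), (-1, 0, 0), (1, 0, 0), (-1, 0, 0)])]
    print(classify([(0.9, 0.1, 0.0), (-1.1, 0.0, 0.0), (1.0, 0.1, 0.0)], templates))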

About the Workshop

The workshop will take place in conjunction with IUI'13, the International Conference on Intelligent User Interfaces, on March 19, 2013 in Santa Monica, CA, USA.


Organizers