There is an undeniable ongoing trend toward embedding computing capabilities in everyday objects and places. Well-known examples range from smart kitchen appliances and utensils (smart coffee machines, smart knives and cutting boards) and smart tangible objects up to smart meeting rooms and even urban infrastructures.
These smart objects are fully functional on their own, but added value is obtained through communication and distributed reasoning. While other venues have focused on the many technical challenges of implementing smart objects, far less research has addressed how the intelligence situated in these objects can be applied to improve their interaction with users. This field of study poses unique challenges and opportunities for designing smart interaction.
Smart objects typically have only very limited interaction capabilities, yet their behavior can exhibit considerable intelligence. For example, several digital cameras are able to automatically recognize faces in a scene and adjust the focus accordingly. For first-time users this can be quite surprising, while experts will probably want to turn this feature off. The challenge is to design intuitive interaction with smart objects such that the user feels in control and understands the behavior and capabilities of the object.
Interaction with smart objects is situated in the physical environment of the user, i.e., it does not necessarily take place in a desktop setting. A smart object often uses additional cues from its context to improve the interaction with the user, thereby making the interaction between user and smart object feel more natural. Furthermore, a smart object is a physical object, which makes it possible to exploit approaches from tangible and embodied interaction to enhance the interaction.