Council Interview with Cecile Crutzen on Ambient Intelligence and Internet of Things

Dr. Dipl.-Math. Cecile K. M. Crutzen is Associate Professor at the School of Informatics, Open Universiteit Nederland, and domain coordinator of "People, Computer, Society":

"The Internet of Things will change our environment in the future. In the same way as cars have changed our infrastructure, governments should increasingly take the role of a pioneer and establish 'traffic rules and laws' for the Internet of Things. At the moment governments are incident-driven. They should give the builders and providers of the Internet of Things strict 'building' rules, to create an Internet of Things that is as safe as possible. Citizens should be allowed, and should be given, spaces of privacy in the Internet of Things where individuals themselves can stipulate the conditions for the import and export of data, e.g. the kind of data and the circumstances in which data transport may take place. The Internet of Things, its risks and its possibilities, should be part of the primary education of every child, in the same way as reading, writing and mathematics. Therefore more theoretical knowledge of, and practical experience with, the risks of the Internet of Things needs to be established in the curriculum of teacher training. In our society, individuals need to interact regularly with civil authorities. It should not be allowed to restrict that interaction to the internet only; other possibilities should be offered without extra costs."

Council: You have worked on Ambient Intelligence. Do you think there are major conceptual differences with the "Internet of Things", or is it just a shift in terms?

The conceptual difference between AmI and IoT is the focus. AmI is a technology focussed on the "invisible" interaction with humans; IoT is focussed on the technological infrastructure for making things interactive and identifiable. You could perhaps say IoT is one of the options for realizing AmI. In a "smart house", field bus systems can be used without a connection to the internet. In some conceptions of AmI, humans are seen as things with data, sensors and activities, only objects in a network of (inter-)activities. In AmI, humans are just things in the IoT.

Council: How would you describe IoT to non-experts?

I would start by mentioning the things in our environment which already have sensors and a functionality connected with the sensors' input, such as automatic doors. After explaining sensors, you could give the example of a device which stores your activities, e.g. a web browser or a mobile telephone. After a lot of already existing IoT, you could fantasize together about what kind of functionalities you would like to have in a smart house. After the fantasizing part, I would start a discussion on privacy and the problems which can occur with autonomously acting devices, or with devices which obey your AmI provider, but not you.

Council: A key issue for a functioning IoT is adoption by relevant users and social groups. Do you agree?

Yes, however this adoption is not always voluntary.
Social pressure and a strong connection with already existing human habits will accelerate the adoption of the IoT. The domestication of Ambient Intelligence will be forced by jumping on the bandwagon of some fundamental fears of the individual and society, such as the present loss of security and safety because of terrorism, the necessary but unaffordable amount of care for the elderly and the sick, handling the complexity of combining professional and home work, difficulties in coping with the overwhelmingly obtrusive interactions and information of our society, and being dependent on the gridlocked transport system. So-called "killer" applications are largely motivated and justified by providing a bit more security for the individual (Wahlster, 2004). Ambient Intelligence developers focus on substitutes and prostheses for the human touch in the care of children, the elderly and the disabled: "When daily contact is not feasible, the decision to move a senior is often driven by fear and uncertainty for his or her daily well-being. Our goal is to create a surrogate support system that resurrects this informal daily communication." (Mynatt, 2001, p. 340). And the providers openly promote their technology by staking on social fears: "We trust less and we fear more. We will therefore be searching for reassurance and guarantees. … We will welcome tools that allow us to monitor the health of ourselves or our loved ones, that allow quick links with emergency services, or 'tag' our children so that we know where they are. In short how can our technologies look after us and our environments rather than us looking after our technology." (Philips Research, 2003, p. 35).

Single-purpose Ambient Intelligence applications will be connected for continuous monitoring of the individual, with the strong suggestion that this provides security and maintains health (Friedewald, 2003). These single-purpose applications will enforce the adaptation of humans to more complex and integrated applications of intelligent structures. Of course, emergency situations with an impact on people's physical and psychological well-being could imply "that a service or tool that assists people should be easy for the person to use, should not require much thinking or complicated actions, and should be easily and readily accessible" (Kostakos, 2004). But humans are not always in emergency situations, contrary to the suggestions of the providers. The interpretation of the meaning of "a better life" in Ambient Intelligence is "taking away the worries" of a possibly unstable future. People are claimed to be vulnerable and naked without an artificial skin of input and output devices.

Council: However, many individuals remain concerned by the privacy issues associated with IoT systems. Here, your arguments (2006) in "Invisibility and the Meaning of Ambient Intelligence" are: "The Information Society can only be reliable if it is capable to construct, connect and nourish these rooms where doubting the promises of ambient intelligence (AmI) is a habit. Being aware of the redesign of borders is a necessary act for creating diversity in interaction rooms — where people and society can choose how the invisible and visible can interact, where they can change their status, where the invisibility could be deconstructed." Can you elaborate a little bit on this?

I have introduced the concept of the "critical transformative room" (CTR).
A critical transformative room is a space between users and their technological devices where the preferred interpretation of the actions of the artificial actors can be negotiated, where doubt can occur as a constructive strategy and can be effective in a change of the acting itself: the acting of both the human actors and the artificial actors. In a CTR, doubt can lead to actions of inquiring. It is a space between the interpretation and the representation of the offered (ready-made) interactions of the technology. Differences and different meaning construction processes are respected. In contrast to doubt, change is not the determining and defining aspect of a critical transformative room. In any interaction environment, actions and interactions cause changes. Doubt is situated in the interaction itself, by questioning the caused visible and invisible changes.

A CTR is always individual. It is the design (construction) of the intertwining of the use and (re)design activities of an individual. The process of intertwining design and use always depends on the needs and wishes of the individual and is situated in the interaction. It needs the presence-at-hand of the artificial intelligent environment, and it depends on the actor's affective disposition and state of mind.

The borders of a CTR are frozenness of use on the one side and, on the other side, the despair of a forced continuous design. The border "frozenness" is a mental invisibility of the human actor towards the used technological device: the human actor is only using, without thinking anymore about how it could be used. The use has become a routine acting. The other border of a CTR is despair, in the meaning of continuously doubting and redesigning the use of our technological environment. Under the aspect of "use" as an integration of ready-made technological actions into human activity, based on experiences, humans are always in a process of gaining a certain status of mental invisibility. This status carries a risk: to be frozen in a frame, in a limited scale of possible actions in specific situations.

Yet if human behaviour could not be based partially on individual or collective routines and habits, life would no longer be liveable. Human actors would be forced at each moment to decide about everything. Faced with the amount and complexity of those decisions, they would not be able to act anymore. Humans would place themselves in complete isolation and conflict, unable to accept and adopt even the most obvious interpretations and representations of other human actors of their technological environment. They would be under the stress of constantly redesigning their environment. There will always be a dilemma between frozen use and continuous design. It is the individual user and society who should be allowed to solve this dilemma in a respectful and ethical way.

Every action and interaction in our environment causes changes, but not all activities of actors, and the changes they have caused, are present in interaction worlds. This depends on the attentiveness and the perception of the human actors. Above that, a lot of actions of an intelligent environment are physically invisible. The meanings given to artificial actors and their acting representations by a human actor rely on the existing meaning constructions. If changes caused by interaction are comparable and compatible with previous changes, then they will be perceived as obvious. They are taken for granted. This kind of interaction will not cause any doubt.
The interaction with technology can become obvious if humans no longer think about the actions they are performing with that technology. These actions are part of their routine acting. A lot of ready-made technological devices and structures are interaction partners in our daily life; they became, and are, mentally invisible.

Technical devices in our environment should allow humans, and even invite them, to create a CTR where they can negotiate the interactions with the critical devices. However, not many humans are capable of, or willing to, start that negotiation process. A lot of technological devices offer only a closed world of interaction possibilities. If humans want to act in a different way, not taking over the dominant (preferred) meaning embedded by the designers and the providers, their activities are mostly seen as the errors and failures of dissidents. Users who do not act with the technical environment as it was modelled and implemented by the designers and providers are seen as stupid or technophobic. Doubt is seen as an unwanted feeling of insecurity, and not as a necessary prerequisite for change. Domination and ignorance in a closed world cause this hierarchical opposition between doubt and security. Changing routine acting is always very difficult, because routine does not have much presence in each world of interaction. Moreover, in closed worlds interaction routines and habits are frozen, and creating doubt is seen as an unpleasant activity.

Especially in Ambient Intelligence environments, not all activities of the artificial actors are visible, but by being attentive to the change process users can reconstruct the invisible. Attentiveness can open up the closed artificial interaction environment that is inhabited by the designers and their artificial products. It then becomes a construction of actors in interaction, with actions of questioning and doubting, which have the potential to change their habits and routines in their interaction. Such a strategy is helpful for breaking through the obvious acting. It can give the act of doubting a positive meaning: causing doubt, and thinking and feeling doubt, are necessary moments in an interaction for changing the interaction itself. By creating a "leavable and reliable" (the two meanings of Verlässlichkeit) critical transformative room, the separation of use and design can be blown up, and users, in their acting with artificial environments, can intertwine use and design through doubting and negotiating the ready-made interactions. Doubt is the first step for creating openings and redecorations in the room between humans and their environment. The question is: Can Ambient Intelligence environments be critical transformative rooms? Can they qualify as open environments? Are the shadows between the visible and the invisible enough inducement for doubt? Or will the domestication of Ambient Intelligence technology be ruled by fear?

Council: How can we see this in concrete terms?

The user interaction in Ambient Intelligence environments, fenced in between forced routine and despair, is shrunken to only an on-off switch. If you are, for example, in a position of needing care, and this care can only be delivered by an intelligent environment, then you cannot use the off-switch anymore.
It should be the ethical responsibility of the care providers that you can negotiate what kinds of activities and data exchange the technological environment is allowed to perform.

Council: The Privacy Coach, produced by a small Dutch consortium of RFID experts, is an application running on a mobile phone that supports customers in making privacy decisions when confronted with RFID tags (Broenink and others, 2011). It functions as a mediator between customer privacy preferences (Fischer-Hübner, 2011) and corporate privacy policies, trying to find a match between the two, and informing the user of the outcome.

These apps are a good start, because you can make the consequences of the tags visible, and this presence could be the start of a negotiation process. According to Langheinrich, transparency about what data is collected and how it will be used is necessary. Transparency and trust tools should be made compulsory by law, to force data collectors to describe their collection policies, and they should be controlled by trustworthy governmental institutions. These tools increase the "trust in a transaction or data exchange, by providing additional background information about the transfer, its conditions, and the parties involved". They "link directly into our previously identified social mechanism of trust, as they can provide assurances upon which users can make trust decisions due to incomplete knowledge about their interaction partner." Transparency tools cannot "prevent the abuse of personal data through malicious parties, but can help respectable collectors of our personal data to use our information in accordance with the preferences and limitations that are given by the subjects of the data collection" (Langheinrich, 2005).

These apps could be more useful if it were possible to disable the tag. Without transparency, humans will be without informational rights. However, the invisibility of the data collection in Ambient Intelligence is not a mistake but intentional; therefore there will be no correction efforts. In most cases, if you do not allow the collection and exchange of your personal data, you cannot use the ready-made interaction; e.g. you cannot buy the tagged product.

The requirement of transparency in highly complex intelligent environments, with their many divergent aims and occasions of data collection, cannot be met without overstraining the person from whom the data are collected. This stress will lead to inadequate wishes for mental invisibility, or could lead to the other border of the CTR: despair. Roßnagel recommends a built-in technology that automatically recognizes, identifies and memorizes every data access. Only the cases for which the subject did not give permission, or in which the given limitations are overruled, should be reported (Roßnagel, 2007).
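Read together, the Privacy Coach's preference matching and Roßnagel's exception reporting suggest a simple mediator pattern. The sketch below, in Python, is a minimal illustration of that pattern only, not the design of any of the cited systems; all names in it (PolicyRule, AccessEvent, PrivacyAgent) are hypothetical. The agent memorizes every recognized data access, matches it against the conditions the subject has stipulated, and reports only the accesses that overrule those conditions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyRule:
    # A condition stipulated by the data subject: which kind of data
    # may be exported, and for which purpose.
    data_kind: str   # e.g. "purchase-history"
    purpose: str     # e.g. "billing", "marketing"
    allowed: bool

@dataclass
class AccessEvent:
    # One recognized data access by a collector.
    collector: str
    data_kind: str
    purpose: str

@dataclass
class PrivacyAgent:
    # Mediator between the subject's stipulated conditions and a
    # collector's behaviour: memorizes every access, reports violations.
    rules: List[PolicyRule]
    log: List[AccessEvent] = field(default_factory=list)

    def permits(self, event: AccessEvent) -> bool:
        # Default is deny: data may only be exported under conditions
        # the subject has explicitly stipulated.
        for rule in self.rules:
            if (rule.data_kind, rule.purpose) == (event.data_kind, event.purpose):
                return rule.allowed
        return False

    def record(self, event: AccessEvent) -> None:
        self.log.append(event)        # memorize every access ...
        if not self.permits(event):   # ... but report only the violations
            print(f"VIOLATION: {event.collector} accessed "
                  f"{event.data_kind} for {event.purpose}")

agent = PrivacyAgent(rules=[
    PolicyRule("purchase-history", "billing", allowed=True),
    PolicyRule("purchase-history", "marketing", allowed=False),
])
agent.record(AccessEvent("shop.example", "purchase-history", "billing"))    # silent, logged
agent.record(AccessEvent("shop.example", "purchase-history", "marketing"))  # reported
```

The default-deny match reflects the idea that the individual, not the provider, stipulates the conditions for the export of data; the silent logging of permitted accesses keeps the transparency record without overstraining the subject with a report for every access.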
References

Crutzen, C. K. M., & Hein, H.-W. (2009). Invisibility and visibility: The shadows of artificial intelligence. In J. Vallverdu & D. Casacuberta (Eds.), Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence (pp. 472-500). IGI Global.

Friedewald, M., & Da Costa, O. (2003). Science and technology roadmapping: Ambient intelligence in everyday life (AmI@Life). JRC/IPTS - ESTO Study, July 2003. Retrieved from http://forera.jrc.ec.europa.eu/documents/SandT_roadmapping.pdf

Kostakos, V., & O'Neill, E. (2004). Pervasive computing in emergency situations. In Proceedings of the Thirty-Seventh Annual Hawaii International Conference on System Sciences, January 5-8, 2004. Computer Society Press.

Langheinrich, M. (2005). Personal privacy in ubiquitous computing: Tools and system support. Dissertation, Swiss Federal Institute of Technology, Zürich. Retrieved from http://www.vs.inf.ethz.ch/publ/papers/langheinrich-phd-2005.pdf

Mynatt, E. D., Rowan, J., Craighill, S., & Jacobs, A. (2001). Digital family portraits: Providing peace of mind for extended family members. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2001) (pp. 333-340). Seattle, Washington: ACM Press.

Philips Research (2003). 365 days – Ambient Intelligence research in HomeLab. www.research.philips.com/technologies/misc/homelab/downloads/homelab_365...

Roßnagel, A. (2007). Datenschutz in einem informatisierten Alltag. Friedrich-Ebert-Stiftung, Berlin. Retrieved from http://library.fes.de/pdf-files/stabsabteilung/04548.pdf

Wahlster, W., et al. (2004, September). Grand challenges in the evolution of the information society. ISTAG Report, European Communities. Retrieved from ftp://ftp.cordis.europa.eu/pub/ist/docs/2004_grand_challenges_web_en.pdf