When we introduce connectedness into infrastructure like buildings – into our homes – we stitch a technological network into, or better: onto, our lives. And with it we introduce smart agents of sorts: software with goals and agendas more or less of its own.
For example, a Nest thermostat’s primary goal might be to achieve and maintain a certain temperature in the living room; a secondary goal might be to save energy.
Of course the Nest’s owner has given that goal to the thermostat. And while it undergoes some interpretation at the hands of the algorithm (say you express a desire for the temperature to be 19° Celsius, and the algorithm knows to translate this into “you want 19° Celsius in your living room while you are at home, but while you’re gone the temperature can vary to lower energy consumption”), the goals come more or less from the user.
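That translation step can be sketched in code. This is a purely illustrative sketch, not Nest’s actual logic or API: the function name, the presence flag, and the eco setback value are all assumptions made up for this example.

```python
ECO_SETBACK_C = 3.0  # assumed: degrees the system may drop while the home is empty


def interpret_setpoint(preferred_c: float, someone_home: bool) -> float:
    """Translate a single stated preference into a presence-aware target.

    Primary goal: hit the preferred temperature while someone is home.
    Secondary goal: save energy by relaxing the target while the home is empty.
    """
    if someone_home:
        return preferred_c
    return preferred_c - ECO_SETBACK_C


print(interpret_setpoint(19.0, someone_home=True))   # 19.0
print(interpret_setpoint(19.0, someone_home=False))  # 16.0
```

The point is that the user states one number, and the agent expands it into a small policy – which is exactly where interpretation, and therefore the agent’s own agenda, creeps in.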
“User” as in a single human individual. It’s important to stress this because these kinds of interaction models tend to break down, or at least be challenged, along three axes once we move beyond single-user scenarios:
- user-to-user conflicts
- user-to-agent conflicts
- agent-to-agent conflicts
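To make the first of these axes concrete, here is a minimal sketch of how a multi-user thermostat might even notice a user-to-user conflict. Everything here – the function, the tolerance threshold, the sample preferences – is hypothetical, invented for illustration.

```python
def detect_user_conflict(prefs: dict[str, float], tolerance_c: float = 1.0) -> bool:
    """Two users conflict when their preferred temperatures diverge by more
    than the tolerance the system can paper over (assumed: 1° Celsius)."""
    return max(prefs.values()) - min(prefs.values()) > tolerance_c


# Alice and Bob share a living room but not a preferred temperature.
print(detect_user_conflict({"alice": 19.0, "bob": 23.0}))  # True
print(detect_user_conflict({"alice": 19.0, "bob": 19.5}))  # False
```

Detecting the conflict is the easy part; deciding whose preference wins – or whether the agent’s own energy-saving goal should tip the scales – is where the interaction model actually breaks down.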