By Alec Balasescu*
Optimisation is one of the major selling points of Artificial Intelligence (AI)-based automation systems. It consists as much of a set of concrete offers with clear and obvious applications, such as optimising image-based diagnosis in healthcare, as of the proposition to attain the mirage of a perfectly optimised lifestyle through the monitoring of one’s every action, from food intake to daily activities; a hallucination, in other words.
In 2019, at the Unfinished festival, I organised and moderated a conversation between Angelica Dass, a Madrid-based artist born in Brazil who is best known for her project Humanae, and Moran Cerf, a neuroscientist from Tel Aviv based in the US who specialises in human-machine interaction at the level of the brain, exploring how technology may hack into consciousness. The question I posed was: what would a perfectly optimised life look like with the help of technology? More specifically, I asked them to imagine a hypothetical situation in which a decision-making algorithm has access to all of one’s data, from genetic make-up to neural mapping, down to every bit and byte of lifestyle choices made up to the present. Would they choose the butter croissant they crave, or the kale and salmon sandwich on oat crackers that the machine suggests in order to optimise their health? While Angelica chose the croissant, Moran argued that he would prefer to use his cognitive power for something other than such choices, and would go for the sandwich. He also hoped for a prolonged, healthy, productive life.
So, what does optimisation mean in this example? What is it that we optimise, and where do we leave space to suspend our own decisions and follow the algorithmic ones? This is a fundamental ethical problem to add to current AI debates.
Another example comes from the current pandemic. During a conversation with two New York-based AI ethicists about algorithms and decision making in allocating hospital resources, we debated whether we can rely on algorithmic decisions when life and death are immediately and palpably at stake. (Elsewhere, I have developed some thoughts on this matter.) Age came into the discussion, and one of the ethicists mentioned that, of course, young patients should have priority. The variable of age could, and for some should, be encoded in a decision-making support algorithm; for example, it could indicate the priority of treatment of patients.
Valuing human life is a universal ethical code, but here we face a conundrum. First of all, such a priority rule goes against medical ethical codes. Second, if one thinks about this question philosophically and anthropologically, more questions arise: Are certain types of human life “valued” more than others in different societies? How do societies make decisions on the basis of the differentiated value assigned to different “types of life”? Is a young life more valuable than an old one? How is this valuing expressed in daily practices? Is it possible to hold an “equally valuable” approach to human life even in moments of scarcity? And finally, how are those assumptions encoded in algorithms used in rapid decision making about resource allocation (i.e., which patient should be treated first, or, more extreme, who should and should not receive intubation when ICU capacity is scarce)? This brings us to the necessity of paying more attention to the manner in which algorithmic outputs are integrated into decision making itself.
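To make the point concrete, here is a minimal, entirely hypothetical sketch of how such an assumption can end up inside a triage algorithm. The function, scale, and weights below are my illustrative assumptions, not any real clinical protocol: the value judgment “younger patients first” becomes a single numeric parameter that is easy to overlook.

```python
def triage_score(age, severity, age_weight=0.5):
    """Return a priority score; higher means treat sooner.

    severity: clinical urgency on an illustrative 0-10 scale.
    age_weight: the encoded assumption that youth adds priority.
    Setting age_weight to 0 removes the age-based value judgment
    entirely; the ethics live in this one parameter.
    """
    # A bonus that grows as age decreases (capped at age 100).
    youth_bonus = max(0.0, (100 - age) / 100)
    return severity + age_weight * youth_bonus * 10

# Two patients with identical clinical urgency:
young = triage_score(age=30, severity=7)
old = triage_score(age=80, severity=7)
# With age_weight > 0 the younger patient is ranked first,
# even though nothing clinical distinguishes the two cases.
```

The design choice worth noticing is that the value judgment is not announced anywhere; it is a default argument. A hospital adopting such a system could be enacting an ethical stance it never explicitly debated.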
The cultural variance of practices that embed ethical codes is a subject of debate among ethicists and anthropologists. As mentioned, ethics has the semblance of universality, as in “all life has equal value”. However, moving across time and cultures, any observer will rapidly realise that the ethical code valuing every life expresses itself in practices that establish “hierarchies” of types of lives, protecting some more than others. These hierarchies are internalised by individuals as unconscious biases, and eventually embedded in algorithms. Identifying this variance has been the focus of many thought experiments and empirical studies, from the trolley problem to the Moral Machine, the latter being essentially a gamification of the former. No easy answer is at hand. But in this context my questions are: how are we to optimise decisions that directly impact the quality of life? Should we optimise these decisions at all? And how precisely are we optimising them?
Finally, more often than not efficiency is defined independently of context, and fails to account for the complexity of the processes involved in any work environment. What efficiency means from the viewpoint of the finance department, for example, can at times be at odds with the perspective of quality assurance, and the way quality is understood in the latter may be completely different from what people on the work floor strive for.
This observation brings to mind the third major question: What do we optimise (for)?
In conclusion, I would suggest that we spend more time thinking about these three questions before implementing automated decision-making systems, before designing algorithms, and before settling the parameters of that design:
Why do we optimise? In other words, what does optimisation mean? Reducing costs, or increasing well-being and happiness? How do we define happiness, and how can we be attuned to its individual and collective variability? Answers to these questions may be found in greater care given to the definitions of metrics and measurements when deciding on the usefulness of an automated system.
How do we optimise? Did we take into account as many variables as possible, including the socio-cultural context and the type of activity we would like to optimise? Is it possible, and even desirable, to explore the array of unforeseen effects before designing and implementing an automated system in context?
What do we optimise (for)? Are there domains in which a certain type of optimisation should be kept at bay, or should be rethought for the domain itself?
In other words, we should approach optimisation as a tool serving a well-defined scope, and not as the scope itself, especially when the definition of optimisation in use is a very narrow one. More often than not, in the case of AI algorithms and automated systems, the latter seems to be the case. Perhaps it is time to ponder the variance of optimisation and AI as a function of context, by redefining optimisation in the first place. And perhaps not every domain of life needs to be optimised in an algorithmic manner.
Alec Balasescu* is an anthropologist by training, and approaches the world, and his work, through the lenses of this science. He finished his Ph.D. at UC Irvine in 2004, and has been active in both public and private domains in various capacities, while continuing to teach in different university settings, both online and in class. His work experience spans nine different countries on three continents over the past 25 years, since he left his native Romania. Alec’s research, writing, and practice are centred on the understanding of human actions in context, and on developing strategies of change based on it, where context is understood as the result of dynamic interactions between culture, technology, economy, religion, gender and sexuality, and institutional practices. Besides consulting, Alec teaches in the Masters of Global Leadership and in the Masters of International Communications at Royal Roads University in Canada. He lives in Frankfurt, Germany.