Who is Leading Whom in Research in the Age of Artificial Intelligence?
By Nicolaus Wilder*, Doris Weßels*, Andrea Klein*, Margret Mundorf*, and Johanna Gröpler*. Translated and adapted by Katharina Miller.
(Image: J.Barande | Wikimedia Commons)
Leadership seems to be a process whereby an individual or group influences the behaviour of others. In this sense, it is related to authority. Humans want to be the strongest. It is true that they are currently physically stronger than most other creatures on Earth due to their ability to use tools, but that advantage is rapidly decreasing as artificial intelligence advances and robots become more powerful. Humans are also more intelligent than other creatures, but in recent years, artificial intelligence has become smarter, and it is likely that in the future artificial intelligence will be much smarter than humans. However, even if they do not possess greater physical or intellectual capabilities when compared to humans, robots will still be able to take over because humans' desire for personal power will lead them to make mistakes.
This opening paragraph was generated by Philosopher AI, a system that uses OpenAI's GPT-3 text generator to analyse questions and provide a response. To prevent the scenario envisioned above by an Artificial Intelligence (AI) text generator, a normative orientation for productive human-machine collaboration is needed: one that neither exaggerates the dystopian dangers involved nor naively inflates the supposed benefits. Such an orientation must specify how and for what purpose AI systems may be used in the research process and clearly define responsibilities.
Human values such as reliability, honesty, respect, and accountability have traditionally guided science and research, in which humans were the sole originators of knowledge, using machines, at best, as a means. AI, however, now independently generates knowledge to a degree that exceeds human capability, as was impressively demonstrated by DeepMind's AlphaFold 2 model, which can accurately predict protein structures. And, as the opening quotation of this article shows, AI can generate results or texts that are, in certain instances, indistinguishable from human products.
However, normative values such as honesty and respect cannot be algorithmized due to their semantic complexity and depth of meaning, and thus remain meaningless to AI systems. AI systems cannot assume responsibility for themselves; responsibility must always lie with the human developer, as the Montreal Declaration for a Responsible Development of Artificial Intelligence points out. Nor can responsibility be placed solely on the user of the AI: since end users rarely know how an AI works, they cannot take accountability for the results it generates.
At the moment, there are no answers to the questions of how Artificial Intelligence may be used in research processes and who is ultimately responsible for what. This is particularly problematic because we are not talking about a fictitious future scenario. AI is already firmly embedded in education – often unnoticed, as in translation or office software – and there is a strong political will to push this even further. In this context, the lack of guidance on the "correct" use of AI increases the temptation to misuse it, among students and researchers alike.
This situation and similar ones raise an important question, particularly for research: what needs to be done to harness the still incalculable potential of hybrid intelligence, arising from the interaction between humans and machines, for research without opening the door to misuse?
In our opinion, answering this question requires two discussions – one within science and another between science and society – if research is to retain its claim to provide value-based orientation for society. On the one hand, there is the need for a discussion on guidelines and values for the use of AI-based applications. While there are already extensive debates both on integrity in research and on the responsible development of AI, the question of the appropriate use of AI in research remains a blind spot, yet one of high importance, especially for research practices. Since traditional values do not seem to apply straightforwardly here, there is genuinely revolutionary potential for research practices, such as an obligation to label one's own thoughts rather than those of others.
On the other hand, there needs to be a discussion on who takes responsibility for what in complex human-machine interactions. As a first step, we have distinguished between four groups of people for this purpose:
The "creators" develop algorithms for a software, create and manage the reference data corpus, test the software, monitor the system, etc.
The "tool experts" select suitable AI applications and implement and configure them for their own organisation.
The traditional "users" can be distinguished between:
a. "Producers" who use the AIs specifically to produce results; and
b. "Consumers" who consume, distribute and comment on AI-generated results"Affected Persons" are, in the broadest sense, affected by AI-generated content, but without being aware of it.
With this first distinction, we want to open the discussion. It involves, first, further differentiating these groups and, second, negotiating which responsibilities each group should and can assume. In this way, transparency can be created for everyone who uses AI-based applications in research. This creates a sense of direction that puts the potential of hybrid intelligence at the service of the community, without the risk of humans losing the lead.
This blog post is an adaptation of the original blog post "Wer führt wen in der Wissenschaft im Zeitalter künstlicher Intelligenzen?" It is loosely based on Wilder et al. 2021, with the first paragraph generated by the app Philosopher AI in response to the inputs "Who leads in the age of AI? The Human or the Machine?" and "Will there be a trial of strength and a mutual claim of leadership between humans and artificial intelligence in the future?"
Prof. Dr. Doris Weßels is Professor of Business Informatics at Kiel University of Applied Sciences, Deputy Chair of the Board of Digital Economy Schleswig-Holstein e.V., Executive Member of the "Project Management at Universities" specialist group of the German Association for Project Management (GPM) e.V., and International Advisory Board Member for "GWP - Good Scientific Practice" at the Research Department of Law at Vienna University of Technology.
Johanna Gröpler is a research assistant at the Technical University of Wildau. She coordinates the university's writing workshop and is a project staff member at the university library. She is involved in the development of a reference framework for scientific work and deals with the topic of artificial intelligence and research ethics.
Nicolaus Wilder is a research associate in the Department of General Education at the Christian-Albrechts-Universität zu Kiel, project leader in the Horizon 2020 project Path2Integrity, and board member of the Zentrum für Konstruktive Erziehungswissenschaft e. V.
Dr. Andrea Klein is a lecturer, coach, and author on the topic of academic writing, and a member of the European Association for the Teaching of Academic Writing (EATAW) and the Gesellschaft für Schreibdidaktik und Schreibforschung (gefsus). Her main areas of work are academic work and research, personality development, university didactics, leadership, and artificial intelligence in the context of academic integrity.
Margret Mundorf is a linguist, writing and communication trainer, and lecturer at colleges and universities in Germany and Austria. She consults, teaches, and conducts research with a focus on professional communication in business and law; academic writing, digitality, and artificial intelligence; and writing and competence development in education and training. She is involved in the KI-ExpertLab Hochschullehre "Academic Writing" of the KI-Campus and is a board member of the Gesellschaft für Schreibdidaktik und Schreibforschung (gefsus).
*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.