The question of whether humans are superfluous or necessary can only be discussed within frames of reference and systems of values and norms: Would humans be superfluous if they no longer had to work within a logic of diligence and production?
Then as now, the discourse on automation is conducted in extremes: artificial intelligence (AI) costs jobs and ‘subjugates’ humans, as in a dystopian science-fiction scenario, or it leads to a utopia by freeing humans from ‘useless’ work. Technological progress always raises similar concerns: computers were discussed as ‘job killers’ in the 1970s – a view that seems almost ridiculous today. Predictions are often unreliable.
Admittedly, the progress of the last few months was almost impossible to foresee. Today, generative AI handles tasks that were previously considered impossible to automate. Regardless of whether the models are ‘creative’ or ‘intelligent’, they are changing the way we work, learn, teach and conduct research.
This development raises a number of questions, especially for universities. One thing up front: AI does not render humans superfluous. However, people who do not work with AI will find it difficult to compete with those who do.
Historically, new technology has created more work than it has eliminated. Nevertheless, the question of substitutability is currently being raised, both for the activities of so-called knowledge workers and for less qualified activities and the people who perform them. We need to clarify how we deal with productivity gains and with changing expectations of people. Which skills are necessary – including for new, emerging professions?
The question is which existing and which new skills are needed to hold one’s ground in a post-digital world. While certain tasks, such as summarizing a text, can be handled by machines, other skills, such as media literacy, become more important. Skills do not become superfluous simply because a machine can perform the tasks associated with them – rather, we need to rethink what is to be learned, why and from whom.
The central question is: How will the relationship between humans and machines develop? Who takes on which role(s)? Which tasks should machines perform? Who controls decisions, and how do we safeguard democratic and ethical principles as well as sovereignty?
Collaboration with AI requires trust. Currently, despite the growing importance of open-source models, we rely on commercial providers. We must consider the consequences this may have, which tools we want to use in the future, and how we want to work.
These questions must be addressed in an inclusive and transdisciplinary manner, and universities must become active players. Good, responsible and sustainable use of AI requires a thoughtful approach. This much is clear: people are not superfluous, but roles must be negotiated constantly and proactively.