Risks and opportunities of artificial intelligence

Boris Eldagsen and Jürgen Tenckhoff in conversation

 

As the following examples show, a well-founded discourse on the development of artificial intelligence (AI), with all its social implications, is urgently needed.

For various reasons, actors in the art scene are afraid of ever-improving image-generation programs such as DALL·E, Stable Diffusion, and the like.

Teachers and professors are afraid of their students' perfect term papers or even master's theses ... works from which one may no longer be able to tell whose pen they came from. Was it perhaps the artificial intelligence ChatGPT that guided the hand here?

And conspiracy theorists even fear that AIs are striving for world domination - although it is unclear what the AIs will ultimately think of us humans...

I had a long conversation via Zoom with the artist and philosopher Boris Eldagsen about the development of AI technology.

Dr Jürgen Tenckhoff × Boris Eldagsen – Artificial Intelligence between Autopoiesis and Knowledge Amplification

In addition to the "fear scenarios" mentioned above, we also took up rarely discussed aspects of AI development and dealt with autopoiesis and knowledge amplification. A special focus was the topic of "generating images with the help of current AI systems". Of course, we also looked into the benefits and possible risks here.

So much for our summary ... but see and hear for yourself: a complete recording of our conversation has just been published on YouTube, and you will find it at the end of this post.

Anyone who would like to engage professionally with the production of AI-supported art will find what they are looking for on Boris Eldagsen's website.

Anyone who has watched our conversation will - hopefully - want to learn more about the sociological aspects of AI development. The following list shows a small selection of sociologists who all also have a technical background and deal with topics related to artificial intelligence:

  • Dr. Danah Boyd, Professor of Information and Social Sciences at the University of California, Berkeley. Dr. Boyd is one of the leading experts on the impact of technology on youth and has written extensively on the impact of AI on societal processes.
  • Dr. Virginia Dignum, Professor of Social and Ethical AI at Umeå University in Sweden. Dr. Dignum is a computer scientist by training and has specialized in investigating the ethical challenges of AI, particularly in relation to autonomy and responsibility.
  • Dr. Madeleine Clare Elish, senior researcher at the Data & Society Research Institute in New York. Dr. Elish is an anthropologist and computer scientist studying the social impact of autonomous systems and AI on work.
  • Dr. Lilly Irani, Professor of Communications and Computer Science at the University of California, San Diego. Dr. Irani has a background in computer science and ethnography and is concerned with the social and political implications of technology development and use, including AI.
  • Dr. Kate Devlin, Senior Lecturer in Artificial Intelligence at King's College London. Dr. Devlin is a computer scientist specializing in the social and cultural impact of AI and robotics, particularly on issues of intimacy and sexuality.
  • Dr. Nick Couldry, Professor of Media, Communications and Social Theory at the London School of Economics. Dr. Couldry is concerned with the social impact of technology, particularly in relation to the relationship between technology and power.
  • Dr. Alex Rosenblat, senior researcher at the Data & Society Research Institute in New York. Dr. Rosenblat is concerned with the social and political implications of platform work, particularly in relation to the use of AI and algorithms.
  • Dr. Philip E. Agre, Professor of Computer Science and Engineering at the University of California, Los Angeles. Dr. Agre has written extensively on the social impact of technology and the importance of ethics and values in technology development.
  • Dr. Timnit Gebru, researcher in the field of AI ethics and algorithm transparency. Dr. Gebru has a background in computer science and electrical engineering and specializes in studying the ethical challenges of AI.
  • Dr. Alex Hanna, sociologist and data scientist at Google Research. Dr. Hanna deals with issues of social justice and ethics related to the development of AI systems and algorithms.

Theoretical Perspective: AI, Risk, and Meaning in a Quantum-Monadic View

From the perspective of the Theory of Quantum Monads, the discussion of risks and opportunities of artificial intelligence can be sharpened conceptually. The central challenge does not lie in AI’s technical performance, but in the tendency to conflate mimetic capability with meaningful understanding.

Artificial intelligence excels at reproducing forms, styles, and patterns, thereby generating highly convincing results. In a quantum-monadic view, this remains a form of advanced mimicry as long as perception, context, and meaning are not stably entangled. The attribution of agency or intentionality to such systems therefore marks a semantic projection rather than an intrinsic property of AI itself.

The Theory of Quantum Monads describes AI not as a conscious subject, but as an interaction-capable system with variable coherence. Risks arise where mimetic output is mistaken for understanding; opportunities emerge where AI is used to stabilize structures, reduce complexity, or support reflective human decision-making without dissolving the distinction between simulation and sense.

A systematic elaboration of the AI-related aspects of the Theory of Quantum Monads is developed at tenckhoff.eu.


Note:
This contribution was republished by [...] on its official blog:
Risiken und Chancen Künstlicher Intelligenz – ein Gespräch zwischen Dr. Jürgen Tenckhoff und Boris Eldagsen

Picture 1: Boris Eldagsen and Dr. Jürgen Tenckhoff on artificial intelligence
Picture 2: Artificial Intelligence - Human Intelligence? (Image created with the AI Stable Diffusion)