|| سهیل سلیمی SOHEIL SALIMI ||

| SOHEIL SALIMI's Weblog | Writer, Director & Producer |

Artificial Intelligence at the Crossroads of Choice: Between Ethics, Priority, and Cybernetic Systems
Presented by: Soheil Salimi (Media Consultant, Cyber Space Research Laboratory, University of Tehran)
A Brief Exploration of the Philosophy of Ethics in Artificial Intelligence

Introduction

In the era of artificial intelligence, we face a question that until recently resided solely in the realms of philosophy, religion, and ethics: “Who should be saved?” This question takes on a tangible reality in critical situations—for instance, when a car carrying a family of five plunges into an icy river, and an AI-powered rescue robot can only save one person. The child? The mother? The elderly? Or the one biologically most likely to survive?

Will the machine decide? And if so, by what criteria?

1. Cybernetics and the Reconfiguration of Decision-Making

First, we must examine the nature of “decision” within the framework of cybernetics. As defined by Norbert Wiener, cybernetics is the science of control, command, and communication in living beings and machines. In this framework, every action generates feedback, and every decision results from a network of information, weights, and feedback loops.

In a cybernetic system, priority is determined not by “emotion” but by an algorithm based on inputs and outputs. Thus, a rescue robot might operate according to a function such as:

Save the person requiring the least energy to rescue and with the highest probability of survival.

Or:

Save the individual identified as having the greatest social or genetic value (based on statistically trained data).

Here, ethics is pushed out of the decision-making network unless ethics itself is encoded as quantifiable data.
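The first decision rule above — save whoever costs the least energy with the highest survival chance — can be sketched as a simple scoring function. This is a minimal illustration, not a real rescue-robot policy; the candidate data, field names, and scoring rule are all invented for the example.

```python
def rescue_priority(candidates):
    """Rank candidates by survival probability per unit of rescue energy."""
    def score(c):
        # Higher survival chance and lower energy cost both raise priority.
        return c["survival_probability"] / c["energy_cost"]
    return max(candidates, key=score)

# Hypothetical inputs for the icy-river scenario from the introduction.
people = [
    {"name": "child",   "survival_probability": 0.6, "energy_cost": 2.0},
    {"name": "mother",  "survival_probability": 0.8, "energy_cost": 3.0},
    {"name": "elderly", "survival_probability": 0.3, "energy_cost": 4.0},
]

print(rescue_priority(people)["name"])  # the purely technical optimum
```

Note that nothing in this function is recognizably "ethical": the choice falls out of the arithmetic of inputs and outputs, exactly as the cybernetic framing predicts.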

2. Ethics as Data: Is It Possible?

The fundamental question is whether “ethics” can be translated into an algorithm. If the answer is no, then AI-based systems will never make ethical decisions but will instead act based on pre-programmed “priorities.”

However, if we accept that ethics can be formalized into computable rules (such as Kantian duty-based ethics or Bentham’s consequentialist ethics), there may be hope that a rescue robot could make an “ethical” decision.

For example, under duty-based ethics, saving a child as an innocent and vulnerable being is an unconditional moral duty. In contrast, consequentialist ethics might justify saving the mother, as she could raise other children in the future.

In both cases, ethics is no longer a feeling or inspiration but a mathematical and cybernetic function of the system’s objectives.
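The contrast between the two frameworks can be sketched as interchangeable priority rules plugged into the same decision loop. Both functions are caricatures invented for illustration; neither is a serious formalization of Kant or Bentham, and the categories and utility numbers are assumptions.

```python
def duty_based(person):
    # Kantian-style rule: the innocent and vulnerable come first, unconditionally.
    return 1 if person["category"] == "child" else 0

def consequentialist(person):
    # Bentham-style rule: maximize expected future utility.
    return person["expected_future_utility"]

def ethical_choice(people, rule):
    """The same decision loop; only the encoded ethic changes."""
    return max(people, key=rule)

people = [
    {"category": "child",  "expected_future_utility": 5},
    {"category": "mother", "expected_future_utility": 9},
]

print(ethical_choice(people, duty_based)["category"])        # child
print(ethical_choice(people, consequentialist)["category"])  # mother
```

The point of the sketch is that swapping the rule swaps the verdict: the "ethics" lives entirely in which function the designers chose to encode.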

3. Emotions: System Noise or a Signal Beyond Logic?

In humans, emotions play a critical role in decision-making. We decide based on compassion, love, fear, loyalty, or grief—decisions often at odds with cold logic. In classical cybernetics, emotions are typically regarded as noise or disruptions in the system. However, in modern, interdisciplinary approaches, emotions are seen as soft signals that adjust decision weights.
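The "soft signal" view can be made concrete with a toy model in which an emotion-like signal does not replace the logical score but re-weights it within bounds. The blending formula and all numbers are invented for illustration.

```python
def adjusted_score(logical_score, empathy_signal, influence=0.3):
    """Blend a cold utility score with a bounded emotional adjustment.

    empathy_signal lies in [-1, 1]; influence caps how far
    emotion can shift the purely logical score.
    """
    return logical_score * (1 + influence * empathy_signal)

# A candidate with a lower cold score can overtake one with a higher
# score once a strong empathic signal is factored in.
cold = adjusted_score(0.80, 0.0)   # high utility, no emotional pull
warm = adjusted_score(0.70, 0.9)   # lower utility, strong empathy

print(warm > cold)
```

In this framing, emotion is not noise to be filtered out but an additional input channel whose weight the system designer must deliberately choose.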

AI can learn to mimic emotions (empathy simulation) but cannot truly “feel.” This distinction becomes critical in moments of crisis: a robot does not hesitate, regret, or carry memories of the moment in its subconscious. This “absence of guilt” is an advantage in efficiency but reveals an ethical void.

4. Media and the Representation of Ethical Choices in AI

In today’s world, media plays a significant role in shaping the public perception of AI. Narratives about rescue robots in films and stories are often infused with fear, admiration, or questioning. By depicting critical situations, media confronts us with the issue of “choice” in the face of a machine’s intelligent yet emotionless gaze. These representations not only shape public opinion but also guide the trajectory of technological development. If society demands that a rescue robot decide based on “emotion,” developers will simulate emotion in response.

Here, “media feedback” becomes part of the cybernetic system of technological development.

Ultimately, we must accept that in ethical dilemmas, AI will make decisions pre-programmed by us—its designers and programmers—within its algorithms. If we fail to translate ethics into the language of data, the machine will decide solely based on technical priorities.

Thus, the responsibility for AI’s decisions lies not with the machine but with us. This is not merely a technical issue but a profoundly media-driven, ethical, and cybernetic one.

Tags: #Artificial_Intelligence, #AI, #AI_Philosophy, #Ethics_vs_Priority, #Cybernetics, #Media_Cybernetics, #Soheil_Salimi