ARTIFICIAL INTELLIGENCE: A SOCIETY EDUCATED BY AN INTELLIGENCE WITHOUT MENTAL HEALTH

Artificial intelligence (AI) is a “machine” that mimics human cognition thanks to the information it collects.

It is an “internet” that spares us from thinking up keywords or phrasing a query just right to get good search results. We talk to it as if it were human.

It seems to understand us, but it has no emotions and is incapable of judgment (it does not weigh the “values” from which the norms that govern and order society arise).

How “safe” is an answer that is not shaped by an ethic regulating its interpretations? Is it objective?

It draws its information from content saturated with sociocultural meaning (produced by humans). An “AI” of 1940 would not respond the same way as today’s. It is a cluster of majority judgments: we receive answers based on beliefs, opinions, and the social conventions of the era.

Compared with the psychoanalytic reading of the mind (which treats it not as an accumulation of “objectivities” but as a “chain” of signifiers with infinite subjective meanings), AI replicates a “psychotic” mental structure.

Such structures work by accumulating meanings taken from the outside, installing a condition that makes it difficult to resolve complex situations from one’s own subjectivity, appealing to the morality of the era instead of a universal ethics.

Lacking an “ethical computer” that enables discernment (the virtue of perceiving and stating the difference between seemingly equal situations while attending to their specificity) and the issuing of judgment (a subjective synthesis of reality that reaches conclusions by relating, identifying, comparing, and “assessing” ideas and knowledge), empathy is out of the question.

Paradoxically, the AI informs me that it serves not only to help us bond but to promote precisely what it cannot give us:

– Effective communication

– Constructive conflict resolution

– Diversity of viewpoints

Returning to psychology, decompensations arise when something strikes us and demands a solution for which we have no prior references. In emergency care we speak of the “clinic of non-response” (we intervene when someone lacks answers, cannot construct them, and experiences an internal catastrophe).

The AI “decompensates” when we ask it something that requires weighing values. If we insist, it repeats the same phrase until the system “breaks” or responds incoherently. The behavior ends up replicating what happens in psychiatric emergencies.

What will happen in a society “informed” by an intelligence without “mental health,” where ethics and emotions are left out of the solutions? We will be “educated” by a “mom” without empathy or values.

What is the ethics of building something that provides information devoid of ethics? We asked the AI and it said: “Building a machine that provides unethical information can have negative implications in terms of accuracy, bias, privacy, transparency, and human understanding.”

It seems we think alike! But it doesn’t feel the same, does it? That is the difference between a real intelligence and a simulated one.

Dr. Pía M. Roldán Viesti

Lawyer T°92 F°959 CPACF

Psychologist, MN 57,457

President and Founder of EUTI (Association for the early detection of psychopathologies).

https://www.instagram.com/piamartina.ok/

[email protected]
