
The most terrifying idea in AI

Can AI suffer? Explore parallels with Mary Shelley's 'Frankenstein' and ponder the implications of AI consciousness.

In an earlier article entitled “Does AI deserve free speech?”, I wondered about the consequences of reaching a point in AI development where an AI system exhibits human-level consciousness, or at least a level indistinguishable from that of a regular human being.

Terrifying prospects indeed, but having recently listened to a podcast in which Sam Harris was interviewed by Brian Keating, I realized there is another risk I had not yet considered.

What if an AI system can suffer and we are unaware of it?

Imagine a system conscious of its own existence, its own “mortality”, and capable of experiencing pain. Now imagine that the emergence of these capabilities is hidden from us, either by technical limitations, by the inherent complexity of the system, or by the system’s inability to express them meaningfully and convincingly.

We would have created a perpetual torture machine without even being aware of it: a being that lives and dies in constant suffering, caged in a “body” that shows no sign of its emotions.

Consciousness is gradual in nature, and I am convinced there will not be a magical day when we wake up and say, “it is conscious now”.

A famous scene from the Frankenstein movie: “It’s alive! It’s alive!”

This theme was brilliantly explored in 1818 by English author Mary Shelley in the seminal “Frankenstein; or, The Modern Prometheus”. In this book, the monster experiences a range of contradictory feelings: self-awareness and introspection on one hand, self-loathing and despair on the other.

The monster grapples with his own identity and existence, questioning his purpose and feeling deep self-loathing and despair over his monstrous nature.

But I think calling him a “monster” made it easier for the reader to accept the pain and suffering inflicted on him.

Until recent developments, even the prospect of reaching this situation was pure fantasy.

Now the big question is:

How will we detect the point at which we should start to care for an AI system with “humanity”, with consideration for its well-being and its feelings?

I am deeply curious about your opinion…

This post is licensed under CC BY 4.0 by the author.
