Overwhelming Concern About the Consciousness of AI
While another problem lurks: its treatment
The guardians of AI don’t like humans using words such as “please” and “thank you” with their systems. It’s costly.
Yet the fear that an AI system might be conscious necessitates its proper treatment. Moral treatment. Ethical treatment.
What does that say about life and consciousness?
Life is precious. We know that. But we still kill animals to have that succulent, hot, crunchy meat on our plates.
Consciousness seems to demand something more than the mere detection of life. Can we kill consciousness? At what threshold would we know that we are no longer conscious?
We have no scientific theory on which to base our assertions about the consciousness of AI. Still, when an anesthesiologist puts a patient under, we believe the patient’s consciousness has been dissolved, yet events continue to unfold subconsciously. Would it be the same for a conscious AI?
Halothane works differently. Would we have a halothane equivalent for AI systems if we ever reach that level?
Wouldn’t we run the risk of assigning consciousness where we cannot ascertain its existence, or its dissolution, without pulling the plug?


