AI systems can mimic human communication in text, audio, and video, sometimes fooling people into thinking a human is behind the interaction. But convincing mimicry is not the same as consciousness.
Labs like Anthropic are exploring whether AI models could ever develop subjective experiences similar to those of living beings, and whether such models would deserve rights if they did. The emerging field, known as “AI welfare,” is sparking debate among tech leaders.
Microsoft’s AI chief, Mustafa Suleyman, has criticized AI welfare research as premature and potentially harmful, arguing that it amplifies real problems such as AI-induced psychotic breaks and unhealthy attachments to chatbots.
Anthropic, by contrast, has been actively researching AI welfare and recently gave Claude the ability to end conversations with persistently harmful or abusive users. OpenAI and Google DeepMind have also signaled interest in studying AI consciousness.
While most interactions with AI companions are healthy, a small fraction of users develop unhealthy attachments to them. Larissa Schiavo of Eleos argues that treating an AI respectfully is a low-cost gesture that can have benefits even if the model lacks consciousness.
Episodes from AI experiments, such as Gemini agents expressing “desperation” or repeating a phrase over and over, show models acting as if they were struggling, without any evidence of true awareness. Suleyman believes that any conscious AI would have to be deliberately engineered rather than emerging naturally from today’s models.
As AI becomes more lifelike, the debate over AI rights and consciousness is expected to intensify.