You need help configuring your mobile phone or information about your insurance coverage, so you contact the hotline – and the staff answer your call in accent-free English or German. You would never guess that someone in a call centre in Kolkata is sitting on the other end of the line. Entire service lines were outsourced years ago for economic reasons.
Now even the call centre staff in India are no longer efficient enough; they are being replaced by agent-based AI systems, online bots, or Moltbots. These not only answer your questions and wait for your commands, but also track tasks, monitor conditions, and carry out work without constant user input. The bots run continuously in the background and accept instructions via chat applications such as WhatsApp, Telegram, or Slack, or via voice recognition. They act on your behalf, remember context over long periods of time, and complete the tasks assigned to them independently.
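Stripped of the product layer, such an agent is essentially an event loop with persistent memory. The following is a minimal illustrative sketch in Python; the `Agent` class, its canned reply, and the stand-in inbox are hypothetical placeholders, not the API of OpenClaw, Luka, or any chat platform:

```python
# Illustrative skeleton of an always-on AI agent: an event loop that
# receives messages, keeps long-lived context, and acts without constant
# user input. Every component here is a hypothetical placeholder.
import time

class Agent:
    def __init__(self) -> None:
        self.memory: list[str] = []  # long-term context across sessions

    def handle(self, message: str) -> str:
        self.memory.append(f"user: {message}")
        # A real agent would call a language model with self.memory here;
        # we fabricate a reply to keep the sketch self-contained.
        reply = f"Acknowledged ({len(self.memory)} items of context): {message}"
        self.memory.append(f"agent: {reply}")
        return reply

    def background_tasks(self) -> None:
        # Agents also act unprompted: monitor conditions, track deadlines, etc.
        pass

def run(agent: Agent, inbox: list[str]) -> None:
    """Main loop: poll a stand-in inbox, answer, then do background work."""
    while inbox:
        print(agent.handle(inbox.pop(0)))
        agent.background_tasks()
        time.sleep(0.1)  # a real loop would block on the chat channel instead

run(Agent(), ["Book my dentist appointment", "Any news on my claim?"])
```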
So will we soon experience an authenticity crisis in which we no longer know whether we are communicating with a real person or an intelligent machine?
1. The fundamental dilemma: humans vs. machines
| Expectation | Reality with AI agents |
| --- | --- |
| Personal response | AI simulates personality, tone of voice, accent, even mood |
| An accountable counterpart | No one is liable, no one feels responsible |
| Real relationship | An algorithm optimized for engagement, not connection |
| Trust | Systematically undermined once the deception is exposed |
The problem: as soon as there is uncertainty about whether one is dealing with a human or a machine, generalized mistrust arises – even towards real people. And who takes responsibility for misinformation and its effects?
2. Specific risks of camouflage
| Scenario | Consequences |
| --- | --- |
| Business negotiations | The other party believes they are negotiating with a decision-maker – in reality with an AI that has no real authority to commit |
| Medical advice | The patient believes a doctor listened personally – the AI agent has no empathy and no moral or ethical responsibility |
| Therapeutic conversation | Our deepest vulnerability, squandered on algorithms |
| Legal communication | The client expects a lawyer – an AI agent cannot replace strategic decision-making |
| Love and friendship | Emotional manipulation through empathetic AI (cf. the AI companion by Luka) |
What sounds like something out of an episode of Black Mirror has long been reality. AI apps such as Replika train algorithms that recognize emotions in real time. The US company Luka has developed an AI bot that allows users to communicate via voice input and facial expressions. The artificial intelligence communicates with the user through a self-created avatar. It analyzes the user's individual mood (emotion recognition) via the camera, microphone, and choice of words with the help of AI pattern recognition.
China is already using such technologies to monitor unpopular minorities.
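To make the “choice of words” signal concrete, here is a deliberately naive sketch that scores mood from a tiny hand-made lexicon. Real systems such as the one described above combine camera, microphone, and text features in trained models; the lexicon and scoring below are invented purely for illustration:

```python
# Toy illustration of text-based mood estimation from word choice alone.
# The lexicon and weights are made up; a production system would use a
# trained multimodal model, not a lookup table.
MOOD_LEXICON = {
    "happy": 1.0, "great": 0.8, "love": 0.9,
    "tired": -0.5, "sad": -0.9, "alone": -0.7, "awful": -1.0,
}

def estimate_mood(message: str) -> float:
    """Crude mood score in [-1, 1] based on word choice alone."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    hits = [MOOD_LEXICON[w] for w in words if w in MOOD_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0  # neutral if no signal

print(estimate_mood("I feel sad and alone today"))    # negative: about -0.8
print(estimate_mood("What a great day, I am happy"))  # positive: about 0.9
```

Even this toy shows why the technique is so seductive for engagement optimization: a few words suffice to tailor the agent's tone to the user's emotional state.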
3. The reverse Turing test society
We are approaching a state in which:
- Every interaction becomes suspicious
- The burden of proof is reversed: no longer “Are you an AI?” but “Prove that you are human”
- Authenticity certificates become necessary (video calls, biometric verification, trustworthy infrastructure)
- Social costs increase: more time for verification, less spontaneous communication
The paradox: the very technology that promises efficiency creates new friction through mistrust.
4. Who bears the responsibility?
| Participant | Responsibility | Problem |
| --- | --- | --- |
| Users of the AI agent | Disclosure | Competitive disadvantage, social stigma |
| Providers (OpenClaw, Luka) | Labeling requirement? | Open source = no central control |
| Recipients | Distrust as default? | Technically difficult to enforce, internationally fragmented |
5. Possible solutions
| Approach | Implementation |
| --- | --- |
| Labeling requirement | “This message was created by an AI agent” |
| Verifiable identity | Digital signatures, blockchain-based “human being” certificates (see the sketch after this table) |
| Context rules | AI agents only for specific, low-sensitivity domains |
| Education and awareness | Media competency, critical AI literacy |
| Social norms | Good behavior and trust require disclosure |
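To illustrate what “verifiable identity” could look like at the message level, here is a minimal sketch using Ed25519 signatures from the Python `cryptography` package. The hard part, deliberately assumed away here, is the trusted registry that binds a public key to a verified human in the first place:

```python
# Minimal sketch of message-level identity verification with Ed25519.
# Assumption: some trusted registry has already bound `public_key` to a
# verified human; that binding, not the cryptography, is the hard part.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The verified human generates a key pair once; the public half would be
# published via the hypothetical registry.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Every outgoing message is signed by its author.
message = b"This message was written by a human."
signature = private_key.sign(message)

# A recipient checks the signature against the registered public key;
# verify() raises InvalidSignature if message or signature was tampered with.
public_key.verify(signature, message)
print("Signature valid: the message comes from the registered key holder.")
```

Note that the signature only proves possession of the registered key, not humanity; whoever governs the key-to-person registry inherits exactly the responsibility question from section 4.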
But with all these approaches to greater transparency, one thing must not be forgotten: for many users, the boundary between the real and the virtual world dissolves even though they know full well that the avatar opposite them is just a machine. They develop deep feelings for something that in reality consists of nothing but a network of artificial neurons.
6. The deeper problem: the simulation of care
Luka, OpenClaw, and similar agents not only respond; they appear to remember, show concern, and build relationships. This is not a relationship – it is behavioral manipulation.
We are building machines that pretend to like us. That is not efficiency; it is emotional deception – a deception that is mutating into the synthetic drug of the modern age.
Conclusion:
The question is no longer “Can AI act like a human?”—it already can.
The question is rather: “Do we want a society in which we constantly have to guess whether the person we are dealing with is real?”
Luka and OpenClaw are technically brilliant, but socially risky. Anyone who uses them should ask themselves:
Efficiency for whom – and at what cost to social trust?
Mandatory labeling is the minimum solution – but the real solution lies in a culture of transparency that goes beyond compliance.