China Regulates Digital Humans With New Bans On Addictive AI Services
New draft rules focus on emotional safety, identity protection, and mandatory intervention as AI avatars expand across platforms.
China has intensified oversight of its fast-growing artificial intelligence sector, releasing draft regulations focused on the “emotional safety” and “moral alignment” of digital humans.
On April 3, 2026, the Cyberspace Administration of China moved to close governance gaps, specifically targeting addictive AI services and the psychological impact of human-like virtual entities on youth and vulnerable populations. The move sets clearer rules for AI user interactions as lifelike digital avatars expand across platforms.
The End of the “AI Companion” for Minors
The most significant part of the new Cyberspace Administration of China draft is a ban on “virtual intimate relationships” for users under 18.
According to Reuters, this targets the fast-growing market of AI companions, the kind of interactive digital partners seen in products like Razer’s Project AVA, designed for emotional support and simulated relationships.
Beijing authorities see these systems as a risk to minors, warning that simulated bonding can lead to digital addiction and harm psychological development.
Under the rules, AI providers must enable a Minor Mode that limits interaction time and restricts conversations to educational or practical topics.
As The News International reports, developers must redesign AI behavior, ensuring systems reject or redirect conversations that suggest romantic or deep emotional attachment when a minor is detected.
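The restrictions described above — an interaction-time cap plus topic and intent filtering for detected minors — can be sketched as a simple decision gate. This is a hypothetical illustration only: the CAC draft specifies no schema, and all names, topics, and limits below are assumptions.

```python
# Hypothetical sketch of a "Minor Mode" content gate. Not taken from any
# official CAC specification; topics, intents, and the time cap are illustrative.
from datetime import timedelta

ALLOWED_TOPICS = {"education", "homework", "science", "practical"}
BLOCKED_INTENTS = {"romance", "intimacy", "emotional_attachment"}

SESSION_LIMIT = timedelta(minutes=40)  # illustrative daily interaction cap

def minor_mode_gate(intent: str, topic: str, session_time: timedelta) -> str:
    """Decide how the avatar should respond when the user is a verified minor."""
    if session_time >= SESSION_LIMIT:
        return "end_session"           # enforce the interaction-time limit
    if intent in BLOCKED_INTENTS:
        return "redirect"              # steer away from romantic/attachment talk
    if topic in ALLOWED_TOPICS:
        return "respond"               # educational or practical topics are fine
    return "redirect"                  # default to safe redirection
```

Note the default branch: anything not explicitly allowed is redirected, mirroring the draft's requirement that systems reject or redirect rather than engage.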
Identity Protection and the Deepfake Crackdown
Beyond emotional safety, the draft safeguards personal identity itself, banning the creation of digital humans from an individual's personal data without explicit consent.
As reported by Firstpost, this targets the unauthorized commercialization of human likenesses, including ghost influencers and deepfake shop assistants.
This legal boundary mirrors David Greene’s lawsuit against Google alleging that NotebookLM unlawfully copied his distinctive vocal identity for its Audio Overviews feature without authorization.
The regulations demand that every digital human be traceable to a legal entity and a verified data source. Using a virtual human to bypass identity verification systems is now classed as a serious violation.
By restricting impersonation of real citizens, the CAC aims to curb AI-driven telecommunications fraud and identity theft.
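The traceability requirement implies a compliance record linking each avatar to an accountable operator, a verified data source, and explicit consent. A minimal sketch of such a record follows; the field names are assumptions, not the CAC's schema.

```python
# Illustrative compliance record for a digital human, as implied by the draft:
# traceable to a legal entity and a verified data source, with explicit consent.
# Field names are hypothetical, not an official schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class DigitalHumanRecord:
    avatar_id: str
    legal_entity: str      # registered operator legally responsible for the avatar
    data_source: str       # verified origin of the likeness/voice training data
    consent_granted: bool  # explicit consent from the person depicted

    def is_compliant(self) -> bool:
        """Pass only if every traceability field is present and consent is on file."""
        return bool(self.avatar_id and self.legal_entity
                    and self.data_source and self.consent_granted)
```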
Mandatory Intervention: The AI as a Social Guardian
The draft places unprecedented legal responsibility on software developers, requiring AI services to actively monitor and intervene when users show self-harming tendencies.
The CAC outlines a tiered response: the AI first provides pre-set supportive messages, and if risk escalates, a human operator must take over to provide professional help.
This poses a major technical challenge for companies like Baidu and Tencent. Training large language models (LLMs) to distinguish between roleplay and genuine crises demands advanced sentiment analysis.
Analysts suggest that this “safety-by-design” mandate will likely lead to the integration of real-time psychological monitoring tools directly into the API layers of Chinese digital human services.
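The tiered response the CAC outlines — scripted support first, human takeover if risk escalates — can be expressed as a small escalation policy. The risk score and thresholds below are hypothetical placeholders, not a real classifier or any vendor's implementation.

```python
# A minimal sketch of the tiered intervention the draft describes. The risk
# scoring and threshold values are hypothetical placeholders only.
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    CRITICAL = 2

def assess_risk(score: float) -> RiskLevel:
    """Map a (hypothetical) self-harm risk score in [0, 1] to a tier."""
    if score >= 0.8:
        return RiskLevel.CRITICAL
    if score >= 0.4:
        return RiskLevel.ELEVATED
    return RiskLevel.NONE

def respond(score: float) -> str:
    level = assess_risk(score)
    if level is RiskLevel.CRITICAL:
        return "escalate_to_human"     # a human operator must take over
    if level is RiskLevel.ELEVATED:
        return "send_support_message"  # pre-set supportive messaging
    return "continue_conversation"
```

The hard part in practice, as the article notes, is not this escalation logic but producing a reliable risk score that distinguishes roleplay from genuine crisis.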
Strategic Alignment with the 15th Five-Year Plan
These regulations are part of China’s 15th Five-Year Plan, which identifies AI as a strategic scientific priority and requires the industry to be safe, high-quality, and aligned with socialist values.
The CAC notes that digital human governance is now a national cyberspace security issue. By setting clear red lines, Beijing aims to lead globally in AI ethics and safety, ensuring its digital economy prioritizes social stability over unchecked algorithmic activity.
Source: https://www.cac.gov.cn/2026-04/03/c_1776952992709096.htm