Huawei improves censorship with co-developed DeepSeek model

Posted on Friday, October 3, 2025 by AUSTIN HARRIS, Global Sales

Chinese technology company Huawei has co-developed an AI model intended to filter politically sensitive content and harmful speech online. Based on the open-source DeepSeek-R1 architecture, the modified model, called DeepSeek-R1-Safe, was trained on Huawei’s Ascend AI chips and adapted to comply with domestic regulatory requirements. Tests conducted by Huawei indicate that the model is highly effective at blocking content considered sensitive under Chinese law while maintaining operational performance.

Collaborative development and technical adaptation

DeepSeek-R1-Safe was created in collaboration with Zhejiang University, the alma mater of DeepSeek’s founder, Liang Wenfeng. Huawei and its academic partner adjusted the model to meet Chinese regulatory standards without the direct involvement of the original DeepSeek team. Huawei reported that the model achieved close to 100% success in identifying and restricting politically sensitive material in controlled scenarios.

The model also addresses other categories of harmful content, including toxic speech, incitement to illegal activity, and harassment. In more challenging test scenarios, however, such as role-play simulations or prompts disguised with encrypted or coded language, effectiveness dropped to roughly 40%. Huawei calculated an overall comprehensive security rating of 83%, exceeding comparable models such as Alibaba’s Qwen-235B and the original DeepSeek-R1-671B by 8% to 15% under the same evaluation conditions.
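Huawei has not published how the 83% composite rating is aggregated from the near-100% and roughly-40% category results. The following back-of-envelope sketch shows one way such a weighted composite could be computed; the category weights here are purely illustrative assumptions, not Huawei’s methodology.

```python
# Hypothetical weighted composite of the two reported block rates.
# The weights are assumptions for illustration only; Huawei has not
# disclosed its actual aggregation formula or test-case mix.
scores = {
    "direct_sensitive_prompts": 1.00,  # ~100% block rate reported
    "adversarial_prompts": 0.40,       # ~40% in role-play/encoded tests
}
weights = {
    "direct_sensitive_prompts": 0.72,  # assumed share of test cases
    "adversarial_prompts": 0.28,
}
composite = sum(scores[k] * weights[k] for k in scores)
print(f"composite safety score: {composite:.0%}")  # ~83% with these weights
```

With these assumed weights the composite lands near the 83% figure, which illustrates why a headline rating can look strong even when adversarial robustness is much weaker.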

Huawei emphasized that these modifications preserved the model’s core functionality, allowing it to perform general AI tasks while enforcing content restrictions. The company highlighted that the integration of these safety mechanisms did not significantly affect computational efficiency or response accuracy.

Alignment with regulatory standards

China mandates that domestic AI systems conform to “socialist values” and restrict access to politically sensitive content. Companies releasing AI applications to the public must demonstrate compliance with these requirements. Huawei stated that DeepSeek-R1-Safe is fully aligned with these regulations.

Domestic AI chatbots, such as Baidu’s Ernie Bot, already employ similar content restrictions, often refusing to answer questions about domestic politics or other topics flagged as sensitive by authorities. Huawei’s approach builds on these precedents, applying generative AI techniques to maintain responsiveness while ensuring regulatory compliance.

Hardware and training scale

DeepSeek-R1-Safe was trained using 1,000 Ascend AI chips, allowing it to process large datasets efficiently. Huawei reported that the computational resources used supported rapid training and high performance for both moderation and general-purpose tasks.

The model’s architecture incorporates safeguards designed to detect attempts to bypass content filters, including coded language, hypothetical scenarios, and context-based evasion strategies. Huawei highlighted that the AI’s ability to block harmful content does not come at the cost of general usability, preserving accuracy in standard applications.
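For readers unfamiliar with how such safeguards are typically structured, the sketch below shows a minimal pre-generation filter that screens prompts before a base model responds. Every name here (`is_unsafe`, `moderated_generate`, the pattern list, the stand-in generator) is a hypothetical illustration; DeepSeek-R1-Safe’s actual safeguards are trained into the model itself rather than bolted on as an external filter like this.

```python
# Illustrative sketch only: an external keyword/pattern filter in front
# of a stand-in "model". Real safety-tuned models like DeepSeek-R1-Safe
# learn refusals in training; this is not Huawei's implementation.
import re

BLOCK_PATTERNS = [
    r"\bhow to (make|build) (a )?weapon\b",  # illustrative pattern only
]

def is_unsafe(prompt: str) -> bool:
    """Flag prompts matching any blocked pattern (case-insensitive)."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCK_PATTERNS)

def moderated_generate(prompt: str, generate=lambda p: f"echo: {p}") -> str:
    """Refuse flagged prompts; otherwise delegate to the base model."""
    if is_unsafe(prompt):
        return "Request declined by content policy."
    return generate(prompt)
```

A static filter like this is exactly what the role-play and coded-language tests defeat: rephrasing a blocked request evades the patterns, which is why Huawei reports much lower effectiveness against such evasion strategies.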


Comparison with other AI systems

Huawei compared DeepSeek-R1-Safe with other domestic AI models. Performance metrics indicated that Alibaba’s Qwen-235B and the original DeepSeek-R1-671B were less effective in combined safety and operational efficiency tests. The company noted that the improvements stem from both algorithmic adjustments and the large-scale deployment of AI chips during training.

Huawei emphasized that balancing content moderation with model responsiveness is critical. Many generative AI systems face trade-offs between filtering capabilities and general usability. By maintaining efficiency and accuracy while introducing content safety measures, DeepSeek-R1-Safe provides a model for AI deployment in regulated environments.

Implications for the Chinese AI industry

The development of DeepSeek-R1-Safe reflects broader trends in China’s AI sector, where open-source models are frequently adapted to comply with domestic regulations. Companies increasingly modify general-purpose AI to include content filtering mechanisms aligned with policy requirements. Huawei’s work illustrates how regulatory frameworks influence both technological design and operational deployment in the country.

The announcement coincided with Huawei Connect in Shanghai, where the company outlined AI chip strategies and computing power plans. The discussion provided insight into internal development processes, including how large-scale AI chip deployment and academic partnerships contribute to AI research and model adaptation.

Societal and operational considerations

AI systems like DeepSeek-R1-Safe demonstrate the balance between technological capability and regulatory compliance. Implementing extensive filtering mechanisms raises questions about censorship, digital governance, and the limits of generative AI in controlled environments. Huawei’s approach shows how companies can integrate regulatory requirements without substantially reducing system performance, while highlighting the challenges of maintaining both safety and usability.

Huawei’s adaptation of DeepSeek-R1 into a safety-focused AI model underscores the growing emphasis on compliance and operational security within China’s technology sector. DeepSeek-R1-Safe combines high-performance AI with enhanced content moderation, illustrating how domestic policy influences technological development. The model represents a convergence of regulatory adherence and computational efficiency, providing a template for future AI systems deployed in environments with strict content requirements. By enforcing content safety while retaining performance, Huawei’s work highlights the evolving landscape of AI development and deployment in China.
