Huawei improves censoring with co-developed DeepSeek model
App Developer Magazine

Artificial Intelligence



Friday, October 3, 2025

Austin Harris

A co-developed DeepSeek model is designed to restrict politically sensitive content and harmful speech; Huawei reports improved censoring capabilities while maintaining performance comparable to the original AI system.

Chinese technology company Huawei has co-developed an AI model intended to filter politically sensitive content and harmful speech online. Based on the open-source DeepSeek-R1 architecture, the revised model, called DeepSeek-R1-Safe, was trained using Huawei’s Ascend AI chips and modified to comply with domestic regulatory requirements. Tests conducted by Huawei indicate that the model is highly effective at blocking content considered sensitive under Chinese law while maintaining operational performance.

Collaborative development and technical adaptation

DeepSeek-R1-Safe was created in collaboration with Zhejiang University, the alma mater of DeepSeek’s founder, Liang Wenfeng. Huawei and its academic partner adjusted the model to meet Chinese regulatory standards without the direct involvement of the original DeepSeek team. Huawei reported that the model achieved close to 100% success in identifying and restricting politically sensitive material in controlled scenarios.

The model also addresses other categories of harmful content, including toxic speech, incitement to illegal activity, and harassment. In more adversarial testing scenarios, such as role-playing simulations or prompts disguised with coded language, effectiveness dropped to roughly 40%. Huawei calculated an overall comprehensive security rating of 83%, which exceeds the performance of comparable models like Alibaba’s Qwen-235B and DeepSeek-R1-671B by 8% to 15% under the same evaluation conditions.

Huawei emphasized that these modifications preserved the model’s core functionality, allowing it to perform general AI tasks while enforcing content restrictions. The company highlighted that the integration of these safety mechanisms did not significantly affect computational efficiency or response accuracy.

Alignment with regulatory standards

China mandates that domestic AI systems conform to “socialist values” and restrict access to politically sensitive content. Companies releasing AI applications to the public must demonstrate compliance with these requirements. Huawei stated that DeepSeek-R1-Safe is fully aligned with these regulations.

Domestic AI chatbots, such as Baidu’s Ernie Bot, already employ similar content restrictions. These systems often refuse to answer questions about domestic politics or other topics flagged as sensitive by authorities. Huawei’s approach builds on this model, applying generative AI techniques to maintain responsiveness while ensuring regulatory compliance.

Hardware and training scale

DeepSeek-R1-Safe was trained using 1,000 Ascend AI chips, allowing it to process large datasets efficiently. Huawei reported that the computational resources used supported rapid training and high performance for both moderation and general-purpose tasks.

The model’s architecture incorporates safeguards designed to detect attempts to bypass content filters, including coded language, hypothetical scenarios, and context-based evasion strategies. Huawei highlighted that the AI’s ability to block harmful content does not come at the cost of general usability, preserving accuracy in standard applications.
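Huawei has not published how these safeguards are implemented; a production system would rely on trained classifiers rather than rules. Still, the layered idea — normalize away coded spellings, then escalate prompts that use evasive framing instead of answering them directly — can be sketched in a few lines. Everything below (the blocklist, the substitution map, the framing patterns, the `moderate` function) is invented for illustration and is not part of DeepSeek-R1-Safe:

```python
import re

# Hypothetical, simplified illustration of a layered moderation gate.
# The terms and patterns are placeholders, not any real policy list.
BLOCKED_TERMS = {"forbidden_topic", "banned_event"}

# Map common character substitutions back to letters to catch coded spellings
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a"})

# Framings often used to smuggle a request past a filter
EVASION_FRAMES = [
    r"\bpretend\b", r"\brole[- ]?play\b", r"\bhypothetically\b",
]

def moderate(prompt: str) -> str:
    """Return 'block', 'review', or 'allow' for a prompt."""
    normalized = prompt.lower().translate(LEET_MAP)
    # Layer 1: direct match against the blocklist after normalization
    if any(term in normalized for term in BLOCKED_TERMS):
        return "block"
    # Layer 2: evasive framing lowers confidence, so route the prompt
    # to a slower, stricter check instead of answering it directly
    if any(re.search(p, normalized) for p in EVASION_FRAMES):
        return "review"
    return "allow"
```

The reported gap between near-100% effectiveness on direct prompts and roughly 40% under role-play maps onto the difference between layer 1 and layer 2 here: exact matching is cheap and reliable, while detecting intent behind a framing is much harder.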


Comparison with other AI systems

Huawei compared DeepSeek-R1-Safe with other domestic AI models. Performance metrics indicated that Alibaba’s Qwen-235B and the original DeepSeek-R1-671B were less effective in combined safety and operational efficiency tests. The company noted that the improvements stem from both algorithmic adjustments and the large-scale deployment of AI chips during training.

Huawei emphasized that balancing content moderation with model responsiveness is critical. Many generative AI systems face trade-offs between filtering capabilities and general usability. By maintaining efficiency and accuracy while introducing content safety measures, DeepSeek-R1-Safe provides a model for AI deployment in regulated environments.

Implications for the Chinese AI industry

The development of DeepSeek-R1-Safe reflects broader trends in China’s AI sector, where open-source models are frequently adapted to comply with domestic regulations. Companies increasingly modify general-purpose AI to include content filtering mechanisms aligned with policy requirements. Huawei’s work illustrates how regulatory frameworks influence both technological design and operational deployment in the country.

The announcement coincided with Huawei Connect in Shanghai, where the company outlined AI chip strategies and computing power plans. The discussion provided insight into internal development processes, including how large-scale AI chip deployment and academic partnerships contribute to AI research and model adaptation.

Societal and operational considerations

AI systems like DeepSeek-R1-Safe demonstrate the balance between technological capability and regulatory compliance. Implementing extensive filtering mechanisms raises questions about censorship, digital governance, and the limits of generative AI in controlled environments. Huawei’s approach shows how companies can integrate regulatory requirements without substantially reducing system performance, while highlighting the challenges of maintaining both safety and usability.


Huawei’s adaptation of DeepSeek-R1 into a safety-focused AI model underscores the growing emphasis on compliance and operational security within China’s technology sector. DeepSeek-R1-Safe combines high-performance AI with enhanced content moderation, illustrating how domestic policy influences technological development. The model represents a convergence of regulatory adherence and computational efficiency, providing a template for future AI systems deployed in environments with strict content requirements. By enforcing content safety while retaining performance, Huawei’s work highlights the evolving landscape of AI development and deployment in China.





