Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
Security Issues, Chinese Ownership Drive Concerns

U.S. federal agencies are blocking employees from using Chinese chatbot DeepSeek over security and privacy concerns.
Organizations that have reportedly restricted use of the Chinese startup’s product include NASA, the Pentagon, Congress and the Navy.
The AI model’s potential to share data with the Chinese government is sparking concern, as are security problems with the app (see: DeepSeek AI Models Vulnerable to Jailbreaking).
DeepSeek is under review “and is currently unauthorized for official House use,” the House of Representatives’ chief administrative officer said in a notice to congressional offices, Axios reported. “Threat actors are already exploiting DeepSeek to deliver malicious software and infect devices,” the notice warned.
As of Friday, NASA personnel are not permitted to “share or upload agency data on DeepSeek products or services” and are not authorized to “access DeepSeek via NASA devices and agency-managed network connections,” CNBC reported.
The massive number of downloads of the DeepSeek app means that thousands, even millions, of users are experimenting with it and uploading potentially sensitive information, Forrester said. The Pentagon’s ban came after some of its employees had been using the chatbot for several days.
DeepSeek’s privacy policy shows that it collects “text or audio input, prompt, uploaded files, feedback, chat history or other content,” using it for training purposes. Its servers are based in China, where all user data is stored. Companies in the authoritarian country are required to share data with intelligence agencies on request.
This means that the Chinese government could potentially use DeepSeek’s AI models to spy on American citizens, acquire proprietary secrets and conduct influence campaigns, said Melissa Ruzzi, director of artificial intelligence at security company AppOmni. The volume of AI-driven attacks may also increase. DeepSeek is vulnerable to jailbreaking, allowing bad actors to bypass restrictions and generate malicious outputs that can then be used in other attacks, she said.
“There is no silver bullet to securing DeepSeek,” she told Information Security Media Group, adding that “sensitive data once fed to an AI tool cannot be clawed back.”
The company’s AI platform enforces guardrails on topics sensitive to China, but its protections against data leaks and hallucinations are notably weak, said Elad Schulman, CEO of generative AI security company Lasso.
David Brauchler, technical director and head of AI and ML security at NCC Group, said that DeepSeek contains biases that may run contrary to the objectives of U.S. military personnel. As with any model, malicious patterns can be embedded into the end product that induce unwanted behavior when certain triggers are met. “For example, questions about U.S. military strategy could trigger the model to respond with poor-quality suggestions,” he said.
“We also observed broader security risks beyond its origin and supply chain, including suspicious behaviors that could pose a threat to organizations. Given these findings, we strongly advise against using these models in critical workflows or sharing any sensitive information with them,” he told ISMG.
Government agencies across the globe have already opened investigations into DeepSeek, including the Italian data protection authority and the Irish Data Protection Commission, which are probing DeepSeek’s data storage and processing practices. Data protection authorities in France, Belgium and South Korea have also initiated inquiries.