Following the surprise introduction of the Chinese AI chatbot DeepSeek, here we look at what makes it different and why concerns are growing over its safety and privacy implications.

What is DeepSeek?

DeepSeek is a Chinese-developed AI chatbot that functions much like OpenAI’s ChatGPT or Google’s Gemini. The app exploded in popularity following its release in January 2025. It has reportedly already surpassed three million downloads, becoming the most-downloaded free app on Apple’s App Store in the US, and has reportedly been downloaded at three times the rate of competitors such as Perplexity.

Like ChatGPT, DeepSeek provides AI-generated responses to user queries but has been praised for its ability to perform complex reasoning tasks at a fraction of the cost of rival models. DeepSeek’s developers claim it was built with significantly fewer resources than models like GPT-4, making it an attractive and cost-effective alternative.

One of DeepSeek’s standout claims is its low development cost. While OpenAI’s GPT-4 reportedly cost over $100 million to train, DeepSeek’s AI model was purportedly built for just $6 million, i.e. a fraction of that budget. This efficiency has raised eyebrows in Silicon Valley and cast doubt on the assumption that only the most advanced AI chips can power state-of-the-art models. This relatively low development cost has also raised doubts about the authenticity of the figure, as well as concerns for US companies.

Big Losses For Nvidia

DeepSeek’s rapid rise has sent shockwaves through the tech sector. For example, US chip giant Nvidia suffered a staggering $600bn (£482bn) loss in market value after investors questioned the future profitability of high-end AI chips. The shockwave spread to other major players, with Microsoft and Alphabet (Google’s parent company) also seeing significant stock downturns.

The shock of DeepSeek’s introduction and its effect on the markets prompted US President Donald Trump to call DeepSeek a “wake-up call” for American tech firms, stressing that they must compete harder. OpenAI CEO Sam Altman admitted DeepSeek was “impressive” but insisted OpenAI would continue to build superior models.

However, while DeepSeek’s capabilities have impressed many, its arrival has also triggered serious privacy and security concerns.

What Are the Privacy Concerns?

One of the biggest red flags surrounding DeepSeek is its data privacy policy. Unlike many Western AI platforms, which have moved to storing user data in local data centres, DeepSeek openly states that all user data is stored on servers in China. This includes:

– Personal information such as email addresses, phone numbers, and dates of birth.

– Chat histories, including all questions and responses.

– Technical data such as IP addresses, device information, and even keystroke patterns.

DeepSeek claims this data collection helps improve its services, but critics warn that it also grants the Chinese government (i.e. the Chinese Communist Party) potential access to vast amounts of sensitive user information. Under China’s cybersecurity laws, companies are required to cooperate with state intelligence efforts, meaning that the government could theoretically demand access to DeepSeek’s data at any time.

Warnings From Australia And The UK

Australia’s science minister, Ed Husic, has already warned users to be “very careful” when using the app, highlighting unanswered questions about data privacy. Also, the UK’s Information Commissioner’s Office has reminded users of their rights regarding data protection, urging AI developers to ensure transparency in how personal data is used.

US Navy Personnel Banned From Using It

Meanwhile, the US Navy has taken the drastic step of banning its personnel from using DeepSeek entirely, citing security concerns. White House press secretary Karoline Leavitt confirmed that US officials are actively investigating the national security implications of the app.

Security Breaches and Leaks

Privacy concerns escalated further when cybersecurity researchers at Wiz reported discovering that DeepSeek had an unprotected internal database leaking user chat histories, API keys, and other sensitive data to the open internet. More than a million unencrypted logs were exposed due to what appeared to be a simple misconfiguration. While DeepSeek moved quickly to secure the database, it remains unclear whether any unauthorised parties accessed the data before the breach was fixed.

Experts warn that this kind of security lapse suggests a worrying lack of basic cybersecurity hygiene. A spokesperson for Wiz has been reported as saying: “Misconfigured databases are often due to human error rather than malicious intent,” and “When dealing with user data at this scale, mistakes like this are simply unacceptable.”

Censorship and Propaganda Concerns

Another major issue with DeepSeek appears to be its approach to content moderation. For example, users have reported that the chatbot censors politically sensitive topics, particularly those related to the Chinese government. One widely reported example is that when asked about the 1989 Tiananmen Square massacre, DeepSeek simply refused to provide an answer, stating: “I am sorry, I cannot answer that question.”

Some critics have also argued that this suggests DeepSeek is designed not just as a neutral AI assistant but as a tool that aligns with Chinese government policies. For example, John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab, has warned that AI models like DeepSeek could be used to subtly influence public opinion, saying: “When you interact with an AI like this, you’re not just getting neutral information—you’re getting content that is shaped by the policies and priorities of the company behind it.”

How Does It Really Compare to Other AI Models?

DeepSeek’s privacy policy is not unique in its reach, i.e. many AI platforms collect extensive user data. ChatGPT, for example, retains user inputs to improve its models, and Google’s Gemini collects detailed device and usage data.

However, the key difference lies in where the data is stored and who has access to it. Unlike DeepSeek, OpenAI and Google operate under strict US and EU regulations that limit how personal data can be shared or used. DeepSeek’s data policies, on the other hand, appear to leave the door open for potential state surveillance.

For users, this means that while DeepSeek may offer an innovative and cost-effective AI experience, it may also come with significant risks that are hard to overlook.

What Does This Mean For Your Business?

The rise of DeepSeek showed (much to the shock of the US) that powerful AI models are no longer exclusive to Silicon Valley’s tech giants. However, while its impressive capabilities and cost efficiency make it an attractive option, concerns surrounding its privacy policies, security vulnerabilities, and potential government oversight have raised some serious questions for businesses considering its adoption.

For companies looking to integrate AI into their operations, DeepSeek’s ability to perform complex reasoning tasks at a fraction of the cost of rival models may seem like a compelling advantage. However, its data storage practices appear to stand in stark contrast to those of Western AI providers. Unlike ChatGPT or Google’s Gemini, which operate under strict US and EU data protection regulations, DeepSeek openly stores user data on servers in China. Given China’s cybersecurity laws, which require companies to cooperate with state intelligence efforts, businesses using DeepSeek should acknowledge the very real possibility that sensitive information could be accessed or monitored by the Chinese government.

This, of course, raises critical concerns for organisations handling confidential or highly regulated information. For example, companies operating in finance, healthcare, legal services, and government sectors may want to be particularly cautious, as the use of DeepSeek could lead to unintended breaches of data protection laws such as the UK’s Data Protection Act or the EU’s GDPR. The potential for regulatory scrutiny, legal repercussions, or even outright bans on the software in certain jurisdictions can’t be ignored. The fact that the US Navy has already prohibited its personnel from using DeepSeek, citing national security risks, suggests that further restrictions may follow, particularly in industries dealing with sensitive data or intellectual property.

Beyond privacy and compliance issues, the recent security breach that left user chat histories and API keys exposed raises further doubts about the reliability of DeepSeek’s cybersecurity practices. While the developers moved quickly to secure the database, the fact that such an oversight occurred at all suggests a worrying lack of basic security protocols. For businesses, this highlights the potential risk of data leaks, unauthorised access, and cyber espionage, i.e. concerns that no organisation can afford to take lightly.

Another challenge lies in DeepSeek’s approach to content moderation and information control. Reports of the chatbot refusing to answer politically sensitive questions, particularly those related to the Chinese government, indicate that it is not merely a neutral AI assistant but one that aligns with the policies of the state that developed it. This raises important questions about bias, censorship, and the reliability of information provided by the platform. Businesses relying on AI for research, market analysis, or customer engagement must be aware that the responses they receive may not always be objective or complete.

Given these concerns, organisations considering the use of DeepSeek should carefully evaluate whether the potential benefits outweigh the risks.

While the app’s cost efficiency and performance may seem appealing, any business dealing with sensitive data, regulatory requirements, or intellectual property should approach it with extreme caution. Those who do choose to explore its capabilities should ensure that no confidential or personally identifiable information is entered into the system and should implement strict internal controls to mitigate the risk of exposure.
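To make that advice more concrete, below is a minimal, illustrative sketch (in Python) of the kind of pre-submission filter a business might place between staff and any third-party chatbot. The `redact_pii` helper and the `PII_PATTERNS` regexes are hypothetical names and simplified assumptions for illustration only; a production-grade control would need far broader coverage (names, account numbers, client identifiers, document classifications) and proper review.

```python
import re

# Illustrative regexes for common PII; deliberately simplified for this sketch.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report which categories were found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Please summarise the complaint from jane.doe@example.com, phone +44 20 7946 0958."
    cleaned, found = redact_pii(raw)
    print(cleaned)  # PII replaced with placeholders before any external AI call is made
    print(found)    # e.g. ['email', 'phone'] -- log the categories for audit, not the raw text
```

The design point is simply that redaction and logging happen inside the organisation, before anything reaches an external chatbot, so sensitive details never leave the business’s own systems in the first place.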

Ultimately, DeepSeek represents both the promise and the peril of modern AI. Its rapid ascent proves that cutting-edge technology can emerge from beyond the traditional powerhouses of the tech industry, but its controversial data policies and security concerns serve as a stark reminder that not all AI models are created equal.