
ChatGPT vs. Deepseek: Which is Better for Privacy?

There’s no doubt that AI-powered tools have revolutionized how we work.

They’ve even changed how we see the world. With the release of ChatGPT, AI became a subject discussed at the dinner table.

Business leaders and individuals alike are experimenting with AI apps and rushing to determine how they can integrate them into their organizations and their workflows. Nevertheless, in this rush to keep up with the times, we mustn’t let online privacy become an afterthought.

Competition Leads to Better Privacy

ChatGPT may have set the bar, but contenders like Deepseek have proven that anything can happen in the AI space, and it can happen fast.

Deepseek astonished the tech giants with its latest large language models (LLMs). Models such as DeepSeek-R1 are comparable to OpenAI’s flagship GPT-4o and Anthropic’s Claude 3.5 Sonnet.

This competition is not only a pivotal part of the story that will lead to better and better AI; it also means more choice for users. Now, privacy-focused individuals can compare the privacy policies of AI apps and make informed decisions about how they use them.

In terms of privacy, ChatGPT and Deepseek have many similarities on the surface. However, as you dig deeper it becomes clear that ChatGPT has put more effort into creating policies and features that give users some level of control over their data.

How ChatGPT and Deepseek are Alike

Before getting into how ChatGPT and Deepseek stack up against each other, it makes sense to start with their similarities.

Both apps are chatbots. Using one is a lot like messaging friends on Meta’s Messenger, only in this case, you’re chatting with AI. Once users type in their questions or explain what they’re looking for, chatbots generate incredibly polished and detailed answers.

Like most apps that require sign-up, both collect personal data such as your name, email, and phone number. Additionally, technical data such as your IP address and usage data such as feature usage are stored.

ChatGPT collects data from user prompts, uploaded files, and conversations. Deepseek collects all the same data in addition to user feedback.

Thus, in terms of input, ChatGPT and Deepseek are nearly identical. Similar chat data goes in; the crucial questions are how that input is stored and how it’s potentially used.

Where Your Data Goes

Deepseek’s privacy policy reveals that all user data is stored on secure servers in China. ChatGPT’s developer, OpenAI, is a US-based company, and it stores user data on US servers.

This means that Deepseek and ChatGPT follow vastly different laws set by the countries they operate in.

Deepseek is subject to the Cybersecurity Law of China (2017) and the Data Security Law (2021). These laws grant the Chinese government access to a wide array of user data.

Data that can be requested and sent to Chinese authorities includes sensitive user inputs, conversations, and even login credentials.

For US citizens and other people who live outside of China, this might be concerning because they could be subject to the surveillance laws of a foreign country.

Controlling Your Data

Deepseek collects more user data than ChatGPT overall.

Any chats you have in Deepseek can be used to train its AI. The app also includes third-party tracking. It collects behavioral analytics, even outside of Deepseek.

To top it off, Deepseek shares data with advertisers, corporate affiliates, and legal entities, and there’s no way for users to opt out.

The data ChatGPT collects isn’t as extensive. Most importantly, it gives people control over their data: users can delete the data collected about them, and it’s possible to opt out of having their data used to train its AI.

Compliance with the Laws of Foreign Countries

ChatGPT is more transparent about which non-US laws it complies with. It explicitly states that it is GDPR and CCPA compliant. The GDPR is the EU’s data protection law, and the CCPA is the California Consumer Privacy Act.

Deepseek lacks transparency about compliance. It certainly follows China’s privacy laws, but it can’t be assumed that laws in the US and Europe are followed to the same extent.

Conclusion

By default, AI apps collect user data fairly aggressively. However, Deepseek’s privacy policy reveals that it’s even hungrier for your data than ChatGPT. Deepseek is a ground-breaking AI app, but it has a long way to go in terms of considering the online privacy of its users.

Two notable differences between Deepseek and ChatGPT are the level of control the user has over their data as well as transparency.

ChatGPT offers opportunities to opt out of data collection. Plus, it provides options for removing your data. With Deepseek, the user isn’t given tools to manage their data, and it isn’t clear whether Deepseek complies with laws such as the GDPR and CCPA.

Is Your Data Safe with AI App Deepseek?

TikTok is a popular app that’s often been cited as a potential threat to the online privacy of Americans. However, there’s a new kid on the block that privacy-focused people should be even more careful with.

It’s a new AI app called Deepseek.

Deepseek is breaking new ground in AI. People are rushing to try it due to the hype on social media and in the headlines. However, privacy experts advise that people should exercise caution when chatting with Deepseek’s AI.

How Deepseek Made Waves

Recently, Deepseek has exploded in popularity. At the time of this writing, it sits at the very top of the Google Play store charts, right alongside rival ChatGPT.

Deepseek’s rise was so dramatic that Donald Trump called it a “wake-up call” for US companies that have invested heavily in AI, such as Microsoft, OpenAI, and Nvidia. It sent Nvidia’s stock price plummeting nearly 17% in late January, the largest single-day market-value loss by any company in stock market history.

The message the market sent was clear: China is a force to be reckoned with in AI. And moving forward, US competitors won’t underestimate companies like Deepseek and Alibaba again.

Investors and businesspeople alike are astonished that Deepseek achieved performance comparable to ChatGPT’s with a reported $5.6M in training costs, while OpenAI reportedly spent over $3B to develop GPT-4.

Diving into Deepseek: How it Works

Deepseek was founded in May 2023, and in 2025 it reached the limelight. This surge in popularity was largely due to its newest large language models, DeepSeek-V3 and DeepSeek-R1, which are revolutionary in their efficiency and cost-effectiveness.

DeepSeek-V3 is a general-purpose model that’s trained to answer the user’s questions. DeepSeek-R1 can question itself. It’s geared towards advanced reasoning tasks and deep thinking.

If you’ve ever used ChatGPT you know what to expect in terms of how you interact with Deepseek’s AI. It’s essentially a chat interface not much different from Meta’s Messenger, only with AI on the other end generating detailed and polished replies. Simply type in your question or specify what information you’re looking for.

The Data Deepseek Collects About You

So, what exactly does Deepseek collect about its users?

The personal data you share during sign-up as well as “text or audio input, prompt, uploaded files, feedback, chat history,” may be collected and stored by the company.

Additionally, when you contact Deepseek it can “collect the information you send us, such as proof of identity or age, feedback or inquiries about your use of the Service or information about possible violations of our Terms of Service (our “Terms”) or other policies.”

Certain data is collected automatically. Technical information such as your IP address and operating system is stored. Deepseek also maintains a list of the devices you use to access the app so that all of them are associated with your account. This is common practice, as it allows tech companies to deal with people who abuse their services.

The next part is a little strange at first glance. The app automatically gathers “keystroke patterns or rhythms.” This could be a way to identify bots so they can potentially be blocked from using Deepseek’s services.
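Deepseek hasn’t published how it uses keystroke rhythms, but the bot-detection idea is a well-known behavioral-biometrics technique called keystroke dynamics. As a purely illustrative sketch (every name and threshold below is invented, not anything from Deepseek), typing whose inter-key intervals are unnaturally uniform can be flagged, since scripts tend to emit keys at near-constant rates while humans type with irregular rhythm:

```python
from statistics import mean, stdev

def looks_automated(key_times, cv_threshold=0.05):
    """Flag typing whose inter-key intervals are suspiciously uniform.

    key_times: timestamps (in seconds) of successive keystrokes.
    A tiny coefficient of variation (stdev / mean of the gaps)
    suggests machine-generated input rather than human typing.
    """
    intervals = [b - a for a, b in zip(key_times, key_times[1:])]
    if len(intervals) < 2:
        return False  # not enough data to judge
    m = mean(intervals)
    if m == 0:
        return True  # instantaneous "typing" is not human
    cv = stdev(intervals) / m  # coefficient of variation
    return cv < cv_threshold

# A script firing a key every 10 ms looks automated;
# jittery human typing does not.
bot = [i * 0.010 for i in range(20)]
human = [0.0, 0.12, 0.31, 0.38, 0.55, 0.81, 0.90, 1.20]
```

Real systems combine many more features (key hold times, digraph latencies) and trained models, but the principle is the same: rhythm itself is identifying data.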

To China or Not to China

Some people don’t see a big difference between Deepseek and other AI apps like ChatGPT in terms of privacy. Both Deepseek and ChatGPT may store what’s said during chat sessions.

The key difference to consider is that ChatGPT is operated by a US company while Deepseek is based in China. So, the data that’s collected about you by Deepseek is stored on secure servers in China.

Even if one concludes that ChatGPT and Deepseek operate similarly as companies, it isn’t the companies themselves that are in question. The political landscape is vastly different in China, and the governments of the US and China have values that couldn’t be more distinct.

As an American, when you use ChatGPT your data remains in America. When you use Deepseek, your data is collected by a foreign company that must abide by the laws of its own country. Unlikely or not, many Americans aren’t comfortable with the idea of a foreign government probing companies like Deepseek for information about US citizens.

Information to Avoid Sharing with Deepseek

If you decide to try out Deepseek, proceed with caution. Avoid sharing information about yourself or about your loved ones that you consider private. Think twice about sharing media such as audio or images that reveal information that you wouldn’t be comfortable with distributing online.

A good rule of thumb is if you wouldn’t post it on Facebook or Instagram you shouldn’t share it with Deepseek. Keep in mind that anything you type in the app could theoretically be stored on servers in China for years.

Conclusion

Hot apps like Deepseek often incite a feeling of missing out. When it seems like everybody knows all about it and you haven’t even created an account, it’s normal to want to catch up to the crowd.

All popular AI apps have privacy implications to be aware of, but since Deepseek is based outside the US, there’s a whole other layer to consider.

Metaverse Brings Privacy Risks with Its Startling Possibilities

The great power of the approaching metaverse comes with great responsibility. Facebook has released its plans to responsibly build the metaverse with its partners.

They’re considering ways they can minimize the amount of user data that is needed to accomplish founder Mark Zuckerberg’s vision. Their aim is to build tech to “enable privacy-protective data uses.” In theory, users will be able to see how their data is used, and they’ll be able to control it.

However, what sounds good on paper and the real-world implementation are two very different things.

Facts are facts: Facebook is the least trusted social media app in terms of privacy. Nearly one-third of US Facebook users have some reservations about how the platform protects their privacy and data.

Trust is very hard to gain, and it can be lost in an instant. Facebook’s journey to reassure users that their metaverse is safe will be met with healthy skepticism.

The inherent risks of the metaverse could prove to be just as vast as its infinite possibilities. Before we tackle that issue, let’s look at what the metaverse is.

Wait, What’s the Metaverse?

The metaverse is the vision of virtual reality you’ve seen in movies for decades.

Imagine walking down a bustling street in Tokyo from the comfort of your condo in Miami. You look up and see a soaring building covered with brightly lit signs.

As you lower your gaze you notice a small souvenir shop with a large toy robot displayed by the window. You walk right in and purchase the toy.

Now, this isn’t a video game. You used real digital currency to get it. Soon after you see a confirmation message that tells you they’ve received your order. The shop is now preparing it for shipment to your physical address in Miami.

This is just one example of how the metaverse will allow virtual worlds and reality to collide.

Biometric Data and Brainwaves

Tech companies can tell us what they will and won’t do, but to really understand the privacy implications of the metaverse it’s best to know what the hardware is capable of.

Virtual reality headsets are able to make use of biometric data. The user’s environment, physical movement, and dimensions can all be tracked.

Naturally, headsets and eyeglasses are ideal for tracking eye movements. Moreover, it’s possible to track the physiological reaction to experiences found within the metaverse, such as heart rate.

This is where it gets just as astonishing as it is alarming. Soon, brain-computer interfaces (BCIs) will allow us to access the metaverse through smart wearables such as headphones, watches, and glasses.

BCIs collect brain signals, analyze them, and then translate them into commands that smart devices can understand and execute.
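That collect-analyze-translate loop can be sketched in miniature. The example below is entirely hypothetical (real BCIs filter EEG signals and run trained classifiers; the function names and threshold here are invented for illustration), but it shows the shape of the pipeline: raw samples in, a device command out:

```python
def band_power(samples):
    """Crude 'analyze' step: average energy of a signal window."""
    return sum(s * s for s in samples) / len(samples)

def translate(samples, threshold=1.0):
    """Translate a window of brain-signal samples into a device command.

    A real BCI would band-pass filter the signal, extract spectral
    features, and classify them with a trained model; a single
    energy threshold stands in for that whole pipeline here.
    """
    return "select" if band_power(samples) > threshold else "idle"

calm = [0.1, -0.2, 0.15, -0.1]    # low energy: no action
focused = [1.5, -1.8, 2.0, -1.6]  # high energy: issue a command
```

The privacy point follows directly from the shape of this pipeline: whoever runs the “analyze” stage necessarily holds the raw signal, not just the resulting command.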

These BCIs will have access to one’s very thoughts, so malicious users in the metaverse may be able to crack the code and gain access to our brain activity if robust security measures aren’t in place.

Additionally, users will have to trust that all parties that have access to BCI data will use it solely to control the functions of their software, and not to harvest our thoughts.

Metaverse Privacy by Design

Software developers responsible for building the metaverse are aware of the threats, and they’ll act accordingly. Privacy by design is to be standard practice.

In other words, software will be built with user privacy in mind from the start, rather than adding it later to appease users and lawmakers.

Existing laws such as the General Data Protection Regulation (GDPR) have driven app makers to be transparent about when private data is being accessed. For example, Google Glass displays icons and plays audio cues to let users know when they’re being recorded.

Advertisers Want Your Data

Sometimes we forget that much of the technology we use every day is ad supported. It makes social media, freemium mobile games, and access to professional journalism all possible.

Consequently, user data is needed to show us ads with products we’re actually interested in.

The desires of users and advertisers clash when it comes to privacy and data. Of course, users want strong privacy. And advertisers need access to information about consumers to run their ad campaigns efficiently.

Realistically, the needs of advertisers often edge out the wants of users. Whoever pays the bills for the metaverse to run holds the real power.

Consumers have their wins too, such as Google’s decision to phase out third-party cookies that track user activity.

However, historically advertisers have always been granted tools to benefit from user data. The methods of collecting and sharing that data simply change. Thus, it’s difficult to believe that a company like Facebook will take user privacy to such an extreme that it turns off advertisers.

Implications of Surveillance and AI

“The company that builds the metaverse will actually listen in on every conversation and watch every person,” said former Google and Apple exec Kai-Fu Lee.

“That on the one hand can make the experience very exciting because it can see what makes you happy and give you more of that,” he adds. “But then what is the notion of privacy in a metaverse?”

Whether a company like Facebook collects user data or not, the fact that it must literally capture your every movement for the VR experience to work is alarming.

AI will also play a huge role in making immersive worlds possible.

You’ve seen this concept in video games, when you interact with NPCs (non-player characters) that have a mind of their own. Since reality and virtual reality come together in the metaverse, the stakes are much higher.

U.N. leaders have warned that AI could pose a threat to human rights.

“AI technologies can have negative, even catastrophic effects,” said U.N. High Commissioner for Human Rights Michelle Bachelet.

Conclusion

The metaverse is a brave new world, full of amazing possibilities and dangers.

It could be Facebook, but there’s no telling which corporation will succeed at bringing the metaverse to our homes.

Every time we’re in the metaverse, we’ll partially check out of reality. Yet everything we do and say will be monitored, and some of our actions will affect our lives in the physical world.

Ultimately, creators of the metaverse need to design a world that serves people, rather than making us serve the metaverse.