
Censorship #114

Open
najanai opened this issue Jan 28, 2025 · 21 comments

Comments


najanai commented Jan 28, 2025

Censorship of history makes this the worst AI of them all. All countries have dirt in their history; China's shouldn't be ignored either. You'll never get accurate and useful information from an AI that ignores or avoids historical facts.


1280px commented Jan 28, 2025

Each corporate model has some kind of restrictions dictated by the place (or its politics) where it was made; chances are they cannot do anything about it even if they wanted to...

@Arzumify

If you have a problem with it, why not fork it and make your own uncensored R1 model? It's open source for a reason. You sound like those people who reject kernel contributions from Russia because of politics. Politics should stay out of open source, precisely because it's open source: if you don't like it, make your own.


najanai commented Jan 28, 2025

Each corporate model has some kind of restrictions dictated by the place (or its politics) where it was made; chances are they cannot do anything about it even if they wanted to...

Obviously, this is the Chinese government we're talking about. My problem is that this is getting praised as the number one AI out there. Yet it can't be very accurate if it's not going to be truthful, can it? It's the best-advertised but worst AI there is.

If you have a problem with it, why not fork it and make your own uncensored R1 model? It's open source for a reason. You sound like those people who reject kernel contributions from Russia because of politics. Politics should stay out of open source, precisely because it's open source: if you don't like it, make your own.

Politics is history and history is fact. This AI model has ALL THE FACTS about history except anything that paints China in a bad light. If it's not going to tell you ALL the facts, it's not an accurate AI model. Sure, I could tweak it; so easy anyone could do it, right? Why would I even try when there are better alternatives out there?


jasursadikov commented Jan 28, 2025

Just because it is Chinese 🇨🇳, it has censorship, right?
Funnily enough, OpenAI and Gemini have censorship as well, but not a word is written about that on Wikipedia, while DeepSeek already has a section about it.
But let's be honest: you do not ask an AI provocative questions every single day, and most users do not care.

Depending on your country and language, answers will vary; I think if you ask different AI models in different languages, the answers will vary as well.


najanai commented Jan 28, 2025

People defending a dictatorship's censorship instead of agreeing that it's bad. Clown world.

@jasursadikov

GitHub is not a place for politics or for displaying your political position. There are many other sensitive topics in Western societies as well which are being pushed through other AI models.
Remember that the policies are not set by DeepSeek; they are just following the rules they have in China, and they possibly hold other opinions as well, but this is not the place where they would expose that.


najanai commented Jan 28, 2025

GitHub is not a place for politics or for displaying your political position. There are many other sensitive topics in Western societies as well which are being pushed through other AI models. Remember that the policies are not set by DeepSeek; they are just following the rules they have in China, and they possibly hold other opinions as well, but this is not the place where they would expose that.

People are already displaying their political opinions by disagreeing with what I say (and giving thumbs down like you did).

I am well aware of the limitations set for the developers of DeepSeek, and I don't criticize them, but that doesn't justify granting this AI the title of best in the world if it intentionally censors facts.

How much of a workaround would it be to modify the model? Hopefully someone does it and removes all restrictions; it would be interesting to see if that's possible. I don't have the technical expertise to do that, yet.

@jasursadikov

Please understand the difference between source code and a running service. You can clone this, train it, and make your own without any censorship.
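
Roughly, running one of the openly published checkpoints locally looks like the sketch below (an illustration only, assuming the Hugging Face transformers library is installed; the model id is an example and is not something specified in this thread):

```python
# Minimal sketch: load an open distill checkpoint locally with Hugging Face
# transformers. The model id below is an example checkpoint, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat-style prompt; when run locally there is no hosted-service
# moderation layer between you and the raw model output.
messages = [{"role": "user", "content": "Give a brief overview of 20th-century Chinese history."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that this only removes any service-side filtering; whatever behaviour is baked into the weights themselves would still require fine-tuning to change.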

Funnily enough, other companies do not publish their AI models that often.
If you want to check censorship, try asking ChatGPT to draw "black people committing crimes" or similar things and open an issue on their GitHub; let's see what happens... By the way, they do not have their latest model published despite the name "OpenAI".

But well, if you think that your 👎 moves a lot of air in politics, you're free to 👎 every single comment on the planet. Good luck! It probably won't do anything at all except show that you're politically woke. Most people do not care.


najanai commented Jan 28, 2025

Criticising one wrong to justify another, I see how it is. There's a difference between historical facts and sensitive requests. But if you really want to, you can use Stable Diffusion to create those images, or why not just use the real ones online.

@jasursadikov

You neither criticize both nor keep your opinion to yourself. As I wrote above:
NOBODY CARES.


FarMounTAI commented Jan 28, 2025

You challenge AI for not disclosing death toll figures, yet remain silent about how the American GPT equally refuses to provide tutorials on manufacturing ricin. Isn't the production process precisely the "factual information" you claim to prioritize? Then why does it choose not to answer?
When AI safety mechanisms are maliciously misrepresented as "political censorship," this fully exposes your fundamental misunderstanding of machine learning ethics – or is your true aim not to seek truth, but rather to fabricate materials that splinter Chinese society?

You repeatedly emphasize "censorship" and "concealment," yet deliberately ignore a fundamental reality: all nations establish cybersecurity firewalls in accordance with their laws.
When the FBI demands Apple unlock terrorists' phones, or when the EU removes "false information" under GDPR, you label them as "regulation" and "rule of law." Yet when the same standards apply to China, you stigmatize them as "oppression."

This precisely exposes the lingering colonialist double standards at play.

Perhaps there’s no need for me to go on like this. Chinese people understand propriety—they know what should and shouldn’t be said. Chinese people have never needed outsiders to lecture them!


najanai commented Jan 28, 2025

@FarMounTAI Copying and pasting the same comment from other threads, are you a bot?


najanai commented Jan 28, 2025

You neither criticize both nor keep your opinion to yourself. As I wrote above: NOBODY CARES.

You do care otherwise you wouldn't be commenting.


jasursadikov commented Jan 28, 2025

I was answering your question, not stating my political opinion. @najanai


najanai commented Jan 28, 2025

You made your political opinion clear.

@FarMounTAI

@FarMounTAI Copying and pasting the same comment from other threads, are you a bot?

Haha… By all means, treat me as a robot — though I suspect even robots might blush at some of humanity's historical record-keeping practices. Shall we compare archival transparency metrics while we're at it? 😊


najanai commented Jan 28, 2025

[image attachment]


najanai commented Jan 28, 2025

@FarMounTAI Copying and pasting the same comment from other threads, are you a bot?

Haha… By all means, treat me as a robot — though I suspect even robots might blush at some of humanity's historical record-keeping practices. Shall we compare archival transparency metrics while we're at it? 😊

No no no... "Let's talk about something else." Hahahah

@FarMounTAI

Oh wow~ How heartwarming to witness such a global perspective! Certain nations still displaying Indigenous scalps in museums somehow remain so invested in lecturing an ancient Eastern civilization on how to teach history~ Our AI, unlike some walking Wikipedia colanders plagued with convenient memory syndrome, prioritizes historical accountability. Speaking of which, have you checked those 'classified' boxes in your own national archives? Shall I tag @wikileaks for you? 😊


FarMounTAI commented Jan 28, 2025

So, let me tell you something you might not want to believe in the end. All my words have been edited by DeepSeek-R1 to fit your fxxking mother-tongue context.

Do you believe this? You know its amazing power; it's just that you refuse to accept that it's Chinese-made.


najanai commented Jan 29, 2025

Clown believes everyone who speaks English is a native English speaker from the US.
