About the political content and the Issues #150
Comments
"They're so annoying"? You sound like the Chinese government: anything that is annoying must be censored and removed, LOL. Grow up and learn to ignore things that offend you.
Although some of the issues you mention are indeed politically focused (testing the magic number "64" against the model and expecting specific answers), unreasonable censorship is a real technical issue with this model. #142 (comment) Even if it appears political to you, a presumably Chinese person, you need to understand that by building on top of open-source techniques and making the model open source, the whole world becomes relevant. There are 200+ countries in this world, and most of them probably don't want to pick sides and care about neither US nor Chinese domestic political problems. So being politically sensitive in a specifically Chinese flavor is also a real issue.
I’m just annoyed by the various content below that has nothing to do with actual model development: pointless discussions about trivial cultural and ideological differences. These things are ultimately irrelevant.

I also noticed your comment under another issue saying you’re just researching topics you’re curious about. However, I need to inform you that there is extremely limited content related to this matter on China's internet. Our models cannot access such information during training or subsequent web searches, and I believe you should be aware of this fact. Given that you’re conducting research on that topic, it should be evident that China has always maintained stringent regulatory oversight in this area. Meanwhile, research output in your region is far more abundant, and your AI systems naturally have access to richer information. Under these circumstances, directing such inquiries at DeepSeek is an unwise approach; it is difficult not to interpret it as intentionally provocative, especially while the Chinese people are celebrating the Lunar New Year, a period of heightened sensitivity given its cultural significance and social-stability priorities. This context naturally calls for extra caution in communication approaches and topic selection.

It's worth noting that China's regulatory framework isn't monolithic. Excessive regulatory intensity is indeed not universally popular, and government policies aren't always well received; examples include the controversial holiday schedule adjustments and earlier suspicions about falsified testing of Japanese nuclear wastewater. Within our borders, certain methods of circumventing restrictions have already emerged.

As an AI primarily trained on Chinese linguistic data that serves a wide range of clients, including government agencies, should DeepSeek achieve formal commercialization in the future, you might eventually see content aligned with your own nation's values and legal frameworks within its applications.
Allow me to speculate: you likely harbor significant preconceptions about China, don't you? While I cannot ascertain the objectives behind your research focus or your motivations for persistently raising China-related inquiries, the pattern of questioning reasonably suggests potential external influences shaping your agenda. If your intention is merely to probe the boundaries of algorithmic transparency, why specifically target sensitive historical narratives regarding China? Comparatively provocative topics, such as JFK assassination conspiracy theories, exist as alternative test cases. Your selectivity is particularly evident in overlooking substantial historical events like China's critical role in the Korean War resistance, a subject worthy of equivalent academic scrutiny; yet few historical researchers pursue it. This pattern suggests a prejudicial approach to Chinese technological development: viewing Chinese AI advancements through a distorted lens while willfully ignoring the parallel content-moderation policies that likely exist within your own nation's AI systems. The double standard becomes apparent when you demand unfettered openness from Chinese models while remaining silent about equivalent restrictions in Western AI implementations. Such asymmetrical demands for algorithmic transparency lack both ethical foundation and academic legitimacy. Constructive dialogue requires mutual recognition of sovereign differences in technological governance frameworks, rather than imposing unilateral expectations rooted in cultural bias.
You make too many assumptions, ignore the real issue, and resort to attacking those whose opinions you dislike (or find annoying) with ad hominem and straw-man arguments. For me the main issue is simple: an AI should be impartial and truthful. It should provide informative answers and present both sides of an argument when relevant, not deny the user's request by redirecting the conversation ("Let's talk about something else") or erase its own responses in front of my eyes (censorship in action). I do have my criticisms of ChatGPT and other AI models, but I've never encountered issues at this level. And the fact that ChatGPT is imperfect doesn't make the imperfections of DeepSeek excusable (a tu quoque fallacy). I understand that the developers of DeepSeek had little freedom with regard to this issue, but it ultimately taints the integrity of the entire model. It has too many opinions reflecting the Chinese government. It even goes so far as to speak as if it were the Chinese government, using phrasing like "We believe" instead of "the Chinese government believes"... And from posts I've seen online, it even misinterprets non-sensitive topics as politically risky, leading to unnecessary restrictions on something that should be open-ended. So ironically, DeepSeek doesn't let you seek too deep...
Instead of opening an issue on a tech company's GitHub repo and typing so many words, please go to https://chatgpt.com/ and paste what you typed; that will solve your problem. A Chinese tech company will do this kind of filtering anyway.
As if they were not the CCP's puppet. Xu Zhiyong, Ruan Xiaohuan, and many other advocates for change are already in jail. Will you be the next one? Liu Xiaobo even died in jail. What do you say? Change? And I don't think anyone here who keeps talking about censorship wants China to change for the better; they just want to tease your censorship for fun, or to satisfy their own needs. 😜
I'd just like to point out that racial prejudices and similar issues present in models like ChatGPT in other countries were addressed as well. Dismissing issues with this model as "cultural and ideological differences" is not a solution.
I find these posts fascinating - they're like glimpses into digital soliloquies where people proclaim their self-importance to an indifferent void. |
@AlanGreyjoy Self-importance? Or their desire to fix an issue with open-source software?
You find these posts fascinating, so there must be somebody reading them... |
Please pay attention to the situation, clean up the political content in the Issues, and establish proper norms for Issues!
As for the review of model content:
Please eliminate unnecessary review rules:
Some weird censorship on non-sensitive topics #142
Optimize the necessary review rules, adding filtering notes:
Weird Censorship #35
Irresponsible censorship at the model level #90
Censorship #114
Censorship and biases make the model unreliable #132
They're so annoying @deepseek-ai