OpenAI recently made the case that its GPT-4 product should be used for so-called “content moderation.” And that just means more censorship.

GPT-4 is the most advanced version of OpenAI’s large language model, the technology that powers AI chatbots; ChatGPT was one of the first products built on it. Unfortunately, research showed that ChatGPT exhibited a “significant and systemic left-wing bias.”

OpenAI’s Aug. 15 blog post explaining how its AI can be used for “content moderation” does not provide assurances that biases have been removed. The post describes how “content moderation” models can be customized for each platform, thus importing each Big Tech platform’s inherent biases into the moderation process. Further, OpenAI confirms the potential for bias, stating: “[j]udgments by language models are vulnerable to undesired biases that might have been introduced into the model during training.”
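To see how directly a platform’s own rules shape the verdicts under this approach, consider a minimal sketch. The policy text, labels and moderate function below are hypothetical illustrations assuming OpenAI’s standard Python SDK; this is not the company’s actual moderation pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical platform-specific policy. Each platform writes its own,
# which is exactly where that platform's biases enter the process.
PLATFORM_POLICY = """
Label the post VIOLATING if it attacks a person based on race;
otherwise label it ALLOWED. Respond with a single word.
"""

def moderate(post: str) -> str:
    """Ask GPT-4 to label a post against the platform's custom policy."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PLATFORM_POLICY},
            {"role": "user", "content": post},
        ],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(moderate("My favorite Clint Eastwood movie is Hang 'Em High."))
```

Whatever assumptions are baked into that policy prompt, and whatever biases were introduced during the model’s training, flow straight into every labeling decision it makes.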

The automatic systems currently used for Big Tech censorship have certainly proven unreliable, and it is not apparent that GPT-4 would be any better at understanding context and nuance. In one example, Facebook rejected ads for a Holocaust movie titled Beautiful Blue Eyes, claiming that the movie's title violated the platform's policy against content that "includes direct or indirect assertions or implications about a person's race." After a human reviewed the case, the ads were approved.

In another example, Twitter, now known as X, under former CEO Jack Dorsey’s leadership, unreasonably censored users who answered a question about their favorite Clint Eastwood movie by citing Hang ‘Em High.

Automated systems have proven unable to distinguish between a post that contains hateful language and a post designed to bring attention to such content. This was most evident with the spread of information on the so-called “Trans Day of Vengeance” via social media platforms such as Twitter. 

When people began bringing attention to the fact that such an event was being planned and promoted via Twitter, every mention of the event was eventually removed. Even accounts that were critical of the event suffered temporary account restrictions for daring to speak out against it. Once again, it is evident that the algorithms currently used for censorship do not understand context.

AI models have shown they can’t produce unbiased results and are generally unable to account for context, so the solution is not to use them for so-called “content moderation” purposes. The best solution is to stop censoring free speech altogether. 

Conservatives are under attack. Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on so-called hate speech and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.