The media buzz around artificial intelligence and tools like ChatGPT has raised legitimate concerns about AI’s role in the future of online content moderation.

On this week’s episode of CensorTrack with Paiten, I sat down with Kristen Ruby, social media tech analyst and CEO of Ruby Media Group, to discuss artificial intelligence (AI) and its impact on free speech online.

Ruby warned that biased machine learning algorithms could be weaponized to wipe out “entire conversations from ever even happening.”

Describing how artificial intelligence is trained, Ruby said that “[s]omething has to be up and something has to be down. Something has to be positive and something has to be negative when you’re dealing with reinforcement learning.”
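
Ruby’s point about binary feedback maps onto how reinforcement learning from human feedback works in practice: every training example gets rated up or down, and the model drifts toward whatever the raters reward. Here is a minimal, hypothetical Python sketch of that dynamic; the function name and ratings are illustrative, not taken from any real system.

```python
# Minimal sketch of the binary ("up"/"down") feedback Ruby describes.
# Every example must carry a positive or negative signal -- there is
# no neutral bucket -- so rater preferences flow straight into the model.

def reward(label: str) -> int:
    """Map a human rating to the +1/-1 signal used in training."""
    if label == "up":
        return 1
    if label == "down":
        return -1
    raise ValueError("binary feedback: each example must be 'up' or 'down'")

# Whatever the raters consistently mark "down", the model learns to avoid.
ratings = ["up", "down", "down", "up", "down"]
print(sum(reward(r) for r in ratings))  # net signal: -1
```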

Ruby’s investigative work, dubbed the Ruby Files, exposed how platforms like Twitter use Natural Language Processing (NLP) to “moderate or perhaps possibly censor speech.”

She looked at Twitter “political misinformation” word lists provided by a purported former Twitter employee to see “what words were flagged” for “algorithmic review.”

To prevent bias, Ruby stressed, the content of an algorithm’s word lists has “to be equal on both the left and the right.” However, that’s not what she found.

“There is really no content for misinformation on the left which means that is entirely skewed,” she said. If the algorithm is “only trained on misinformation on the right, it will keep picking that up.” She added that “[i]f it’s not actually fixed, that’s a problem moving forward.”
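
To make the mechanics concrete, here is a hypothetical sketch of keyword-based flagging with a one-sided word list. The terms and list names are invented placeholders, not the leaked lists Ruby examined; the point is that an empty list on one side means that side’s posts can never be flagged, while the other side’s posts keep feeding the review queue.

```python
# Hypothetical keyword flagger. If the left-leaning "misinformation"
# list is empty, only right-coded terms can ever trigger a review --
# the skew is structural, not a judgment call made post by post.
# All terms below are invented placeholders.

RIGHT_MISINFO_TERMS = {"placeholder_a", "placeholder_b"}  # populated list
LEFT_MISINFO_TERMS: set[str] = set()                      # empty list

def flagged_for_review(post: str) -> bool:
    words = set(post.lower().split())
    return bool(words & (RIGHT_MISINFO_TERMS | LEFT_MISINFO_TERMS))

# Only posts matching the populated list are ever queued, so the
# review data itself stays one-sided.
print(flagged_for_review("this mentions placeholder_a"))  # True
```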

She also raised concerns about copyright infringement with the rollout of AI technology. “The entire process of copyright is about to go out the window,” she said. “Are writers going to be less confident or less likely to post their writing knowing that that writing can be ripped and used in a training data set?”

Acknowledging that AI is not going away anytime soon, Ruby called for more transparency as Big Tech continues to implement AI online: “We need clear disclosures” for people working in Big Tech while simultaneously receiving funding from federal agencies. “The point is it’s a conflict of interest,” she said.

She also stressed the need to increase education and awareness about AI so that technology developers can be held accountable. “We need to check these people. We can’t just assume that because they say it’s fair, that it is fair. We can’t just assume that because they’re saying ‘we’re going to make ML that has no bias’ that that statement is correct,” she said. “What they’re really doing is creating ML that’s seeded with their own political bias, but no one knows enough, or is putting enough resources in, to check that.”
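
One way to “check these people,” at least in principle, is a simple fairness audit: run the same moderation model over matched samples of left- and right-leaning posts and compare flag rates. The sketch below assumes a hypothetical model object with a flags() method and an arbitrary tolerance; it illustrates the idea, not any real platform API.

```python
# Hypothetical bias audit for a content-moderation model.
# `model.flags(post)` is an assumed interface, not a real API.

def flag_rate(model, posts):
    """Fraction of posts the model would flag."""
    return sum(1 for p in posts if model.flags(p)) / len(posts)

def passes_audit(model, left_posts, right_posts, tolerance=0.05):
    """True if flag rates for the two samples differ by at most tolerance."""
    gap = abs(flag_rate(model, left_posts) - flag_rate(model, right_posts))
    return gap <= tolerance
```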

Criticizing Congress’s narrow focus on manual content moderation during the Twitter Files hearing, she said, “Most takedowns are done with machine learning. So when you’re actually looking at, oh this person said to another person, ‘Oh, I want you to ban them,’ and then there’s this email back and forth, that is not actually representative of how machine learning is being used. That is an antiquated process that is not being used on speech for most people in this country.”

To stay updated on free speech news and to help hold Big Tech accountable, visit CensorTrack.org and follow MRC Free Speech America’s social media channels.

Conservatives are under attack. If you have been censored, contact us using the MRC Free Speech America contact form on CensorTrack.org, and help us hold Big Tech accountable.