The congressional lawmakers who drafted Section 230 of the Communications Decency Act (1996) have said the law will not protect AI tools like ChatGPT from liability. 

Sen. Ron Wyden (D-Ore.), an original drafter of the 1996 law, maintains that the law will not allow AI technology created by Big Tech giants like Google to run unchecked. AI chatbots don’t host content; they produce it, he argued.

“To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” he told The Washington Post. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

47 U.S. Code Section 230 states that no “provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” [Emphasis added.] 

"AI tools like ChatGPT, Stable Diffusion and others being rapidly integrated into popular digital services should not be protected by Section 230,” Wyden said. “And it isn’t a particularly close call.”

Wyden added that Big Tech giants would have to suffer the consequences of their own actions.

“Section 230 is about protecting users and sites for hosting and organizing users’ speech,” he reportedly stated. He added that the law “has nothing to do with protecting companies from the consequences of their own actions and products.”

Former Congressman Chris Cox (R-CA), who co-wrote the law, now sits on the board of the pro-tech trade group NetChoice. Google, Amazon, Twitter, and Meta are listed as “association members” of the tech group. Cox said the law is clear: artificial intelligence is on its own.

“Section 230 as written provides a clear rule in this situation,” he said, according to The Post.

Legal experts have expressed concern that the law is being abused and that lawmakers never intended the law’s scope to be stretched so far.

William Barr, who served as attorney general during the Trump administration, warned at a 2020 Department of Justice workshop that Big Tech platforms had no qualms about sacrificing the free speech of their users for financial gain.

“[T]he avenues for sharing information and engaging in discourse have concentrated in the hands of a few key players…the big tech platforms of today often monetize through targeted advertising and related businesses, rather than charging users. Thus, their financial incentives in content distribution may not always align with what is best for the user,” Barr said at the time.

Barr recognized the positive ways Section 230 has shaped the internet over the years, but added that no one could have predicted how far technology would advance, allowing the law to be stretched beyond its original meaning.

“Technology has changed in ways that no one, including the drafters of Section 230, could have imagined. These changes have been accompanied by an expansive interpretation of Section 230 by the courts, seemingly stretching beyond the statute’s text and original purpose,” he said. 

He also expressed concern that internet platforms would use the law as cover to blind themselves, and law enforcement, to illegal activity on their services.

“We are concerned that internet services, under the guise of Section 230, can not only block access to law enforcement … [t]his would leave victims of child exploitation, terrorism, human trafficking, and other predatory conduct without any legal recourse,” he added. “Giving broad immunity to platforms that purposefully blind themselves – and law enforcers – to illegal conduct on their services does not create incentives to make the online world safer for children.”
