The OpenAI CEO inadvertently confirmed to Congress that concerns over AI becoming a digital god that could wreak havoc on the world are not hyperbolic.  

During a congressional hearing on Tuesday, Sam Altman — who leads the company that created ChatGPT — was pressured by fellow witness Professor Gary Marcus and Sen. Richard Blumenthal (D-CT) to expound on the concerns about artificial intelligence.

Altman did not hold back, saying: “My worst fears are that we cause significant — we the field, the technology, the industry — cause significant harm to the world.” Altman called for more regulations during the hearing, but apparently had no issues praising AI’s potential in spite of the glaring risks.

Altman admitted that the “significant harm” could happen in “a lot of different ways” but presented his own company as an answer to the problem: “It’s why we started the company.” Altman, trying to put lipstick on the pig, proceeded to cast his company as a saint trying to bring supposed responsibility to the industry:

“It’s a big part of why I am here today and why we have been here in the past and we’ve been able to spend some time with you. I think if this technology goes wrong it can go quite wrong and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”

The problem is that Altman’s technology has already been proven to go “wrong.” Blumenthal opened the hearing by playing a ChatGPT recording before noting how the responses flattered him and his political record, which is unsurprising, given the well-documented leftist bias of the chatbot.

Altman’s change in tone on AI completely contradicts the explicitly cautionary stance he took eight years ago.

Blumenthal also quoted a 2015 blog post by Altman, which was damning by all accounts. “Development of superhuman machine intelligence [SMI] is probably the greatest threat to the continued existence of humanity,” Altman wrote at the time. Altman dodged the confrontation with his own words and tried to shift the focus to the impact of artificial intelligence on employment.

But Altman in 2015 vs. Altman in 2023 is like dealing with two different people. Altman wrote in his 2015 blog: “SMI does not have to be the inherently evil sci-fi version to kill us all.” He continued: “A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out.” 

Talk about an about-face. 

Compare that with his opening statement in 2023. Altman suggested that AI could be massively beneficial for humanity, testifying that “OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage. We’re here because people love this technology, we think it can be a printing press moment, we have to work together to make it so.”

Later in his statement, Altman was more specific: “we are working to build tools that can one day help us make new discoveries and address some of humanity’s greatest challenges like climate change and curing cancer.”
