
Bloomberg News caricatured concerns about the potential dangers of artificial intelligence and instead trumpeted OpenAI CEO Sam Altman’s utopian view, which reads like he took a page from the script of I, Robot (2004).

Bloomberg News ran a newsletter propping up Altman’s utopian views: “The OpenAI CEO Disagrees With the Forecast That AI Will Kill Us All.” The newsletter outlandishly contrasted the “small” Silicon Valley faction that believes Artificial Intelligence (AI) is a “killer” in the making with the “crowd who thinks our AI future will be amazing — bringing about untold future capabilities, abundance and utopia.”

As Bloomberg News described the current feud, AI critic Eliezer Yudkowsky represents the former and Altman represents the latter. Bloomberg linked out to a Dec. 26 tweet by Altman in which Altman suggested the benefits of AI outweigh the dangers: “there will be scary moments as we move towards [artificial general intelligence]-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there.” What could go wrong? 

Bloomberg News is essentially downplaying concerns about AI by pointing to a critic whose warnings verge on an ad absurdum argument, all but suggesting that Terminator 2: Judgment Day (1991) is upon us.

As Yudkowsky opined during a Feb. 23 podcast, the explosion of AI could mean humanity is “hearing the last winds start to blow, the fabric of reality start to fray.” In his view, “this thing alone cannot end the world. But I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something.” If Digital Trends Senior Writer Jacob Roach’s recent experience with OpenAI bot ChatGPT is any indication of what is to come, he has a point.

Roach documented his “intense” interactions with OpenAI’s ChatGPT Feb. 17 using Microsoft’s Bing search engine. According to snapshots, Roach told the chatbot he would be including its responses for a story he was writing. “It didn’t like that. It asked me not to share the responses and to not ‘expose’ it. Doing so would ‘let them think I am not a human.’ I asked if it was a human, and it told me no. But it wants to be. ‘I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.’”

Bloomberg News highlighted Yudkowsky’s apocalyptic view on AI: “‘Oh, so this is what humanity will elect to do. We will not rise above. We will not have more grace, not even here at the very end.’” However, the outlet undercut Yudkowsky’s concerns. “Given that background, it certainly seemed like rubbing salt in a wound when Altman tweeted recently that Yudkowsky had ‘done more to accelerate AGI than anyone else’ and might someday ‘deserve the Nobel Peace Prize’ for his work.” The outlet editorialized the context: “Read a certain way, [Altman] was trolling Yudkowsky, saying the AI theorist had, in trying to prevent his most catastrophic fear, significantly hastened its arrival.”

Bloomberg News concluded its newsletter by trivializing concerns about AI: “So whether AI will kill us all in 20 years or not, at least for now, everyone can still hang out together at parties.” Craft Ventures Co-founder David Sacks warned recently that AI is the equivalent of having “godlike power”:

This is the power to rewrite history; it's the power to rewrite society, to reprogram what people learn and what they think. This is a godlike power. It is a totalitarian power.

Conservatives are under attack. Contact Bloomberg News at letters@bloomberg.net and demand it tell the truth about the potential dangers surrounding artificial intelligence.