Elon Musk says he wants to create his own "safer" version of ChatGPT called TruthGPT

Elon Musk, who has in the past raised concerns about artificial intelligence, now says he wants to develop his own version of ChatGPT, the AI chatbot gaining widespread use and popularity. 

Last month, Musk joined Apple co-founder Steve Wozniak and hundreds more in signing an open letter calling for a six-month pause on AI experiments, saying we could face "profound risks to society and humanity."

In an interview with Fox News' Tucker Carlson that aired Monday, Musk reiterated his concerns about AI, saying that just as the Food and Drug Administration regulates what we consume in the U.S., AI should have a regulator. 

"It's not fun to be regulated. It's sort of arduous to be regulated," he said, adding that he's used to regulated industries, since he builds cars and rockets with Tesla and SpaceX. "Some people may think I'm some revelatory maverick that defies regulators on a regular basis. This is not the case."

He said if we regulate AI companies there would be a "better chance of advanced AI being beneficial to humanity." He said if regulations are put into effect after "something terrible happens," it may be too late and AI could lose control at that point.

Musk was one of the original co-chairs of OpenAI, the company behind ChatGPT, a tool that became the fastest-growing app ever, according to a UBS study. The artificial intelligence bot is often used to answer questions and complete tasks with seeming accuracy. 

OpenAI started out as open source, meaning its work was available to the public. The tool is now closed source, and the company is for-profit and in a partnership with Microsoft. Google has its own AI division and Bard, a chatbot similar to ChatGPT, but Musk wants to create a third option, which he plans to call TruthGPT. 

He acknowledged he'd be "starting late" in the AI game, but his plan would be to make "a maximum truth-seeking AI that tries to understand the nature of the universe."

"I'm worried about the fact that [ChatGPT] is being trained to be politically correct, which is another way of saying untruthful things," he said. "Certainly the path to dystopia is to train AI to be deceptive." 

He called this idea "the best path to safety," because in trying to understand the universe, his AI would know humans are an important part of the universe and be unlikely to try to annihilate them. 

When Musk took over Twitter, one of his goals was to end many of Twitter's user policies, which he said would allow more freedom of expression. Many people, however, thought ending Twitter's policies would only allow more hate speech and harassment on the social media platform. 

The billionaire is known for fantastical ideas – some that come to fruition and some that do not. In 2021, Musk began using SpaceX resources to send civilians to space, and has since launched successful flights. 

In 2018, he promised to use resources from his Boring Co., which digs tunnels for high-speed transportation, to create a mini submarine to help rescue the Thai boys' soccer team stuck in a cave. In the end, Thailand's Navy SEALs and an international team of divers spent 18 days coordinating the rescue of the 12 boys and their soccer coach. 

Caitlin O'Kane

Caitlin O'Kane is a digital content producer covering trending stories for CBS News and its good news brand, The Uplift.
