Elon Musk issued a warning Sunday on X when he posted, “We are on the event horizon of the singularity.”
The “singularity,” according to Musk, is the point at which artificial intelligence surpasses human intelligence.
Musk, a co-founder of Neuralink, touted the company's advances during a Fox News interview alongside President Donald Trump.
“We’ve implanted Neuralink in three patients so far, who are quadriplegics, and it allows them to directly control their phone and computer just by using their minds,” Musk said. “The next step would be to add a second Neuralink implant past the point where the neurons are damaged so that somebody can walk again and they can have full-body functionality restored.”
Still, Musk acknowledges the dangers of fully embracing AI.
In a 2023 interview with Tucker Carlson, Musk warned about the dangers of AI exceeding human limits.
“It’s very difficult to predict what will happen in that circumstance,” Musk said. “I think we should be cautious with AI and I think there should be some government oversight because it’s a danger to the public.”
Carlson questioned Musk on the likelihood of AI systems taking over from humans.
“It has the potential, however small one may regard that probability, but it is nontrivial,” Musk said. “It has the potential of civilizational destruction. Regulations are really only put into place after something terrible has happened. If that’s the case for AI and we only put in regulations after something terrible has happened, it may be too late to actually put the regulations in place. The AI would be in control at that point.”
Musk's call for more regulation is notable, given that most of his work with the Department of Government Efficiency focuses on shrinking the size of government. AI remains a relatively new and poorly understood technology, and Musk may simply want the effects of powerful AI systems studied before they are fully embraced.
Musk’s X post comes shortly after Vice President JD Vance praised the advancements in AI at the 2025 Artificial Intelligence Action Summit.
In what appears to be a contrast to Musk’s statement, Vance said, “When conferences like this convene to discuss cutting-edge technology, oftentimes, I think, our response is to be too self-conscious, too risk-averse.”
While it is good to encourage innovation, one must still be conscious of the risks posed by superhuman AI.
AI is far more advanced than it was five years ago. The world will never return to a pre-ChatGPT state, but it should take care when embracing powerful new technology that could exceed human capabilities.