Meta’s chief artificial intelligence (AI) scientist has reportedly said that worries over the existential risks of the technology remain “premature,” according to a Financial Times interview.

On Oct. 19, the FT quoted Yann LeCun as saying that premature regulation of AI technology would reinforce the dominance of Big Tech companies and leave no room for competition.

“Regulating research and development in AI is incredibly counterproductive,” he said. LeCun believes regulators are using the guise of AI safety for what he called “regulatory capture.”

Since the AI boom took off following the release of OpenAI’s chatbot ChatGPT in November 2022, various thought leaders in the industry have come out proclaiming threats to humanity at the hands of AI.

Dr. Geoffrey Hinton, known as the “godfather of AI,” left his position in machine learning at Google so that he could “talk about the dangers of AI.”

Dan Hendrycks, director of the Center for AI Safety, tweeted back in May that mitigating the risk of extinction from AI should become a global priority on par with “other societal-scale risks such as pandemics and nuclear war.”

On the same topic, however, LeCun said in his latest interview that the idea that AI will kill off humanity is “preposterous.”

“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment.”

He also claimed that current AI models are not as capable as some claim, saying they don’t understand how the world works and are not able to “plan” or “reason.”

LeCun expects that AI will eventually help manage our everyday lives, saying that “everyone’s interaction with the digital world will be mediated by AI systems.”

Nonetheless, fears surrounding the power of the technology remain a concern among many. The AI task force advisor in the United Kingdom has warned that AI could threaten humanity within two years.
