Tuesday, December 24, 2024

Why advances in artificial intelligence may be slowing down | Real Time Headlines

Generative artificial intelligence has advanced so rapidly over the past two years that major breakthroughs seem more a matter of when than if. But in recent weeks, concerns have grown in Silicon Valley that progress is slowing.

One early sign is the lack of progress among models released by the biggest players in the field. The Information reported that OpenAI’s next-generation model, GPT-5, offers much smaller quality improvements, while Anthropic has delayed the release of its most powerful model, Opus, according to wording removed from its website. Even at tech giant Google, Bloomberg reported, the upcoming version of Gemini has not lived up to internal expectations.

“Keep in mind, ChatGPT was launched in late 2022, so it’s almost two years old now,” said Dan Niles, founder of Niles Investment Management. “Initially, all of these new models had huge improvements in functionality, and what’s happening now is that you’ve really trained all these models, so the performance gains are leveling off.”

If progress stalls, it raises questions about a core assumption that Silicon Valley treats almost as religion: scaling laws. The idea is that adding more computing power and more data will keep producing better models indefinitely. But recent developments suggest these laws may be more theory than law.

The key issue may be that AI companies are running out of data to train models, hitting what experts call a “data wall.” In response, they are turning to synthetic data, meaning data generated by artificial intelligence itself. But Scale AI founder Alexandr Wang said this is just a Band-Aid solution.

“Artificial intelligence is a garbage-in, garbage-out industry,” Wang said. “So if you feed these models a lot of AI gobbledygook, then the models are going to spit out more AI gobbledygook.”

But some leaders in the industry are pushing back against the idea that the rate of improvement is hitting a wall.

“Foundation model pre-training scaling is intact and it’s continuing,” Nvidia CEO Jensen Huang said on the chipmaker’s latest earnings call. “As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale.”

OpenAI CEO Sam Altman put it simply in a post on X: “there are no walls.”

OpenAI and Anthropic did not respond to requests for comment. Google says it is pleased with Gemini’s progress and is seeing meaningful performance gains in capabilities like reasoning and coding.

If AI improvements do level off, the next stage of the race will be the search for use cases: consumer applications that can be built on top of existing technology without further model improvements. For example, the development and deployment of AI agents is widely expected to be a game changer.

“I think we’re going to live in a world where there are going to be hundreds of millions, billions of AI agents, eventually probably more AI agents than there are people in the world,” Meta CEO Mark Zuckerberg said in a recent podcast interview.

