✅ AI Giants Hitting a Plateau?

Is the Race for Advanced AI Slowing Down?

Welcome to this new (special) edition!

Hello readers, Douglas here!

As OpenAI, Google, and Anthropic aim to build the next generation of powerful AI models, they're hitting unexpected hurdles. Recent developments show that advancing AI may not be as straightforward as it seemed during the initial rush of breakthroughs. From data limitations to skyrocketing costs, the industry giants are now struggling to achieve the ambitious performance leaps they once promised.

Today, we’ll explore why scaling AI is getting harder and what this means for the future of cutting-edge AI.

Enjoy! 💚

In partnership with 1440 Media

I have to thank 1440 Media once again for trusting The AI Newsroom.

We share the same values and ambitions, both in the quality of the content we try to create and in our desire to make our readers smarter. 🤓

By supporting them, you support me! 💚

About 1440:

All your news. None of the bias.

Be the smartest person in the room by reading 1440! Dive into 1440, where 3.5 million readers find their daily, fact-based news fix. We navigate through 100+ sources to deliver a comprehensive roundup from every corner of the internet – politics, global events, business, and culture, all in a quick, 5-minute newsletter. It's completely free and devoid of bias or political influence, ensuring you get the facts straight.

Big Dreams Meet Big Challenges: Orion, Gemini, and Opus Stumble

Each of the three major players has encountered roadblocks on its path to releasing the latest and greatest AI models.

  • OpenAI: Their new model, Orion, aimed to surpass GPT-4 with enhanced capabilities. But after an initial training round, Orion’s performance fell short of expectations, especially on coding questions outside its training data. While OpenAI is refining Orion through post-training, it’s not expected to launch until early 2025.

  • Google: Their Gemini model update was supposed to push new boundaries, but according to insiders, it isn’t meeting internal expectations. A spokesperson from Google DeepMind insists progress is being made but has declined to share a public timeline.

  • Anthropic: Their anticipated Claude model, 3.5 Opus, was hyped as a breakthrough but continues to face delays. Although it performed better than its predecessor, Opus didn’t deliver the major advancements expected given its cost and size. Anthropic remains cautious about release timing, focusing instead on enhancing available models.

These struggles challenge the belief in scaling laws—the idea that more data and computing power would always yield smarter, more capable AI. As Anthropic CEO Dario Amodei put it, “Scaling laws aren’t laws of the universe.” Even with substantial investments, companies are finding it harder to realize the exponential improvements they anticipated.

Data Limits and Rising Costs: The New AI Constraints

Behind these setbacks lie two key factors: data scarcity and cost constraints.

Data Quality Over Quantity: Companies like OpenAI and Google are grappling with the need for more high-quality data. Traditional sources, such as social media and web scraping, are proving insufficient for training models to meet next-level performance benchmarks. While synthetic data offers an alternative, its usefulness is limited without the diversity and quality found in human-created data.

Skyrocketing Expenses: As AI systems grow more complex, so do the resources required. Developing and running these advanced models is estimated to cost companies up to $100 million per model. Projections for future models estimate costs could soar to billions—making it harder for even the tech giants to justify additional spending without significant gains in performance.

These issues have slowed the release cycle of new AI models, with companies increasingly choosing to improve existing systems rather than building entirely new ones.

Shifting Focus: From Giant Models to Specialized AI Agents

In response to these challenges, the leading AI companies are shifting their priorities from model size to usability and real-world applications.

AI Agents: OpenAI CEO Sam Altman recently highlighted AI agents—tools that can autonomously handle tasks like booking travel or managing emails—as the next potential breakthrough. While model improvements will continue, Altman believes that task-specific AI agents could drive the “next giant breakthrough” in user experience.

Enhanced Functionality Over Size: Instead of focusing solely on massive models, companies are exploring ways to make current models more versatile. Google and OpenAI are refining the ability of AI systems to “reason” through complex questions, giving the models extra time to compute responses. These incremental upgrades aim to make existing models more practical for everyday applications, even if they don’t represent huge leaps in core capability.

What’s Next for AI’s Big Players?

The AI industry may be entering a new phase, moving away from the era of rapid-fire, large-scale model releases to a period focused on refinement, practical applications, and specialized tasks. With data sources plateauing and costs rising, companies are reconsidering what progress in AI should look like.

While the dream of Artificial General Intelligence (AGI) remains, these recent setbacks serve as a reminder: advancing AI is not only about size and speed; it’s also about finding new approaches that can handle the complexities of real-world tasks. As we look forward, AI’s path may rely less on brute force and more on innovation in data handling, cost management, and user-centered design.

Thanks for your time!

See you on Tuesday at 9:12 am!

Hey readers, Doug here!

I'd like to sincerely thank you for taking the time to read The AI Newsroom every week.

I put a great deal of effort into making each email worth your time.

Please let me know how you found this edition by replying to this email or by answering the questionnaire below. 👇

Keep Learning! 💚

I’d Love Your Feedback!

Did you enjoy this edition? Let me know:
