From GPT-2 to AGI? AI’s Growth Curve is Steeper Than We Thought.

By FKlivestolearn | Technicity | 25 Mar 2025


Historically, Moore’s Law was the gold standard for exponential technological growth. It drove the meteoric rise of computing power and, frankly, everything from your smartphone to the self-driving car on your wishlist. But with Large Language Models (LLMs), progress is no longer just about hardware; it’s about integrating cutting-edge algorithms, massive data sets, and ingenious engineering.

A new study by the research group METR shows that frontier AI models are improving their ability to handle longer and more complex tasks far faster than anticipated: the length of tasks they can reliably complete is doubling roughly every seven months. This has implications not just for what these models can do today, but for what they might be able to do in the near future. More nuanced reasoning, complex problem-solving, and perhaps even hints of creativity are within reach, and at warp speed.

To put things into perspective, Moore’s Law has been a reliable benchmark for technological progress for decades, with computing power doubling roughly every two years. AI doesn’t seem to be following the same gradual incline: per METR, the length of tasks LLMs can handle is doubling roughly every seven months. Compounded, that works out to roughly a tenfold gain (2^(24/7) ≈ 10.8×) over the same two-year window in which transistor counts merely double.

If you're keeping up with the AI race, you know that artificial general intelligence (AGI) is the ultimate prize—the holy grail of machine learning. The idea is to create models that possess general reasoning capabilities on par with, or exceeding, human intelligence. And while AGI once seemed like a distant dream, this accelerated growth rate is shrinking timelines. Frontier models might not only help us reach AGI sooner, but they might redefine what AGI actually means. The real kicker? As these models evolve faster, they may change the very benchmarks of intelligence and capability, making AGI less of a fixed goal and more of a moving target.

The chart below highlights the exponential growth of LLM capabilities. Starting from GPT-2's humble beginnings in 2019, when AI could handle tasks lasting mere seconds, we've now reached a point where models like Claude 3.7 Sonnet can tackle complex tasks spanning hours.

  • GPT-2 (2019): Capable of tasks around 1 second long
  • GPT-3 (2020): Expanding to tasks around 4 seconds
  • GPT-3.5 (2022): Pushing to 15-second tasks
  • GPT-4 (2023): Reaching tasks of several minutes
  • Claude 3.7 Sonnet (2025): Handling tasks up to 4 hours long
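To make the trend concrete, here is a minimal sketch of what steady exponential growth would project, assuming the article's numbers (a 4-hour task horizon in 2025 and METR's roughly seven-month doubling time). This is a naive extrapolation for illustration, not a forecast.

```python
# Sketch: extrapolating the AI task horizon under a steady doubling time.
# Parameters are taken from the article (4-hour horizon in 2025, ~7-month
# doubling per METR); the projection is illustrative only.

DOUBLING_MONTHS = 7       # METR's estimated doubling time for task length
BASE_HORIZON_HOURS = 4.0  # Claude 3.7 Sonnet's horizon, per the article
BASE_YEAR = 2025

def projected_horizon_hours(year: float) -> float:
    """Projected task horizon assuming uninterrupted exponential growth."""
    months_elapsed = (year - BASE_YEAR) * 12
    return BASE_HORIZON_HOURS * 2 ** (months_elapsed / DOUBLING_MONTHS)

for year in (2025, 2026, 2027, 2028):
    print(f"{year}: ~{projected_horizon_hours(year):.0f} hours")
```

Under these assumptions the horizon grows from ~4 hours in 2025 to roughly 13 hours in 2026 and into the hundreds of hours by 2028, which is exactly why a seven-month doubling time feels so different from Moore's two-year cadence.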

[Chart: exponential growth in the length of tasks frontier models can complete, GPT-2 through Claude 3.7 Sonnet]


But let’s not break out the celebratory confetti just yet. With accelerated growth comes monumental challenges. Training these larger, smarter models requires increasingly vast computational resources—not to mention ethical considerations around bias, misuse, and safety. As we inch closer to AGI, we will need to ask ourselves some tough questions: How do we ensure these systems act responsibly? How do we govern them? And, perhaps, what does it mean for humanity when machines become our intellectual equals?

The pace of progress is exhilarating, but it’s also humbling. LLMs are helping us tackle problems we once thought unsolvable, from understanding complex protein folding to writing poetry that could rival Shakespeare (well, almost). If the trend highlighted by METR holds, these models will continue to expand their reach, unlocking capabilities we haven’t even dreamed of yet.

As we ride this wave of innovation, one thing is clear: The journey towards AGI isn’t just about building smarter machines—it’s about shaping the future of humanity itself. And with frontier models outpacing Moore’s Law, that future might be knocking on our doors sooner than we thought.

Originally published on LinkedIn.



FKlivestolearn

I am a prolific Blogger on Substack/Medium with a newsletter. Extensive trading experience in Forex & Stocks based on technical studies. Cryptocurrency trader and Enthusiast, Blockchain/Fintech Evangelist & generally just a Technology Freak.


Technicity

Keeping you up to date & empowered within the fields of Technology, Finance, Science & Space.
