A Framework for AGI

Jeff Brown | Aug 8, 2024 | Bleeding Edge | 6 min read


What does it all mean?

How can we think about what’s happening right now?

And how far along are we?

Understanding the latest developments and breakneck pace of artificial intelligence (AI) isn’t easy. We’re striving to do that here in The Bleeding Edge.

For regular readers, I can all but guarantee that we’re ahead of 99.9% of the global population in understanding the significance of what’s going on. That’s the good news.

But even with a firm understanding of the latest developments in AI – and how fast things are moving – it’s still hard to process it all.

Sometimes it helps to have a framework.

A Framework for Artificial General Intelligence

One of the more interesting papers presented at this year’s International Conference on Machine Learning (ICML), held last month in Vienna, Austria, provided just that.

The paper, Position: Levels of AGI for Operationalizing Progress on the Path to AGI, stood out because it wasn’t technical AI research, but rather a position paper on how we can think about the path to artificial general intelligence (AGI).

Equally important was the particular group of scientists who published the paper.

It was a team from Google’s DeepMind division in the U.K.

This is the same AI research division that released AlphaFold 3, an advanced AI that can accurately predict the structure and interactions of biological molecules.

I know that it seems like an impossible task. Yet the team at DeepMind did it this May. I covered their breakthrough on May 9 in Outer Limits – The Keys to Life, where I wrote:

AlphaFold 3 is capable of accurately predicting the structures of proteins, DNA, RNA, and ligands – “binding” molecules that create bonds of various strengths with other molecules and ions. It even predicts how they interact.

This is likely the most valuable scientific tool and repository of life sciences information that the biotech industry could have ever asked for. And it’s free.

DeepMind has been working at the outer limits of artificial intelligence for years, which is precisely why it’s worth it for us to pay attention to how they’re thinking about artificial general intelligence (AGI).

Discussions around AGI typically focus on AI with human-level intelligence – capable of performing at or above the level of most humans across a wide range of cognitive tasks.

An AGI doesn’t necessarily need to be self-aware or sentient to provide economic value. It just needs to be able to learn and reason on its own to solve and complete tasks that it has been assigned.

And the “general” in artificial general intelligence is intentional.

“General” intelligence AIs don’t need to be sentient… they just need to have the human-like skills to reason and solve problems unassisted.

An AGI’s range of knowledge and reasoning capabilities should enable it to operate autonomously as a general-purpose AI, handling a wide range of tasks similar to those we humans perform throughout the day.

We explored this topic in yesterday’s Bleeding Edge – “If I Only Had a Brain.”

Narrow vs. General AI

To better understand the context of where the industry is in its journey toward AGI, the team at DeepMind developed a framework with five different levels of AI.

This framework included developments in both narrow AI – think of limited, task-oriented AIs like Apple’s Siri assistant – and general-purpose AI (i.e. AGI).

Source: Morris et al., Position: Levels of AGI for Operationalizing Progress on the Path to AGI, June 5, 2024


What I like about this framework is that it provides concrete examples of the technology that underpins achievements in both narrow and general AI.
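For readers who want to see the structure of the framework spelled out, here is a minimal sketch in Python of how the taxonomy could be encoded. The level names and the two example placements mirror this article and the cited paper; the brief performance notes in the comments are my own paraphrase, not the authors’ exact definitions.

```python
# A minimal sketch of the DeepMind "Levels of AGI" taxonomy.
# Level names follow the cited paper; the one-line performance notes
# are a rough paraphrase, not the authors' exact definitions.

from dataclasses import dataclass
from enum import Enum


class Generality(Enum):
    NARROW = "narrow"    # a single task or a clearly scoped set of tasks
    GENERAL = "general"  # a wide range of tasks, including learning new ones


class Level(Enum):
    EMERGING = 1     # comparable to, or somewhat better than, an unskilled human
    COMPETENT = 2    # roughly on par with the median skilled adult
    EXPERT = 3       # roughly the top 10% of skilled adults
    VIRTUOSO = 4     # roughly the top 1% of skilled adults
    SUPERHUMAN = 5   # outperforms all humans


@dataclass
class AISystem:
    name: str
    generality: Generality
    level: Level


# Illustrative placements only, mirroring the examples discussed in this issue.
examples = [
    AISystem("AlphaFold / AlphaZero", Generality.NARROW, Level.SUPERHUMAN),
    AISystem("ChatGPT / Llama / Gemini", Generality.GENERAL, Level.EMERGING),
]

for system in examples:
    print(f"{system.name}: Level {system.level.value} "
          f"({system.level.name.title()}), {system.generality.value} AI")
```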

The reality is that most of us don’t even realize that we’re using an AI-powered product or service. For example, Facebook and Google’s search and advertising technology is built on narrow forms of AI.

When we speak to Google Assistant or Siri, that’s built on natural language processing – a form of narrow AI and a precursor to today’s large language models.

Some 90% of software developers are using AI-powered development tools to write code daily. Not doing so is highly disadvantageous. Lawyers have begun to widely use AI both for drafting legal agreements and for e-discovery.

Anyone who has ridden in a Tesla on Autopilot or in Full Self-Driving mode has experienced one of the most advanced autonomous AIs available today.

So it shouldn’t be a surprise, looking at the table above, that the industry has already achieved Level 5: Superhuman narrow AI.

I’ve provided some examples above, but my earlier example of AlphaFold 3 is highly relevant. In fact, that level was reached back in 2018, when DeepMind developed AlphaZero – a narrow AI that mastered the games of chess, shogi, and, most impressively, Go.

But when it comes to general-purpose AI, and ultimately AGI, we still have some work to do.

As shown in the table above, the DeepMind team categorizes recent versions of large language models – like OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini – as Level 1: Emerging AGI.

My perspective is that this categorization is on the conservative side.

I can make a strong argument that Level 2: Competent AGI has already been reached on a wide variety of tasks, and that even Level 3: Expert AGI is on the verge of being accomplished. After all, it was just last year that OpenAI’s GPT-4 demonstrated its ability to pass the bar exam (the test required to practice law) at a level “around the top 10% of test takers.”

My point is… LLMs aren’t perfect yet, but for certain tasks, they are right up there with skilled humans – an indication of being near artificial general intelligence.

Where We’re Going… And Soon

It’s a reasonable premise that the next generation of LLMs like OpenAI’s GPT-5, Anthropic’s Claude 4, or xAI’s Grok 2 will achieve Level 3: Expert AGI on a wide range of tasks.

This is when things get really exciting. Level 3 is what empowers us humans to collaborate with AI in ways that enhance our performance and save time. It can even provide social benefits through personalized AIs that deeply understand us, remember our conversations, and act as an assistant, a sounding board, and even a friend.

And Level 3 is happening now. We won’t have to wait years. It will only be a matter of months…

Level 4: Virtuoso is something much larger. It’s what we generally think of when we refer to AGI. It’s the stage at which a general-purpose AI can perform as well as the most talented humans in any field.

And even more relevant is that a true AGI will be capable of self-directed research and development.

It won’t have to be continuously prompted by human experts to take the next step forward. It will be able to reason and progress, determining the next most productive use of “its” computational power (i.e. time in a human sense).

It’s at this stage that we’ll see a radical improvement in productivity. Anywhere there are labor shortages, industries will use this technology to fill the gaps, often by deploying AI in the form of humanoid robots.

It will be exhilarating to witness. It will also be disconcerting, as many things will change and portions of the workforce will have to adjust and be retrained.

My plan here at Brownstone Research is to be here with you as we navigate this technology-powered transition. The best thing we can do is stay ahead of these changes, be well informed, and be empowered to adjust to this quickly approaching reality.

That way, we won’t get caught off guard.

There is an awful lot to look forward to. Hopefully, we can position both our lives and our investment portfolios to take advantage of what’s coming.

