Smarter Than Human PhDs

Jeff Brown | Sep 16, 2024 | Bleeding Edge | 7 min read

So often, the battle for power and control is built on a philosophical argument.

And most often, the stance is positioned to appear as if it’s on the moral high ground. This is especially true when one group has been making – or has made – remarkable progress.

The ouster can't build a case on a failure to deliver to their constituents, so they fall back on an argument positioned as moral superiority.

We need to “be safe,” “responsible,” and “take time to put the proper safeguards in place.”

After all, who can argue with that?

And the incumbents are positioned as “reckless,” “irresponsible,” “taking on too much risk,” and deemed a “threat.”

This was precisely the situation in the attempted ousting of Sam Altman of OpenAI back in November last year. I wrote about the topic at the time in Outer Limits – The Drivers Behind Sam Altman’s Ousting.

Corporate dramas like this happen all the time. It doesn’t matter if it’s a large, powerful public company or a high-growth startup – it’s human nature.

But that’s not what made the ousting so interesting…

The Ouster

While Altman’s firing itself made for good headlines, the catalyst is what’s important.

The ouster was Ilya Sutskever, one of the most well-known executives in the world of artificial intelligence (AI).

His doctoral supervisor was Geoffrey Hinton, one of the three godfathers of AI. The startup that Sutskever spun out of the University of Toronto with Hinton – DNN Research – was acquired by Google in 2013.

While at Google, Sutskever worked on developing TensorFlow, which has become one of the most important open-source software libraries for machine learning (ML) and AI. (I wrote about the launch of TensorFlow right here in The Bleeding Edge – Google Offers Open-Source Quantum Computing Library for Developers.)

Sutskever also contributed to developing AlphaGo, the breakthrough deep-learning AI capable of easily beating the world’s best human Go player.

It wasn’t a surprise when Sutskever left Google to found OpenAI with Sam Altman in 2015. By 2019, the power duo was able to raise $1 billion in funding. And by early 2023, they had pulled in $10 billion thanks to the incredible progress they had made with just the first billion – specifically, releasing ChatGPT.

Sutskever, as an expert and an insider, saw the rapid pace of development of AI at OpenAI. The pace of the advancement even took him by surprise. He was vocal about issues related to “AI safety” and concerned about the “risk to humanity” and the “potential harms caused by AI.”

The rhetoric resulted in an open dispute between Altman and Sutskever… and a crisis for OpenAI.

This was the “moral high ground” veneer that Sutskever used to convince OpenAI’s board to remove Altman.

Altman was positioned as greedily racing toward short-term commercialization at any cost and uncommitted to the company’s mission of creating AI that “benefitted humanity” – a mission that, in this framing, required slowing the pace to put some kind of arbitrary safety protocols in place.

It worked, but only for a few days.

The Very Real Threat

When Altman’s ousting was made public, several high-profile executives at OpenAI also tendered their resignations.

And then the majority of the company signed a letter indicating that they would resign, as well.

Had that been allowed to happen, OpenAI would have imploded…

Which is why it didn’t.

The board promptly invited Altman back. Sutskever publicly “regretted” what he had done and then quietly took a back seat at OpenAI.

The problem was that the threat behind Sutskever’s philosophical argument, as is often the case, wasn’t real. It was theoretical.

How do we know this? How can we be certain Sutskever didn’t see something at OpenAI that should give us pause?

For one, it ignored the risk of not moving forward – of not accelerating.

In other words, it ignored the reality that slowing down and putting artificial controls in place in the name of “keeping everyone safe” would increase the likelihood that a real adversary would overtake the West in the development of artificial general intelligence.

Because let’s face it, just because Ilya Sutskever says we should slow down and take our time developing artificial intelligence, does that mean our adversaries will immediately agree to do the same? Absolutely not.

And that is a very real threat.

How is that real? Because we know how much China, among others, is already investing in and committed to developing artificial general intelligence (AGI). And they’ve been at it for years.

Unlike the theoretical threats pushed by AI moral grandstanders, this is not theoretical in the slightest. It is a stated priority by China’s government to become the world’s AI superpower by 2030, just six years from now. (The Bleeding Edge – Washington’s Radical Shift Toward Nuclear is a must-read on this.)

Curiously, in the months that followed Altman’s ousting and subsequent return, Sutskever stepped down from OpenAI…

And he started his own firm, Safe Superintelligence. It just raised $1 billion at a $5 billion post-money valuation for its seed round. How’s that for a company launch?

As the name implies, the goal is to develop “safe artificial intelligence” that surpasses human intelligence. There’s that veneer again…

It suggests that the rest of the industry isn’t trying at all. Which is complete nonsense.

The Alphabet (Google), Meta, and Microsoft/OpenAI camps are all designing with their own versions of “safety” in mind… which equates to a heavily biased AI that censors certain information and even tries to rewrite history. Some view this as “safe” while others view this as “dangerous.”

Others like Anthropic and xAI are making concerted efforts to develop a far more neutral, fact-based AI hopefully devoid of bias.

Early Signs of AGI

When Sutskever left OpenAI in May, it sparked rumors of a brain drain at the company. “The best are leaving,” they said. “OpenAI’s days are numbered…” was another whisper. How fickle.

But we knew differently.

There had already been talk of what OpenAI had been working on in the laboratory…

It was showing signs of early AGI.

They called it Q* (pronounced “Q star”). Last month, we learned that the project is now known as Strawberry. It’s a precursor to what will eventually become known as Orion, which is believed to be the name of OpenAI’s next-generation, multi-modal large language model (LLM). We explored these developments in The Bleeding Edge – AI’s Need For Speed.

The facts tell a very different story about what’s happened at OpenAI…

OpenAI was racing ahead and focused on improving the ability of its AI to reason, something that has been a challenge for LLMs.

Smartly, Altman has been proactive in speaking with government officials and providing demonstrations of technology that had not yet been made public. It’s better to have an open dialog with the government than to wake up one morning only to find that your company has been sequestered in the interests of “national security.”

Better yet, last Thursday, the world got a preview of what’s to come…

OpenAI released a preview of its latest LLM, “o1.” And anyone who subscribes to ChatGPT can select it and experiment with the new model.
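For those who would rather poke at the model programmatically than through the ChatGPT interface, here is a minimal sketch using OpenAI’s Python SDK. It assumes you have API access to the o1-preview model (API access is separate from a ChatGPT subscription), the openai package installed, and an OPENAI_API_KEY set in your environment; the prompt is just a placeholder.

```python
# Minimal sketch: querying the o1-preview model with OpenAI's Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# o1-class models spend time on internal reasoning before answering,
# so a single request can take noticeably longer than a GPT-4o request.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": (
                "A proton and an electron are separated by 1 nanometer. "
                "Estimate the electrostatic force between them and explain your reasoning."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Nothing about the call itself is exotic – it’s the same chat-completions interface used for GPT-4o. The difference is in how much reasoning the model does before it responds.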

The results are absolutely stunning.

o1 (shown above in orange and pink) is demonstrating a remarkable leap above GPT-4o (shown in teal). It’s such a large leap, it’s almost hard to believe.

Just look at the images above. Performance on competition math, which has historically been difficult for LLMs, has skyrocketed. The same is true for competitive software coding.

And just look at the results of the GPQA analysis. GPQA is a fairly new benchmark for AI. It stands for Graduate-Level Google-Proof Q&A.

The test is a dataset of complex questions in biology, physics, and chemistry that require domain expertise to answer correctly and are hard to answer even with the help of a search engine like Google.

Highly skilled human non-experts are only able to achieve a score of 34%, even with the use of Google. GPT-4 managed just 39% accuracy, and GPT-4o reached 56%.

But just look at the performance in the chart on the right above! Both o1-preview and o1 were able to achieve 78% accuracy, higher than an expert human.
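To make those percentages concrete, here is an illustrative sketch (not the actual GPQA evaluation harness) of how a multiple-choice benchmark like this is scored. The question structure and the ask_model placeholder are assumptions for illustration only.

```python
# Illustrative sketch of scoring a GPQA-style multiple-choice benchmark.
# The data structures and `ask_model` placeholder are assumptions for
# illustration; this is not OpenAI's or the GPQA authors' actual harness.
from dataclasses import dataclass


@dataclass
class Question:
    prompt: str                # the graduate-level question text
    choices: list[str]         # answer options, e.g. four choices A-D
    correct_index: int         # index of the expert-verified answer


def ask_model(question: Question) -> int:
    """Placeholder: return the index of the choice the model under test picks."""
    raise NotImplementedError


def accuracy(questions: list[Question]) -> float:
    """Fraction of questions where the model selects the verified answer."""
    correct = sum(1 for q in questions if ask_model(q) == q.correct_index)
    return correct / len(questions)
```

The reported scores are simply this fraction over the full question set: 34% for skilled non-experts with Google, 56% for GPT-4o, and 78% for o1.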

OpenAI’s latest model has now surpassed human PhD-level intelligence.

It appears that the detractors and the decels (decelerationists) were dead wrong, as they almost always are. The gaslighting about the threats and the safety risks – as well as OpenAI’s impending slide into irrelevance – was all nonsense.

The company has just released something capable of incredible productivity and societal good. This is a major leap in terms of intelligence and reasoning that will lead to more positive breakthroughs in more fields than we can name. And it will lead to a world of abundance.

The correct moral framework is to build and improve…

To create technology that will become of immense value to society…

Technology that will lead to nuclear fusion and limitless clean, cheap electricity – capable of meeting the planet’s growing power demands and bringing the last 700 million people out of poverty (The Bleeding Edge – Should We Scale Back Our Energy Consumption?)…

Technology that will unlock the secrets of human biology, so that we may reduce and eliminate human disease and suffering (The Bleeding Edge – The Tech That Will Change the Economics of Biotech)…

Technology that will help us discover hundreds of thousands of new synthetic materials, so that we may build stronger and longer-lasting infrastructure, reactors, and computing systems. (The Bleeding Edge – DeepMind’s Latest AI Breakthrough).

It is right and just to accelerate.

There are huge, complex problems to solve. And it won’t happen if the world panders to pontificators and fearmongers.

We must keep building.

