Earlier this month, we saw the Aurora supercomputer ascend the rankings to the second most powerful supercomputer on Earth.

It clocked in at an incredible 1.012 exaflops (floating-point operations per second).

An exaflop is equivalent to 1 quintillion flops – a number so large, it’s hard to comprehend. Exascale computing, once the realm of science fiction, is now possible on two supercomputers based in the U.S.
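To put an exaflop in perspective, here's a quick back-of-the-envelope comparison in Python. (The laptop figure of roughly one teraflop is my own illustrative assumption, not a measured spec.)

# How long would a typical laptop need to match one second of Aurora's work?
AURORA_FLOPS = 1.012e18   # 1.012 exaflops, per the latest ranking
LAPTOP_FLOPS = 1e12       # assumed ~1 teraflop for an ordinary laptop (illustrative)

seconds = AURORA_FLOPS / LAPTOP_FLOPS
days = seconds / (60 * 60 * 24)
print(f"{seconds:,.0f} seconds, or roughly {days:,.0f} days")

In other words, a laptop churning away nonstop for close to two weeks would accomplish what Aurora does in a single second.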

Aurora is in Illinois at the Argonne National Laboratory. And Frontier is in Tennessee at the Oak Ridge National Laboratory. Both are under the U.S. Department of Energy (DOE).

Which is kind of funny considering how much electricity is required to run a computing system of this size…

Source: Argonne National Laboratory

Replicating the Human Brain

Frontier currently holds the top spot at 1.206 exaflops.

It’s a monster of a supercomputer built on Hewlett Packard Enterprise (HPE) Cray supercomputers – technology HPE now owns through its 2019 acquisition of Cray.

But at its core, the Frontier supercomputer is powered by Advanced Micro Devices (AMD) central processing units (CPUs) and graphics processing units (GPUs). These powerful chips give Frontier the capabilities for remarkable computations, allowing scientists to tackle subjects like nuclear fusion, cosmology, complex climate models, and subatomic particle research.

And to do that, Frontier requires 21 megawatts of electricity.

Source: Oak Ridge National Laboratory for the U.S. Department of Energy

Now we know why Aurora and Frontier are housed under the DOE! That’s enough electricity to power roughly 20,000 homes.

This has long been a point of consternation in computing. After all, the human brain – a supercomputer in its own right – can perform tasks that no supercomputer can. And it requires only 12 watts (W) to operate. That’s less than a normal light bulb.
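Using those two figures, the gap is easy to quantify. Here's a rough sketch in Python (the 21 megawatts and 12 watts come straight from the numbers above):

# Comparing power budgets: Frontier's 21 megawatts vs. the brain's ~12 watts
FRONTIER_WATTS = 21_000_000
BRAIN_WATTS = 12

print(f"Frontier draws as much power as ~{FRONTIER_WATTS / BRAIN_WATTS:,.0f} human brains")

That works out to roughly 1.75 million brains' worth of electricity for a single supercomputer.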

The human brain is remarkable, with roughly 100 billion neurons and about 100 trillion synaptic connections (the biological equivalent of 100 trillion parameters). No computing system has matched that yet.

Which is why the human brain is such an intense area of study.

In a perfect world, we would have massively powerful computing systems that require very little energy to operate. Biological computing is the field exploring these kinds of systems.

A big part of the challenge has been understanding how the brain functions. The field of connectomics maps how each brain cell connects to the others, in an effort to understand how the human brain performs so well with so little energy.

And last month, Google finally published some remarkable research in Science after a decade of work in connectomics.

The research shed a little light on this dark corner of our understanding of the human brain…

AI Advances Human Understanding

The team at Google, with the help of researchers at Harvard, was able to map out a single cubic millimeter portion of a human temporal cortex using electron microscopy.

Put more simply, the team was able to image a piece of brain tissue about the size of half a grain of rice by using an extremely high-resolution microscope.

The data collected is remarkable, amounting to 1.4 million gigabytes (or 1.4 petabytes). For comparison, a typical smartphone holds about 128 gigabytes. The connectomics data for this tiny portion of the brain is about 11,000 times that amount.
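If you want to check that math, it only takes a couple of lines of Python (a quick sanity check, nothing more):

# Sanity-checking the comparison: 1.4 petabytes vs. a 128-gigabyte smartphone
DATASET_GB = 1.4e6   # 1.4 petabytes expressed in gigabytes
PHONE_GB = 128

print(f"~{DATASET_GB / PHONE_GB:,.0f}x the storage of a typical smartphone")   # about 11,000x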

Excitatory Neurons | Source: Google Research

Such a small volume of brain tissue revealed 16,000 neurons, 32,000 glia, 8,000 blood vessel cells, and 150 million synapses.

This mapping was done using specialized machine learning (ML) software, which is what made such a daunting task possible.

I remember back in 2020 when the same team at Google released a connectome for a portion of the fruit fly brain. That was a monumental achievement at the time, also aided by machine learning, revealing the connections of roughly 25,000 neurons.

This makes Google’s latest research that much more remarkable, showing the progress – enabled by ML – in just four years.

One of the more interesting discoveries was that of a rare class of synaptic connections, shown below. A portion of one neuron (blue) is shown making more than 50 connections (yellow) with another neuron (green).

Source: Google Research

We don’t yet understand the significance of this, or how it all works. This is just the starting point. Once we understand how the brain is structured and interconnected, we will ultimately be able to figure out the secrets to how it works.

And in the meantime…

The semiconductor and computing industries are in an accelerated race to build technology capable of performing like the human brain…

Not so much to match the energy efficiency of the brain… but to be able to manage a trillion parameters of information or more.

The Tech Behind the World’s First Ultra-Intelligence AI Computer

One of the possibilities for Frontier is to help map out the entire human brain.

The significance of exascale computing is that electron microscopy data for an entire human brain would be measured in exabytes, not petabytes. Processing a dataset that large will therefore require an exascale supercomputer.
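Here's a rough scaling sketch to show why. (The 1.4 petabytes per cubic millimeter comes from the dataset above; the roughly 1.2 million cubic millimeters for an adult human brain is my own ballpark assumption, so this is a back-of-the-envelope estimate rather than the researchers' own figure.)

# Scaling from one cubic millimeter to a whole human brain
PB_PER_MM3 = 1.4          # petabytes per cubic millimeter, from the Google/Harvard dataset
BRAIN_VOLUME_MM3 = 1.2e6  # assumed ~1,200 cubic centimeters for an adult brain

total_pb = PB_PER_MM3 * BRAIN_VOLUME_MM3
total_eb = total_pb / 1_000   # 1 exabyte = 1,000 petabytes
print(f"~{total_pb:,.0f} petabytes, or about {total_eb:,.0f} exabytes")

That's well over a thousand exabytes of raw imaging data.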

And while the efforts to map the brain will continue, a much faster race is taking place to develop a computing system capable of exceeding the human brain.

This requires both semiconductors and computing systems designed and optimized for neural networks, a form of artificial intelligence (AI). That kind of design is closer to how our brains work than general-purpose supercomputers like Aurora or Frontier.

Graphcore is a private company that has been on this mission, developing its intelligence processing unit (IPU).

Graphcore released its latest IPU, the Bow, in 2022, with a unique stacked wafer-on-wafer design enabling a 3D semiconductor architecture.

Source: Graphcore

This design enables parallel processing on a massive scale.

A simple reference, shown below, illustrates the difference between a CPU (from the likes of AMD or Intel), a GPU (from Nvidia or AMD), and an IPU.

CPUs largely process data sequentially, with one instruction operating on one piece of data at a time. GPUs, by contrast, are designed to ingest large blocks of contiguous data: a single instruction applied to multiple data inputs in parallel.

Source: Graphcore

In the case of the IPU, the design is for multiple instructions and multiple data inputs. In this way, each processor in an IPU can function independently of the other processors, maximizing complex computational throughput.
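To make that distinction a little more concrete, here's a rough analogy in Python. It isn't Graphcore's software stack, just a sketch of the two styles of parallelism: a vectorized NumPy operation stands in for the GPU's single-instruction, multiple-data approach, while a pool of independent workers, each running its own logic, stands in for the IPU's multiple-instruction, multiple-data approach.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

# GPU-style (SIMD): one instruction applied to a big block of contiguous data in lockstep
data = np.arange(1_000_000, dtype=np.float32)
simd_result = data * 2.0   # a single multiply instruction, a million data elements

# IPU-style (MIMD): many independent workers, each running different instructions on its own data
def worker(task):
    kind, values = task
    if kind == "sum":
        return sum(values)
    if kind == "max":
        return max(values)
    return [v * 2 for v in values]   # "scale"

tasks = [("sum", range(100)), ("max", range(50)), ("scale", range(10))]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        mimd_results = list(pool.map(worker, tasks))
    print(simd_result[:3], mimd_results)

The point of the analogy: in the SIMD case, every element is marched through the same instruction together, while in the MIMD case, each worker is free to do something entirely different at the same time, which is what lets an IPU's cores run independently of one another.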

2024 is significant as it is the year in which Graphcore is scheduled to deliver what it calls the world’s first ultra-intelligence AI computer.

The Good Computer

Called the Good Computer, it’s named after I.J. “Jack” Good, the computer scientist who, back in 1965, described an “ultraintelligent machine” more capable than the human brain.

The Good Computer will be powered by the next generation of Graphcore IPUs and capable of more than 10 exaflops of AI-specific computational power.

And it is designed to support AI models of up to 500 trillion parameters – something we haven’t yet seen. Not only would a system like this provide a path towards artificial general intelligence (AGI), which is right around the corner, but it’s the kind of horsepower that will lead to artificial superintelligence (ASI).
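For a rough sense of what a 500 trillion-parameter model implies, consider the memory footprint alone. (This assumes 2 bytes per parameter, as with 16-bit weights; that's my own illustrative assumption, not a figure Graphcore has published.)

# Rough memory footprint of a 500-trillion-parameter model
PARAMS = 500e12
BYTES_PER_PARAM = 2   # assumed 16-bit (fp16) weights, purely for illustration

petabytes = PARAMS * BYTES_PER_PARAM / 1e15
print(f"~{petabytes:,.0f} petabyte(s) just to hold the model weights")

That's on the order of a full petabyte of memory simply to store the weights, before any training data or computation enters the picture.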

And what’s equally incredible is that the system is expected to cost “only” $120 million. I say “only” because today’s most advanced large language models have hundreds of billions of parameters (OpenAI’s GPT-4o is estimated to be a roughly 200 billion-parameter model), and they cost hundreds of millions of dollars to train.

We’re only halfway through the year, and the AI breakthroughs keep coming. Our brains naturally assume that we’ll take small, incremental steps as AI improves. But this is precisely where our brains are weak.

The advancements in semiconductor technology, specifically related to AI, are happening at an exponential pace.

In fact, they are happening at a speed even faster than Moore’s law right now.

And these AI-specific semiconductors, and related computing systems like the Good Computer, are declining in cost per unit of computing power.

Which means that those developing AI software can do so faster and much more cheaply than the year before.

And that means accelerated innovation in AI is happening at a pace that none of us have ever seen before.

Soon, very soon, the entire world will be alive with a network of intelligence more powerful than what the collective brains of the human race can produce together. And there won’t be just one AGI, there will be millions.

Just imagine the implications of having intelligent “machines” capable of working around the clock and performing research and development autonomously.

We’re at the bleeding edge of technology and at the outer limits of what’s possible. And we’re all on this incredible ride together.

I’m back,

Jeff Brown

Editor, Brownstone Research


In Case You Missed It

We invite you to watch Jeff Brown’s return message to his readers. You can view it right here…

FAQ

Editor’s Note: We recognize readers may have questions regarding the transition to the new Brownstone Research with Jeff Brown. Our team has prepared some helpful answers right here in our FAQ.

We welcome all reader questions and feedback – we want to hear from you! Please write to us here. After reading the FAQ, if you have remaining questions, you may reach our dedicated Customer Service team at 1-888-493-3156.