“If I Only Had a Brain”

Jeff Brown | Aug 7, 2024 | Bleeding Edge | 5 min read


Having driven more than 500 miles through rolling fields of corn across parts of the Great Plains over the last few weeks, I couldn’t help but be reminded of the Scarecrow in The Wizard of Oz.

Walking along and between the stalks of corn, I might have appeared to be taking some time off this summer to get some fresh air. But my travels were all about researching what’s happening with artificial intelligence (AI).

But what does the brainless Scarecrow have to do with what’s happening in AI right now? More on that in a moment…

Manifested AI

AI is manifesting itself before our eyes.

We’re seeing this happen in ways that allow us to both better understand the technology and, of course, use it. This is something I refer to as Manifested AI.

Chatbots are the early manifestations of this incredible technology. But while it feels like we’re chatting with a human on the other “side” of OpenAI’s ChatGPT chat window, we can’t see who, or what, we’re talking to.

And naturally, we haven’t been able to touch “it” either.

But it won’t be much longer before we can.

One of the most obvious manifestations of AI will be that of the human form – a humanoid robot similar to us in shape and movement.

Yesterday, one of the better-funded players in humanoid robots announced its next generation of manifested AI – Figure 02.

Source: Figure

Shown above is Figure’s Figure 02 – its fourth-generation robotics platform – which was trialed for a short time at BMW’s Spartanburg, South Carolina, manufacturing plant.

In the clip, Figure 02 takes a sheet metal part and places it on another fixture.

This kind of task and mobility is a common focus in the industry right now. These new technology platforms are designed to take over work that is monotonous, tiring, and prone to causing injury because of the lifting and movement involved.

Automotive manufacturing plants have proven to be a popular application for this powerful technology…

  • Tesla has been manifesting its own future labor force through its development of Optimus, which builds upon Tesla’s powerful autonomous driving technology.
  • Agility Robotics has had a partnership with Ford, primarily focused on last-mile logistics – package delivery – but this technology could certainly be applied to moving containers of parts around a manufacturing plant.
  • Apptronik is piloting its Apollo humanoid robot with Mercedes focused on using the technology for low-skill, physically demanding labor.
  • Sanctuary AI is testing its Phoenix humanoid robot with Magna International, one of the largest car parts manufacturers in the world.
  • And Hyundai acquired 80% of Boston Dynamics from SoftBank in 2021 for $1.1 billion, driven largely by the same reasons as the others.

Urgent Demand for Labor

It is getting harder and harder to fill these repetitive, physically taxing jobs. Millions of manufacturing jobs continue to go unfilled, turnover in the industry is consistently high, and these problems are getting worse every year.

It’s an urgent problem to solve.

We know that Tesla has been testing and employing Optimus in-house since December. And with the new upgrades announced for Figure 02, I figured that BMW would be eager to put the new tech to work full-time.

But the trial with Figure is now over. And BMW has “no definite timetable established” for adopting Figure 02 in its manufacturing plants.

That tells us one thing…

Despite Figure raising $845 million to date, and despite Figure 02 boasting hands with 16 degrees of freedom, a 20-kilogram payload, five hours of runtime, three times the inference compute of Figure 01, and six onboard cameras… Figure 02 still isn’t as efficient as a human.

We can see that in the video shown earlier, but the fact that BMW isn’t moving forward full-time tells us a lot more.

More interesting in Figure’s announcement is that it redesigned the hardware and software from the ground up for this release. Figure also revealed that it partnered with OpenAI for Figure 02’s software.

Source: Figure

Figure also refers to a “vision language model” as part of the new software for Figure 02. While not explicitly stated, I believe that Figure is relying on OpenAI’s multimodal large language model (LLM) GPT-4o, which is explicitly designed to take visual input and reason over it to some extent.
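To make that concrete, here is a minimal sketch of what a vision-language-model call can look like, using OpenAI’s Python SDK and GPT-4o. The camera frame, prompt, and task here are hypothetical stand-ins; nothing about Figure’s actual integration has been made public.

# Minimal sketch of a vision language model call, assuming OpenAI's
# GPT-4o via the official Python SDK. The image file, prompt, and
# task are hypothetical; Figure's real camera-to-model pipeline is
# not public.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a single camera frame so it can be sent inline.
with open("camera_frame.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe the sheet metal part in view and "
                     "where it should be placed on the fixture."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)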

This announcement didn’t exactly come as a surprise, since OpenAI is a major investor in Figure. The adoption of OpenAI’s software is a tacit acknowledgment of Figure’s biggest weakness: its lack of software expertise. The lack of a cerebral cortex.

The lack of a brain.

If It Only Had a Brain

Which is what reminded me of the Scarecrow in the first place.

The hardware for a humanoid robot, while complex, is technologically the easier problem to solve.

The really hard part about developing an autonomous general-purpose humanoid robot is the software – the brain.

And this is where Tesla’s Optimus is light-years ahead of the competitive field.

After collecting billions of miles of data from Teslas driven on Autopilot and Full Self-Driving (FSD), Tesla has an artificial intelligence capable of seamlessly driving a machine from point A to point B. It can infer the correct actions in a split second, even in completely unstructured environments where no Tesla has driven before.

This autonomous software is the foundation for Optimus’ “brain.” And much like a human brain, that software has been trained on video inputs: exactly what the cars “see” in real-world conditions.

The inputs for Optimus are the same. Computer vision is used to “see” the environment so that it can carry out the tasks that it has been instructed to perform.

Optimus processes the world in much the same way that we do. It sees its environment and then reasons about the most efficient way to complete a task. No other player comes close to what Tesla is doing right now with humanoid robots.
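As an illustration of that see-then-reason pattern, here is a toy sense-plan-act loop in Python. Every function and the simulated pick-and-place world are hypothetical placeholders for illustration only; this is a sketch of the general architecture, not Tesla’s or Figure’s actual software.

# A toy sense-plan-act loop illustrating the architecture described
# above. Everything here is a hypothetical simplification: a real
# robot would run a vision model over camera frames and drive
# physical actuators.

def perceive(world):
    """'See' the environment. Here we just read a simulated state."""
    return {"part_location": world["part"]}

def plan(observation, goal):
    """Reason about the next action given what was seen."""
    if observation["part_location"] == goal:
        return None                      # task complete
    if observation["part_location"] == "bin":
        return ("pick", "bin")
    return ("place", goal)

def execute(action, world):
    """Carry out the action by updating the simulated world."""
    verb, _ = action
    world["part"] = "gripper" if verb == "pick" else "fixture"

# Simulated pick-and-place: move a sheet metal part from a bin
# onto a fixture, one sense-plan-act cycle at a time.
world = {"part": "bin"}
while (action := plan(perceive(world), "fixture")) is not None:
    print("executing:", action)
    execute(action, world)
print("done, part is on the", world["part"])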

OpenAI’s GPT-4o is a great platform to enable us to speak with Figure 02. It’s like having a walking, talking chatbot to communicate with. An advanced chatbot, sure… But it’s nothing like having a brain built on fully autonomous software capable of navigating and operating within any environment.

It’s great to see the progress on the hardware at Figure, but the big problem to solve is autonomy.

Source: Figure

I’d be willing to bet that if Figure 02 could think, it would say:

“If I only had a brain…”

