What Will You Name Your Robot?

Jeff Brown | Nov 11, 2024 | Bleeding Edge | 7 min read

Most people are very surprised to learn that commercial aircraft pretty much fly themselves.

Except for a few minutes at take-off and landing, commercial aircraft are basically on autopilot.

Passengers aren’t very bothered when they hear this. After all, they assume there are two pilots up front monitoring the situation.

They can’t see inside the cockpit, though, so they don’t really know if the pilots are there or if they’re sleeping. But we’re all conditioned to believe that they are there… and that they are “flying” the plane.

When it comes to autonomous vehicles, though, it’s a different story.

The reactions around autonomous technology are quite varied.

When it comes to vehicles, I often hear back from those who enjoy driving – they aren’t interested in handing that task over to a self-driving car.

Others are too nervous about trusting the technology. Unlike a commercial aircraft, passengers do see the “cockpit” of the autonomous vehicle, and they also see all of the traffic around the car. That can be unnerving, especially for those who aren’t familiar with the technology.

As regular readers know, I’ve been testing the technology for years. In fact, the very first time I sat down in a self-driving vehicle was in 2011 on NASA’s Ames campus in Mountain View, California. It was an early prototype of Google’s self-driving technology, which eventually became Waymo.

That was 13 years ago. It wasn’t safe at all back then. But today, in a Waymo or Tesla, it’s a very different story.

I continuously test every software release from Tesla and am more impressed each time. I rarely have to touch the steering wheel anymore. And I have to say, it’s much less stressful and more enjoyable to be driven by Tesla’s AI than it is to drive myself.

Last week, I had a lot of fun testing the latest version of Tesla’s full self-driving (FSD) software for its Cybertruck, version 12.5.2.2. It’s a slightly different software release than what I’m using on my own Model Y, but they are very close.

I’m so comfortable with the technology… that I did something some might think is crazy.

Blindfolded & Hogtied

I “hogtied” my hands behind my back…

I “blindfolded” myself with fully blacked-out sunglasses…

And I let the Tesla do all the work.

If you look closely at the sunglasses I’m wearing in the photo below, you’ll see that the interior of the lenses is completely blacked out. We taped black paper to the inside of the glasses so I couldn’t see a thing – not kidding.

The latest version of Tesla’s FSD allows the “driver” to wear sunglasses. So I figured out a “hack” for being “blindfolded” – by blacking out the inside of the sunglasses.

In other words, I fooled the computer vision in the Tesla into thinking that I was watching the road. But in reality, I couldn’t see a thing.

In previous versions of the Tesla tech, the car would “nag” the driver to touch the wheel if it detected that the driver wasn’t paying attention. But in May 2024, the tech had become good enough that Tesla removed the steering wheel nag.

Now, I just had to keep my head looking forward, and the AI didn’t nag me at all to touch the wheel. I just sat back and enjoyed the ride.

I don’t recommend this to anyone. I just performed the experiment to make a point: The technology is very real, very functional, and on the cusp of widespread adoption and regulatory approvals.

Not everyone will adopt it. It will certainly take some time for passengers to get comfortable with the technology. But the safety data, in time, will show that this technology is far safer than human drivers.

And the convenience of being able to relax, or get some work done and be productive, or read, or play a game – instead of using that time focused on driving – will eventually win out. Any product or service that removes friction, reduces stress, and recaptures time is a guaranteed winner.

Which brings me to humanoid robots…

We Grow Fond of the Frictionless

Unlike with self-driving cars – which are, after all, just big robots on wheels – most people don’t have any fear or concern about the idea of an autonomous humanoid robot performing undesirable tasks.

The simple idea of having a robot in the house to perform cleaning and do the laundry is something most people get really excited about. Even today, consumers tend to be very fond of their Roombas from iRobot (IRBT).

iRobot’s Roomba Combo 10 Max | Source: iRobot

So fond, in fact, that about 80% of owners give their Roomba a name. Popular choices include Mr. Clean, Furminator, Dirty Harry, Optimus Grime, DJ Roomba, Mr. Sweep, Mr. Roboto, and Meryl Sweep.

No disrespect to the Roombas, but they aren’t very smart. They still don’t know how to avoid the indoor doggy accident – which makes a terrible mess. And they clearly can’t do the dishes or the laundry.

But what if a robot could? What if a general-purpose robot could wash and fold clothes, clean the dishes, clean up a mess, and tidy up the kids’ rooms… all with a simple command? Would that interest you?

Just imagine how much time each week would be recaptured by not having to do these tasks. And aside from the cost of the robot itself, the only labor cost would be electricity.

That’s exactly what’s about to happen within the next couple of years. We’ve been exploring the latest developments with Tesla’s Optimus, and most recently with Boston Dynamics in The Bleeding Edge – Smart Money is Coming for the Robots.

What makes these common daily tasks around the home such challenging tasks is that they require fine motor skills and the ability to perform unstructured tasks. Unlike moving a part from one bin to another, dishes can be scattered everywhere. Each load of laundry will be different. And any room will be messy in countless different ways.

One of the more interesting companies tackling this problem is one that I suspect almost no one has heard of. And it’s a company we should be tracking.

Artificial Physical Intelligence

Founded early this year by two PhDs from UC Berkeley and Stanford, Physical Intelligence appears to have made some incredible progress.

I’ll let the short videos below speak for themselves.

Source: Physical Intelligence

Above, we can see Physical Intelligence’s (PI) prototype taking clothes from the dryer and placing them in a bin in preparation for folding. The video of the autonomous robot has been sped up 10 times so that we can watch a lengthy task in a short clip.

In the next video, the PI robot performs an even more complex task of folding the clothes on the table. Again, the video is sped up for brevity.

Source: Physical Intelligence

It’s worth noting that the actual speed of the PI robot is slower than that of a human. With that said, while these tasks are very simple for humans to perform, they are extremely complex for an AI.

PI refers to Moravec’s paradox, which observes that it is relatively easy for a computer to best the world champion at the game Go, or to develop a novel synthetic compound – but extremely difficult to teach a computer something like how to fold clothes.

Physical Intelligence has developed its own AI foundation model capable of learning and performing these complex tasks that require perception and fine motor skills.

It refers to its technology as artificial physical intelligence, a name that speaks to its foundation model interacting with the physical world. For anyone interested in the research paper on this topic, you can find it right here.

PI trained its foundation model on a wide range of tasks including bussing dishes, packing items into envelopes, folding clothing, routing cables, assembling cardboard boxes, plugging in power plugs, packing food into to-go boxes, and picking up and throwing out the trash.

These tasks are all time-consuming and monotonous – ones that I suspect most of us would gladly hand over to an intelligent robot.

The most remarkable aspect of PI’s technology is that it was developed in less than nine months. This explains why, after raising an initial $70 million at a $470 million valuation at the beginning of the year, the company raised an additional $400 million at a $2.4 billion valuation last month. How’s that? From a seed round to a $2.4 billion valuation in seven months.

It doesn’t matter that the technology is currently slower than a human performing the same task. It doesn’t even matter if it makes a mistake from time to time…

What we should focus on is that the foundation model is learning from each task. It is getting smarter, improving its consistency, and in time will improve its speed. This technological development is happening at an exponential pace.

These intelligent robots are coming to our businesses and our homes. And we’re going to welcome them and celebrate our recaptured time.

What will you name yours?

Jeff


Want more stories like this one?

The Bleeding Edge is the only free newsletter that delivers daily insights from the high-tech world, along with the topics and trends most relevant to investors.