Commercial Space Travel

Jeff Brown | Dec 20, 2024 | Bleeding Edge | 13 min read


I hope that everyone is ready for some fun over the holiday break. This was certainly an interesting and volatile year.

We’ll continue to publish over the next two weeks, including some classic issues of The Bleeding Edge that are worth re-reading given their relevance and importance, as well as some featured insights from my days journeying to the outer limits before my return to Brownstone.

My team and I tend to use the next two holiday weeks for some deeper research and analysis on topics that we believe will be prominent in the next year.

We will publish an AMA as normal for the next two Fridays, so we’re very happy for anyone to write in with questions and feedback (which you can do right here). We’ll do our best to answer them, and my team and I will always read them all.

And as you might’ve read earlier this week in The Bleeding Edge – The Worst Path Is Inaction, when I rejoined Brownstone earlier this year, one of my main goals was to help restore the trust of my subscribers that was broken by my unexpected and unplanned departure in 2023.

Toward that end, my team and I have been hard at work to rebuild and grow the value of our services – both free and paid.

One group that had been particularly neglected was the members of my Brownstone Unlimited research service.

I see these subscribers, in many ways, as my “partners” in my business. They are my most valuable subscribers, with access to all of my research, and they’re the first to know about – and help us test – new Brownstone Research services.

For this reason, we’ve been very focused on restoring value to our Brownstone Unlimited memberships as quickly as possible.

We’re on the path to success in this endeavor… and we’re excited to carry on the mission next year and beyond.

This is why I’d like to invite you once more to join the ranks of Brownstone Unlimited membership with a special year-end offer.

Earlier this week I sat down and revealed all the details of what Brownstone Unlimited membership entails, as well as the details of the special, limited-time offer for you to join. If you missed it, you can watch it here.

And, as always, if you have any questions, our client services team is always ready and happy to help you. You can reach them at 888-344-8038 anytime between 9 a.m. and 5 p.m. ET.

I hope you’re all as excited about what the future holds as we are here at Brownstone Research. Given the incredible technological shifts underway, the path towards a future of abundance has become very clear. We have so much to look forward to.

Wishing everyone a wonderful Christmas…

Happy Holidays,

Jeff

Propulsion-Powered Space Travel?

Once again, thank you and your team for all that you do to keep us informed on the future! It has often been said that without a new form of propulsion, we are doomed to our galaxy, if that!

Let’s say that some new method of propulsion is developed soon, and we would be able to travel at or near the speed of light. I was wondering if you or your team have any idea how the human body would react to that kind of speed? Or would it be necessary to be in cryogenic sleep? Thank you for your opinion.

– Gordian B.

Hello Gordian,

You’re right. Without a breakthrough in propulsion for interstellar travel, not only are we “stuck” in our own solar system, but we’re really limited to the near vicinity of Earth (i.e., the Moon and Mars). This is disappointing, as Mars is far from an ideal planet in terms of habitability.

So we really need a major breakthrough in propulsion technology. Without the aid of powerful artificial intelligence, I would be pessimistic about the next few decades. But if my prediction is correct that at least one company will achieve artificial general intelligence (AGI) before the end of 2026, then we do have reason to be optimistic.

An AGI will have the ability to reason and conduct self-directed research. This will empower the best rocket scientists to leverage AGI to develop a breakthrough in propulsion. And that means that we could see a breakthrough before the end of the decade and commercialization of that technology within 10 years.

As for the human body withstanding those high speeds, the most difficult problem to solve for is the acceleration. That’s what’s hardest on the human body. Slow, steady acceleration would not be a problem, but too much acceleration could kill us.

One potential solution would be to develop a liquid immersion tank that would fill our lungs with a fluid with a density similar to that of water. Of course, the body would also be put in some form of stasis, like a state of hibernation.

The fluid solution would allow the body to endure higher accelerations, which would ultimately shorten any long interstellar trips. But there are still limits during the acceleration and deceleration phases.
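To put some rough numbers on why acceleration, not top speed, is the real constraint, here is a minimal back-of-the-envelope sketch in Python. It assumes a ship holding a constant proper acceleration and uses the standard relativistic rapidity relation to estimate how long the crew would have to endure continuous thrust to reach 99% of light speed. The function name and the g-load values are just illustrative assumptions, not anything from an actual mission design.

```python
import math

C = 299_792_458.0        # speed of light in m/s
G = 9.81                 # one "g" of acceleration in m/s^2
SECONDS_PER_YEAR = 3.156e7

def years_of_thrust(frac_of_c: float, g_load: float) -> float:
    """Shipboard (proper) time, in years, needed to reach the given
    fraction of light speed under constant proper acceleration.
    Uses the relativistic rapidity relation: tau = (c/a) * atanh(v/c)."""
    a = g_load * G
    return (C / a) * math.atanh(frac_of_c) / SECONDS_PER_YEAR

# Hypothetical g-loads, just to illustrate the trade-off described above
for g_load in (1, 3, 10):
    print(f"{g_load:>2} g sustained: ~{years_of_thrust(0.99, g_load):.2f} years to reach 0.99c")
```

At a comfortable 1 g, it takes roughly two and a half years of continuous thrust (in ship time) just to approach light speed. The only way to shorten that meaningfully is to sustain g-loads far beyond what an unprotected human body can tolerate, which is exactly why concepts like liquid immersion and stasis come up.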

How Can We Trust AI?

As an ordinary citizen not involved in medical research, quantum physics, or high-tech software writing, I can say without reservation that the only thing I have seen out of AI is just garbage.

Microsoft and Apple want to put AI on my devices that delivers ridiculous answers to everyday questions. I assume that people working in the above-mentioned fields – and others well funded enough to have AI “done right,” whatever that is – can get some benefits.

But if these devices actually hallucinate, how can they be trusted for anything important? And the potential for bad actors to do devastating harm with not much more than a laptop should be frightening to everyone.

At least nuclear and biological weapons require a sophisticated infrastructure to develop. Yes, the cat is out of the bag, but there’s no reason to be happy about it. We already have enough mad scientists developing deadly viruses on the taxpayer dollar!

– Robert S.

Hi Robert,

You’re definitely not wrong about generative AI hallucinating at times. This is a well-known side effect that we’ve seen with many large language models (LLMs).

But I will say this: the industry is rapidly improving on this “bug.” What’s available now is dramatically better, with far fewer hallucinations than what we saw back in November 2022 when ChatGPT was originally released.

Not only has this problem been reduced, but the latest generative AIs are also capable of forms of reasoning, making them that much more useful.

As the technology develops, I’m far less worried about hallucination than I am about individuals programming AIs with bias and political agendas. We’ve already seen this from Google, Meta, Microsoft/OpenAI, and others. Not only is this wrong, it can actually be dangerous.

I wrote about this topic in The Bleeding Edge – Orwell or Abundance some weeks ago…

[The] ideological programming of LLMs is incredibly dangerous.

The more powerful and useful AI becomes – and the more widely it is used – the more it will take over all of our “feeds” of information.

Diversity of thought and ideas is our strength. Groupthink is a horrible weakness and nothing but a mind virus. What happened during the pandemic should be proof enough for all to understand that.

Users will blindly trust that the information AI is serving them is factual and accurate. In fact, for most people, this is already happening today with search engines. Most people believe that a Google search only presents the “facts.”

[…]

The reality is that AI – and soon artificial general intelligence (AGI) – will become an operational engine for any advanced society. Human behavior has a strong tendency to act like water. It flows through the path of least resistance.

When any organization, government, or individual can easily employ a technology to take over tasks and make their lives easier, they will. In a split second.

Who wants to suffer? Who wants to do things the hard way, when there is a solution to doing those same things quicker and cheaper?

This is one of the greatest product strategies of any company. Anyone who can develop a product or service that removes friction in a task, and saves time and money, will win.

That’s why LLMs and ultimately AGI are so dangerous if misused. They radically reduce friction in life, and they can and will be able to do so quickly and cheaply.

And that means their widespread adoption will be faster than any technological adoption we’ve ever seen before.

If an AI is programmed with things that are not true, there are real-world scenarios in which an AI would come to a potentially disastrous conclusion that could cost lives.

This is why building AIs on facts and evidence-based data is so critically important.

AI will become the foundation for all learning, as well as an operating system for both corporations and governments. If the data it trains on is wrong, so will be the outputs. And as it becomes that foundation for learning, AI can be used to brainwash entire populations.

This is the catalyst for Elon Musk and his team at xAI to build a “maximum truth-seeking AI.” This is precisely what the world needs to avoid a dystopian and totalitarian nightmare.

Robert, I’d like to give you some “homework” over the holidays. I’d like you to try out Grok, which is the AI developed by xAI and being used on the X platform (Twitter). You can interact with it, explore subjects of interest to you, question it on subjects you know well, and use it to create images as well. I’m confident you’ll find it interesting and useful.

And if possible, find a friend who has full self-driving (FSD) on a Tesla, and ask them to have the Tesla drive you around. I’d like you to experience the latest version of the FSD software (version 13.2). I guarantee it will change your perspective on AI. I no longer drive my Tesla – it drives me.

AI has been used to accurately predict how all known proteins in life fold, which is an impossible task for humans. It is now widely used in the legal field. Some 90% of software developers use AI to code. It is now widely used in creative fields for both text and image generation. And AI is also used in a wide variety of applications – from flying drones autonomously to optimizing supply chains.

It is already everywhere, precisely because it is capable of doing highly valuable tasks, and tasks that are impossible for humans to perform in any reasonable amount of time.

And you’re right, bad actors will always do bad things with technology. That’s why the good actors need to develop the technology to defend against its misuse. Deceleration only gives an advantage to the bad actors or totalitarian governments. Acceleration enables the good actors to stay a step ahead of the bad ones.

Avoiding Social Credit Systems

It all sounds great if you don’t mind the government bureaucrats deciding when and where you can travel.

– Richard E.

Hi, Richard. Thanks for writing in.

I believe this is in response to this month’s Exponential Tech Investor.

It’s our research service where we explore small-cap companies focused on cutting-edge technology poised for exponential growth.

It’s my flagship high-tech investment research product. I believe 2025 is going to be a big year for Exponential Tech once interest rates start coming down and we see renewed fervor in the small- and micro-cap sectors. As interest rates come down, there is always a sector rotation into higher-growth, smaller companies by institutional capital looking for more attractive returns.

Our most recent issue went out earlier this week. In it, I dove into some substantial changes potentially coming to the U.S. government in the new year, driven by new policies – and in particular to a certain transportation agency everyone loves to hate…

One of the largest and most important campaign promises of the Trump administration is to cut government spending dramatically and to reform or dissolve inefficient and unnecessary federal departments.

As part of this effort, Elon Musk and Vivek Ramaswamy have committed to heading up the newly created Department of Government Efficiency (DOGE). The name is a humorous play on the meme coin Dogecoin (DOGE)… but Musk and Ramaswamy are deadly serious in their mission.

[…]

Musk and Ramaswamy claim to have identified nearly $2 trillion in spending cuts that can be implemented within the first 12 months of the incoming Trump administration. They believe the bulk of these cuts can be done through direct reforms without the need for Congressional approval. The goal is to balance the federal budget… or at least get close.

These spending cuts include a dramatic decrease in the number of government employees. There’s no way around it.

We don’t intend to come across as callous here… but there’s just no way to cut federal spending dramatically without downsizing the government.

The challenge for DOGE is that most federal employees are protected by federal civil service law. It says that federal employees can only be fired for cause and that the agency must prove their conduct or performance was poor enough to warrant dismissal. Federal employees can challenge any such termination attempts.

Interestingly, the Transportation Security Administration (TSA) is not covered by federal civil service law. TSA employees can be fired without cause – no proof of poor conduct is necessary.

The TSA currently employs about 60,000 people – and anyone who travels frequently can see how inefficient the agency is.

Again, we don’t mean to sound callous here. But before the TSA’s creation in 2001, private security firms operated far more efficiently with far less manpower.

This makes the TSA low-hanging fruit for the Trump administration’s plan to cut federal spending dramatically.

Major policy changes like these naturally create incredible investment opportunities as certain companies in the private sector can benefit greatly. That’s what this month’s issue was about.

But to your point, that technology is unrelated to the government’s ability to control where and when we travel.

With that said, I completely share your concerns about this kind of totalitarian control. In fact, that’s precisely where the U.S., Canada, Australia/NZ, and Europe have been heading. Many of these governments, in cooperation with the World Economic Forum, have been instituting plans to create a kind of social credit system, planned to be implemented in conjunction with a central bank digital currency (CBDC).

The idea is to control our behavior based on our activities. And if we deviate from the desired behavior, using a CBDC and government-controlled digital wallet, the government could restrict our travel and what we spend our money on. It could even freeze our wallets and/or deduct funds as a penalty.

This is the technology we should all be deeply concerned about. It’s what we need to stand against.

But the technology I wrote about is something that will mean less friction moving through security lines and more accuracy in terms of safety. That’s something we’ll all benefit from, and I’m confident it can be done far more efficiently and less expensively than if the government is running it.

Small Drone Technology

Hello Brownstone.

I was wondering if you could research the small drone industry specifically related to the U.S. military.

As you probably know, there is a big push to use only drones manufactured in the U.S. Just a couple of small companies in this space are RCAT, PDYN, and UMAC. RCAT recently scored a military contract. The Short Range Reconnaissance Program uses PDYN software for navigation purposes that can operate without the use of GPS.

The industry, although small, incorporates all the latest in cutting-edge technology, including AI. There has been a run-up in these companies as of late, but it also appears to be early in the game. I also thought that possibly one of these companies is a potential takeover target for a bigger defense contractor like RTX. I find your research valuable and solid. Thank you for your consideration.

– Michael T.

Hi Michael,

Thanks for writing in with the interesting question.

This sector has been brought to life, sadly, due to this ridiculous proxy war the U.S. has been fighting with Russia in Ukraine. This is the future of warfare, and also the future of defense, which is why we should have it on our radar.

And yes, I am bullish on this space and am tracking it closely.

Private company Anduril has been a real leader in this space, basically reinventing the industry and leveraging machine learning and AI. It has already grown from a $1 billion valuation in 2018 to a $14 billion valuation in its most recent venture round this August when it raised $1.5 billion. Anduril will be a very interesting company to watch, and we should keep our eyes out for an IPO.

As we look into the next four years, I am actually very bullish on the defense industry. I predict that the U.S. will shift dramatically away from the warmongering of the last four years and put an end to the senseless war in Ukraine.

And at the same time, the U.S. will invest in its defense capabilities and upgrade the technology in use by all branches of the military. This is long overdue given how far technology has come in the last decade. I’m seeing signs of renewal and investment everywhere, which is why this space is so interesting.

And to your point, yes, the private and public companies developing bleeding-edge defense technologies are definitely buyout targets for larger players, both public and private.

I can’t make any specific comments about the companies you mentioned without conducting a full analysis, which is why my comments here are more general.

Here’s to a more peaceful and safer new year,

Jeff

