Colin’s Note: Continuing on this week’s deep dive into what’s next for AI…
We look today at the changing nature of large language models – or LLMs.
It’s the model that processes the data fed to an AI, learning patterns in that data so it can predict and generate responses. It’s like the brain of an AI.
See, developments in these models are plateauing somewhat. Your favorite generative AIs are being fed the same data… And there can only be so much diversity among LLMs when they’re all getting more or less the same information.
Because of that, I believe we’re going to see a powerful shift in the world of LLMs… And it’s going to see industry heavy hitters outpaced by two companies that have an edge thanks to their massive amounts of what an LLM loves most – constant streams of new data.
It’s all in this week’s video. Just click below to watch, or, as usual, find the transcript below.
Subscribers, welcome back to The Bleeding Edge. I hope you all are having a wonderful day.
Now, this week we’re asking the question, what’s next with artificial intelligence (AI)?
So just like we pointed out on Monday… Nvidia, Microsoft, Supermicro… these stocks have skyrocketed over the past year. But that doesn’t mean that the best AI investments are over.
In fact, it’s just the beginning.
So if you’ve missed out on the meteoric rise of the semiconductor industry or Nvidia over the past year, not to worry. It’s just like in the 1990s when we were buying modems and personal computers and that was just the beginning of the internet age. Top-performing stocks like Google and Netflix weren’t even around yet. Just like then, the best is yet to come for AI.
Where we’ll focus today is on large language models – or LLMs for short. These are the powerful brains behind products like ChatGPT or Google’s Gemini.
The way large language models work is that companies like OpenAI and Google obtain large amounts of data. That data is then put through a training phase, if you will, which consumes enormous amounts of computational power.
This is, in part, why the semiconductor industry has performed so well over the past year. There has been a race among the largest tech companies to build the large language models behind powerful software products like ChatGPT.
Two interesting things have occurred over the past year.
First, I think we’re seeing kind of a flattening or maybe even a plateauing of the large language models’ capabilities.
So just last week, AI startup Anthropic released several new large language models. That includes its most powerful one – the Claude 3 Opus. But really, to the untrained eye – or more specifically the general public, like us – it would be tough to discern any difference between it and ChatGPT.
That’s because the method of training these models, and more importantly, the data used is all largely standardized at this point. Everyone is scraping the same data from Wikipedia and other free informational websites.
It’s similar to the progression of search engines back in the 1990s. Early on – I remember this – there were noticeable differences between the search results you’d get on, say, AltaVista versus Yahoo.
But over time – and even today – the search results on Google aren’t significantly better than Bing’s. It just so happens Google has marketed its search product better and gotten it in front of more people.
I believe over the next year, we’re going to see a powerful shift in the world of large language models. The big three – which are OpenAI, Google, and Anthropic – are going to see meaningful competition from two well-known foes with a huge advantage. Two companies that will not only have an edge over the large language model leaders in terms of data but also in the method of distributing the product to the masses.
I’m talking about Meta – which is the parent company of Facebook, Instagram, and WhatsApp – and Twitter… or X, as Elon Musk would have us call it.
Both companies, though, have a near endless firehose of data coming onto their platforms every day – really, every second.
This data is increasingly valuable in an era where large language models are scraping and parsing the same data sets over and over.
Not only that, but both of these companies also have a unique approach to developing large language models that will set them apart from Google and Microsoft.
Both Meta and Twitter are going the open-source route.
Now, “open sourcing” is the process of releasing the source code – essentially the secret sauce of the language model – to the general public, often for free.
Meta’s Llama model is one of the most popular and widely used open-source language models on the market today. In fact, Nvidia has used it internally to train its employees and chip designers. It’s been used thousands of times in the AI community.
Many in the community are anticipating Meta releasing an updated model sometime later this year. Mark Zuckerberg has hinted at that.
Elon Musk and Twitter are expected to do the same. Just days ago, Elon Musk said he would open-source Grok – his company’s large language model. It’s known for being somewhat less guarded and restrained than the corporate models you’re seeing from tech giants like Microsoft and Google.
In many ways, this is Elon Musk simply poking at the AI community as he has an ongoing lawsuit with OpenAI – the company he helped found in the early days. But let’s not really get lost in the Elon Musk drama. That’s fairly easy to do.
Open-source models are a huge threat to the closed proprietary models that Google and Microsoft are spending billions on to protect.
They also give Meta and Twitter an army of free research and development staff who will be creating and using these models… just like Nvidia used the Llama model earlier this year.
Look, I know it’s still pretty early in 2024… But as we move through this year, the open-source models will help accelerate AI advancement and mitigate the advantage Google, Anthropic, and Microsoft have today.
As an investor, the easiest way to benefit is to maintain your position in the semiconductor and data center stocks. A more advanced way to play this trend is to invest in companies that help accelerate and reduce the cost of large language model training and inference.
For paid-up subscribers of our Exponential Tech Investor advisory, the latest recommendation is poised to do just that. Expect a full write-up here very soon.
Folks, that was The Bleeding Edge for today, and I will be back on Friday. Until then, good luck with your investments.
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.