Yesterday was yet another landmark day in artificial intelligence (AI).

Meta released its Llama 3.1 large language model (LLM).

As a reminder, LLMs are a form of generative AI capable of holding conversations, writing software code, answering just about any question, drafting legal documents, providing customer support, translating languages, writing articles, and more.

Generative AI has already proven to be immensely useful. And companies that have developed this technology continue to aggressively invest in training the next-generation models.

You see, they all know that generative AI is just a stepping stone to a much greater prize: artificial general intelligence (AGI).

They’re investing their billions in hopes of getting to AGI first.

Outperformance

Meta is one of those companies that has been at the forefront of generative AI development. And with $58 billion in cash and an additional $43 billion in free cash flow that will be generated this year, it has no shortage of capital to invest in AI.

The release of Llama 3.1 is significant and worthy of our time because it is the first frontier LLM to be released as open-source AI. Kind of… we’ll get to what that means.

The Llama 3.1 405B foundation model has 405 billion parameters, which is generally on par with the most advanced LLMs. OpenAI’s GPT-4, which powers its ChatGPT chatbot, is widely reported to have roughly 1.76 trillion parameters.

But while parameter count generally correlates with the performance of generative AI models, that doesn’t mean smaller models can’t outperform in certain areas.

Source: Maxime Labonne

Shown above, the green “open-source” AI models are compared with the red closed-source models over the last couple of years. The most striking takeaway is that the more advanced open-source models have approached the highest-performing closed-source models.

We can see in the upper right corner that Meta’s Llama 3.1 405B is very close to Anthropic’s Claude 3.5 Sonnet, which was just released on June 20. The metric measured in the chart is the Massive Multitask Language Understanding (MMLU) benchmark, one of many used to measure LLM performance.
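For a sense of what that benchmark actually measures: MMLU is a large set of multiple-choice questions across dozens of subjects, and a model’s score is simply the fraction it answers correctly. Here’s a simplified, illustrative sketch of that scoring loop – the question format and `answer_fn` are my assumptions for illustration, not the official evaluation harness, which typically compares log-likelihoods over the answer choices.

```python
# Illustrative sketch of MMLU-style scoring: each item is a multiple-choice
# question, and the score is plain accuracy. Official harnesses typically
# compare log-likelihoods over the answer choices; this version just checks
# the letter the model returns.

def mmlu_accuracy(questions, answer_fn):
    """questions: dicts with 'question', 'choices' (4 strings), 'answer' ('A'-'D').
    answer_fn: any callable mapping a formatted prompt to an answer letter."""
    correct = 0
    for q in questions:
        prompt = q["question"] + "\n" + "\n".join(
            f"{letter}. {text}" for letter, text in zip("ABCD", q["choices"])
        )
        if answer_fn(prompt).strip().upper().startswith(q["answer"]):
            correct += 1
    return correct / len(questions)

# Example with a trivially hard-coded "model":
sample = [{"question": "2 + 2 = ?", "choices": ["3", "4", "5", "22"], "answer": "B"}]
print(mmlu_accuracy(sample, lambda prompt: "B"))  # -> 1.0
```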

The performance of Llama 3.1 405B in that chart is significant because, in the past, open-source AI models have trailed their closed-source competition by a wide margin.

The most advanced AI models can now cost hundreds of millions to train, so there aren’t too many tech companies willing to spend that kind of money and make it available for free.

I’ll get to Meta’s motivations in a bit.

As we can see in the table below, Llama 3.1 405B’s performance is state-of-the-art in many categories. The light blue highlighted boxes indicate which of the LLMs had the highest performance for a number of common AI benchmarks.

Source: Meta

Llama 3.1 405B outperforms both OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet in several categories. Prior to Meta’s release yesterday, those were the two leaders in the industry.

And because Llama 3.1 is open-source AI, just about anyone can download any of the three Llama 3.1 models (8B, 70B, and 405B parameters) free of charge.

And that means state-of-the-art generative AI software, competitive with the most advanced proprietary models, is now available to anyone on the planet.
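To make that concrete, here’s a minimal sketch of what “download it and run it yourself” can look like with the Hugging Face transformers library. The model ID below is my assumption about how Meta publishes the weights (you still need to accept Meta’s license on the model page), and the 8B variant is used here because the 405B model requires a serious GPU cluster.

```python
# Minimal sketch: load an open-weight Llama 3.1 checkpoint and generate text.
# Assumes the Hugging Face "transformers" and "accelerate" packages are installed
# and that license access has been granted for the meta-llama repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repo name; smallest of the three sizes

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one paragraph, explain what an open-weight model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```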

Kind of feels like a tipping point, wouldn’t you say?

There is, however, some nuance in what is and isn’t “open-source.”

Open-Source Versus Open-Weight

Meta doesn’t share the underlying software architecture, its algorithms, or the specific data sets that were used to train Llama 3.1. A true open-source release would include these things.

This is significant because it makes it very difficult to understand and identify the biases or mistakes in Meta’s model.

And we know that Meta not only has a history of programming political narratives and biases into its products, but also a history of censoring and even banning people on its social media platforms for opinions and even scientific research it doesn’t agree with.

What Meta’s “open-source AI” release represents is an AI model that can be trained further… with additional data and fine-tuned for specific needs.

What Meta released is really an open-weight model: it published the pre-trained weights of Llama 3.1, the learned parameters the model acquired during its training.

This is still useful to the industry though. It gives individuals, companies, academia, and governments the ability to leverage, tailor, and fine-tune the model for specific purposes. And despite any biases, there is still incredible utility provided by open-weight models.

The most important implication of this is, of course, that everyone will have access to technology that cost more than $1 billion to create… for free. It goes without saying that this will accelerate the adoption of generative AI for countless applications.

The only costs when working with a royalty-free, open-weight model are the cost to fine-tune that model (which is nominal)… and the cost of the computation to run it (compute is widely available and competitively priced, as we know).
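For a sense of why the fine-tuning cost is nominal: the common approach is parameter-efficient fine-tuning such as LoRA, where the base weights stay frozen and only small adapter matrices are trained. Below is a rough sketch using the peft library – the model ID, adapter settings, and dataset are illustrative assumptions, not Meta’s recipe.

```python
# Rough sketch of low-cost fine-tuning with LoRA adapters via the "peft" library:
# the base weights stay frozen and only small low-rank matrices are trained,
# which is why the cost is a tiny fraction of pre-training.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repo name
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights

# From here, train the adapters on your own domain data with the usual
# transformers Trainer (dataset and hyperparameters omitted for brevity),
# then save just the adapter weights with model.save_pretrained("my-adapter").
```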

I know what some of us are thinking… why? Why would Meta spend $1 billion on technology development and make it available for free?

Meta’s Motivations

Meta’s typical response to that question always centers around a bunch of virtue signaling like “we want to contribute to the betterment of the planet,” “we believe in openness, accessibility, and democratization of technology,” “it will help us connect everyone everywhere,” and it’s “good for the world.” I’m paraphrasing, of course, but this is the message.

But we shouldn’t forget that Meta’s entire business model is based on surveillance, data collection, and monetization of our data by selling access to advertisers. It is one of the most aggressive for-profit companies in history. And despite sitting on $58 billion in cash – and growing – it never paid a dividend prior to this year.

In CEO Mark Zuckerberg’s letter announcing the company’s stance on its open-sourcing of Llama 3.1, he was remarkably honest about today’s reality as a justification for releasing the model to all. He wrote:

Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities.

I’m paraphrasing again, but what he basically said above is that China will find a way to steal our technology anyway, so we might as well make it available to accelerate access and development in the Western world.

Zuckerberg went further to say…

I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.

And he’s right. This is the way.

This is how an e/acc (effective accelerationist) thinks. It’s the opposite of the “decels,” who want to decelerate, control, and slow things down – which would be disadvantageous for the West and advantageous for China.

The timing of the Llama 3.1 release was also no coincidence, as the destructive California AI bill, SB-1047, will be voted on next month. For those who want to catch up on this topic, I recommend reading Outer Limits – The Power Grab for AI in California.

Meta released Llama 3.1 to pre-empt the vote on SB-1047, pushing out a state-of-the-art LLM in hopes of influencing the vote and defeating the bill. I sincerely hope the move works, for everyone’s benefit.

But aside from the possible benefit of defeating the horrible CA legislation, the release of Llama 3.1 will benefit Meta to the tune of billions of dollars. Meta is so large and influential that it can drive technology adoption across the industry, much in the same way that Google made Android the dominant smartphone operating system by giving it away for free.

When a large part of an industry standardizes on a software or hardware platform, there are always cost benefits from that scale. Meta develops its own semiconductor designs, which TSMC manufactures, and its software is naturally optimized to run on that hardware. So the more widely the industry adopts similar technology, the lower Meta’s costs as a business.

For well-resourced companies, this is a brilliant strategy that has been employed before by Meta, Google, NVIDIA, and so many others.

Meta just dropped a “bomb” in the world of artificial intelligence. It’s an accelerant for AI… as if the tech wasn’t moving fast enough already…