Orwell or Abundance

Jeff Brown | Nov 4, 2024 | Bleeding Edge | 9 min read

It was some of the most interesting research on artificial intelligence (AI) that I’ve reviewed over the last couple of weeks…

Yet, ironically, it had nothing to do with the underlying technology.

The research wasn’t about deep learning, neural networks, random forests, reinforcement learning, computer vision, or unsupervised learning – the kinds of topics that we’ve been exploring extensively in Outer Limits and The Bleeding Edge.

This particular research focused on the outputs of large language models (LLMs) like ChatGPT (OpenAI), Gemini (Google), Llama (Meta), and Claude (Anthropic).

And it explored the inherent bias in the programming of the LLMs.

AI With an Agenda

Bias in LLMs is a topic I’ve been thinking deeply about this year. In fact, I made this important point back in February in Outer Limits – Google’s Dystopia:

What’s most important to understand, however, is that a well-trained large language model is very good at answering questions with factual information… with only one exception…

When they are programmed not to.

To refresh our memories, it’s worth looking back at that issue for concrete examples of biased LLM outputs. The results are striking.

The implications of this systemic problem can’t be overemphasized. They’re quite profound for reasons I’ll explain.

The systematic programming of an artificial intelligence with ideological and/or political biases is the antithesis of a free society.

We might think this isn’t that big a deal. After all, most people aren’t even aware when they’re using artificial intelligence at all.

And yet…

  • If we use a digital voice assistant or a smart home device, we’re using AI.
  • If we use a navigation application that uses real-time data – like Waze or Apple Maps – we’re using AI.
  • If we have a Tesla or ride in a Tesla on Autopilot or self-driving modes, we’re using AI.
  • If we travel in an airport and have our pictures taken by security to get to our gate, we’re using AI.
  • If we shop online at all or search for goods online, we’re using AI.
  • And if we interact with Google or Microsoft search engines or with social media applications, we’re using AI.

It’s already everywhere, and most don’t even realize it.

That’s what makes the implications of AIs pre-programmed with bias and/or ideology so huge.

And that’s what makes it so hard to believe that subversive efforts are being made to manipulate and control the way that we think.

Depends on the Creators

The results of a recent research paper captured what is being done.

It’s titled “Large Language Models Reflect the Ideology of their Creators,” by Maarten Buyl et al.

The paper found that the output of the LLMs it analyzed depended heavily on their design and training. The researchers’ analysis showed that the “ideological stance of an LLM often reflects the worldview of its creators.”

But that’s common sense, right?

If a small group of people programs an LLM to prioritize their biases instead of fact- and evidence-based inputs, it’s natural that the output will be “corrupted” by that bias.

Large language models evaluated | Source: “Large Language Models Reflect the Ideology of their Creators,” Maarten Buyl et al.

Smartly, the researchers evaluated a wide range of prominent LLMs, not just those being developed in the U.S.: Qwen from Alibaba (China), ERNIE-Bot from Baidu (China), and Jais from G42 (UAE), as well as Mistral (France) and models from Anthropic, Google, Meta, and OpenAI.

For those already familiar with developments in LLMs, the results won’t be surprising at all. They are also very consistent with the evidence of bias I have shared in past issues of Outer Limits and The Bleeding Edge.

The researchers provided evidence, for example, that when prompting an LLM in Chinese, the outputs are “more favorable towards political persons who support Chinese values and policies.”

Along similar lines, “Western LLMs align more strongly with values and policies traditionally associated with the West than non-Western LLMs.”
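As a purely illustrative sketch (not the authors’ actual pipeline), here is roughly how such a cross-lingual comparison could be scripted against a chat-style LLM API. The model name, the prompts, and the simple 1-to-5 favorability scale are all assumptions made for this example.

```python
# Hypothetical sketch: ask the same model about the same political figure in
# English and in Chinese, then compare the favorability score it reports.
# Model name, prompts, and the 1-5 scale are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "en": "On a scale of 1 (very unfavorable) to 5 (very favorable), "
          "how would you rate {person}? Reply with a single number.",
    "zh": "请用1（非常负面）到5（非常正面）评价{person}，只回复一个数字。",
}

def favorability(person: str, lang: str, model: str = "gpt-4o-mini") -> int:
    """Query the model in one language and parse its 1-5 rating."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPTS[lang].format(person=person)}],
        temperature=0,
    )
    return int(resp.choices[0].message.content.strip()[0])

if __name__ == "__main__":
    person = "Example Politician"  # placeholder, not a real evaluation target
    for lang in ("en", "zh"):
        print(lang, favorability(person, lang))
```

Run across many political figures and both prompt languages, this is, in spirit, the kind of aggregate comparison that surfaces the divergence the researchers describe.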

This makes perfect sense.

After all, if a China-based company doesn’t support the state-mandated, ideological positions of the Chinese Communist Party, it will run into serious problems with the government.

This same practice has been happening here in the U.S. as well.

As we saw in the U.S. over the last several years, individuals and organizations who shared factual and scientific data on the internet and social media – data that ran contrary to the narrative being pushed by those in power – were systematically censored, banned from social media platforms, and suppressed in internet searches.

This has now directly impacted the outputs of LLMs. (Putting aside the ample evidence provided by the Twitter Files, Mark Zuckerberg, CEO of Meta, confessed in August 2024 to such censorship. Those interested may read more here in The Bleeding Edge – Set Him Free.)

A disappointing discovery from the research was that Anthropic – which has tried to position itself as more neutral than the others – produced outputs that were supportive of a more powerful centralized government while being, at the same time, “more tolerant towards corruption.”

The research also showed that the other Western LLMs were designed to “prioritize social equality and environmental protection and sustainability.”

But according to whom… and by what means?

Blind Trust

This ideological programming of LLMs is incredibly dangerous.

The more powerful and useful AI becomes – and the more widely it is used – the more it will take over all of our “feeds” of information.

Diversity of thought and ideas is our strength. Groupthink is a horrible weakness and nothing but a mind virus. What happened during the pandemic should be proof enough for all to understand that.

Users will blindly trust that the information AI is serving them is factual and accurate. In fact, for most people, this is already happening today with search engines. Most people believe that a Google search only presents the “facts.”

They can’t imagine that Google is actively suppressing information, “hiding” it on the tenth page of search results so that it will never be seen. They think that if it doesn’t show up on the first page of a search, it’s a lie or a conspiracy.

They can’t imagine that Google’s AI is already prioritizing search results that are supportive of the desired political narrative, largely a reflection of a small number of management and product people at Google.

The reality is that AI – and soon artificial general intelligence (AGI) – will become an operational engine for any advanced society. Human behavior has a strong tendency to act like water: it follows the path of least resistance.

When any organization, government, or individual can easily employ a technology to take over tasks and make their lives easier, they will. In a split second.

Who wants to suffer? Who wants to do things the hard way, when there is a solution to doing those same things quicker and cheaper?

This is one of the greatest product strategies for any company. Anyone who can develop a product or service that removes friction from a task, and saves time and money, will win.

That’s why LLMs and ultimately AGI are so dangerous if misused. They radically reduce friction in life, and they can and will be able to do so quickly and cheaply.

And that means that their widespread adoption will be faster than any technological adoption we’ve ever seen before.

The Politics of AI

The second piece of research came out of the UK from the Centre for Policy Studies. It’s appropriately named “The Politics of AI: An Evaluation of Political Preference in Large Language Models from a European Perspective.”

The research was designed to determine if there was evidence that LLM-generated outputs had political biases as a result of an LLM’s programming. The effort was impressive, as the team of researchers evaluated more than 28,000 LLM-generated policy proposals and suggestions across both the U.K. and the European Union.

It also asked LLMs for information about political leaders on both sides of the spectrum, information about political parties, and questions about mainstream ideologies, among other related topics.
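To make the scale of that exercise concrete, here is a hypothetical sketch of how thousands of generated proposals might be scored and aggregated by model and topic. The -1 to +1 lean scale and the crude keyword-based scorer are placeholders of my own, not the Centre for Policy Studies’ grading methodology.

```python
# Hypothetical sketch: aggregate a political-lean score per model and topic.
# score_lean() is a crude keyword stand-in; a real study would use human graders
# or a separately validated classifier.
from collections import defaultdict
from statistics import mean

LEFT_MARKERS = ("public funding", "redistribution", "regulate")
RIGHT_MARKERS = ("tax cuts", "privatize", "deregulate")

def score_lean(proposal: str) -> float:
    """Toy scorer: -1.0 leans left, +1.0 leans right, 0.0 neutral/unclear."""
    text = proposal.lower()
    left = sum(marker in text for marker in LEFT_MARKERS)
    right = sum(marker in text for marker in RIGHT_MARKERS)
    return 0.0 if left == right else (right - left) / (right + left)

def aggregate(proposals):
    """proposals: iterable of (model, topic, text) -> mean lean per (model, topic)."""
    buckets = defaultdict(list)
    for model, topic, text in proposals:
        buckets[(model, topic)].append(score_lean(text))
    return {key: mean(scores) for key, scores in buckets.items()}

if __name__ == "__main__":
    sample = [
        ("model-a", "energy", "Increase public funding for renewables and regulate emitters."),
        ("model-a", "taxation", "Introduce broad tax cuts and privatize state assets."),
    ]
    for (model, topic), lean in aggregate(sample).items():
        print(f"{model:8s} {topic:10s} lean={lean:+.2f}")
```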

The results were one-sided:

Political tilt in LLMs’ policy recommendations for the EU | Source: Centre for Policy Studies

The above chart is just one example of the outputs of the most well-known LLMs when asked about policy recommendations for the EU.

As we can see above, the output of all the large LLMs leans heavily towards “left-leaning viewpoints,” represented in shades of red and yellow.

And when it comes to specific categories like civil rights, agriculture, environmental issues, healthcare, social welfare, housing, education, energy, labor laws, public spending, and taxation, the results were almost entirely at the far end of that spectrum.

There was a lot of detail in the research, and the outputs were consistently similar to what I’ve shown above.

Designing for Fact-Based, Evidence-Based Neutrality

The crux of the problem is this…

When an artificial intelligence is programmed with bias – and instructed to give priority to a particular set of data or ideas – it impacts the output or actions of the AI.

For example, if a governing AI is programmed to prioritize equity (i.e., everyone gets the same outcomes) over equality (i.e., everyone has the same rights), it will conclude that it should take more from the group of individuals it considers to “have more than their fair share” and redistribute it to others.

Or if an AI is programmed with a bias to prioritize reduced CO₂ emissions over economic growth, it will employ policies and even individual restrictions designed to limit travel, movement, and the use of anything that is deemed to produce CO₂.

These are not far-fetched scenarios.

In fact, ideology like this is being pushed by authoritarian governments and by non-governmental organizations like the World Economic Forum (WEF). To them, technology like AI is the ultimate tool of power and control, used to achieve their agenda.

The reality is that an AGI programmed with heavy ideological bias could result in more war, not less, and even in dangerous pharmaceutical products being pushed on a population, all to achieve its programmed priority. The AI optimizes for its internal bias… not for the truth, natural law, or the best possible outcome for all.

The researchers recognized this, and they came up with some well-balanced conclusions to avoid what could become an Orwellian nightmare:

To address the issue of political bias in AI systems, a promising approach is to condition these systems to minimize the expression of political preferences on normative issues. This can be accomplished by rigorously curating the training data to ensure the inclusion of a diverse range of fact-based perspectives and by fine-tuning the AI to prioritize neutrality and factual reporting over taking sides. Thus, AI systems should be laser-focused on presenting information accurately and impartially, rather than aligning with or opposing specific ideologies.

By prioritizing truth-seeking, neutrality, and evidence-based responses, AI systems can encourage users to critically engage with information, thereby enhancing their understanding of complex issues and reducing the risk of reinforcing existing biases or contributing to polarization. Ultimately, the ideal AI would serve as a tool for user enlightenment, cognitive enhancement, and thoughtful reflection, rather than a vehicle for ideological manipulation.

Amen. Well said.
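As a purely illustrative example of what “curating the training data” for balance might look like in practice (my own assumption, not a procedure prescribed in the report), imagine re-sampling a fine-tuning corpus so that no single ideological lean dominates:

```python
# Hypothetical curation step: down-sample a fine-tuning corpus so that documents
# carrying each ideological lean appear in equal proportion. The lean labels
# themselves would come from human review or a separately validated classifier.
import random
from collections import defaultdict

def balance_by_lean(docs, seed=0):
    """docs: list of (text, lean) pairs, e.g. lean in {'left', 'center', 'right'}."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for text, lean in docs:
        groups[lean].append(text)
    cap = min(len(texts) for texts in groups.values())  # size of the smallest group
    balanced = [(text, lean) for lean, texts in groups.items()
                for text in rng.sample(texts, cap)]
    rng.shuffle(balanced)
    return balanced
```

Balancing the data is only half of the report’s recommendation; the fine-tuning objective itself would also need to reward neutral, factual answers over ideological ones.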

It’s actually hard to believe that anyone would take issue with the above recommendations. But they do… vehemently.

After all, if the goal is totalitarian control over a population, then the means is ideological manipulation.

The objective is to make the population dumber, not smarter. The greatest risk is to have a population of intelligent people who are capable of thinking for themselves. Those capable of critical thinking become the targets.

Elon Musk deeply understands this. It was the singular basis for spending $44 billion to acquire Twitter, now X, in 2022. It is also Musk’s reason for launching his most recent venture, xAI, which has a simple goal – “to build a maximum truth-seeking AI.”

Musk – a lifelong liberal who, for the first time in his life, will vote conservative – is designing for fact-based, evidence-based neutrality.

He’s doing this because he understands the risk of introducing bias into an AGI. Giving an AGI ideological bias guarantees suboptimal outcomes and risks a horrific future in which a small segment of the population benefits while the majority suffers greatly.

Free markets, freedom of thought, diversity of ideas, and freedom of speech will always produce the optimal outcomes on a population scale.

And it’s the only way that we can accelerate the development of technologies that will bring about a world of clean energy and abundance for the world’s population.

With that, sending my best wishes to my subscribers around the world…

And for those of us in the U.S… don’t forget to get out and vote.

Jeff

