Dear Reader,
In April of 1965, a thirty-six-year-old scientist made a prediction about the future of semiconductors. His paper “Cramming More Components onto Integrated Circuits” was written in the very earliest days of the semiconductor industry.
He predicted the evolution of the industry, encapsulated in the rudimentary logarithmic graph shown below.
Projected Growth of Components on an Integrated Circuit (1965)
Source: Intel
What stands out so sharply in the image above is that the projection was based on just a handful of real-world data points. Said another way, the future was predicted from a very limited amount of information.
And yet, this simple chart became legend. Moore’s Law—as it was later known—became a guiding principle for the semiconductor industry.
In his original prediction, Gordon Moore suggested that the number of components on an integrated circuit would double every year. By 1975, after collecting another decade of data points, he revised his prediction: the doubling would take place every two years.
And that’s precisely what happened.
Visualizing “Moore’s Law”
Source: Our World in Data
Almost sixty years have passed since Moore’s original prediction, and it’s remarkable how accurate this industry “law” has proven to be.
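To make the arithmetic concrete, here's a quick back-of-the-envelope sketch of what a two-year doubling implies. The starting point (the Intel 4004's roughly 2,300 transistors in 1971) and the resulting figures are my own illustration, not numbers taken from the chart above.

```python
# A back-of-the-envelope sketch of Moore's Law at a two-year doubling pace.
# Starting point: the Intel 4004 (1971), with roughly 2,300 transistors.
def transistors(year, base_year=1971, base_count=2_300, doubling_period=2):
    """Project transistor count assuming one doubling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

# 2021 works out to roughly 77 billion -- the same order of magnitude
# as the largest chips shipping today.
```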
And while Moore's Law had some technological underpinnings, it was never a physical law. It was logical and well thought out, but there were no physical properties dictating the decades of consistently paced advancement shown above.
Moore's Law became a competitive battle cry for the industry. A gauntlet thrown down, daring the industry to keep up a torrid pace that would ultimately lead to unbelievable breakthroughs in computing and consumer electronics.
And in the last year, the incessant nature of Moore’s Law has led to an outrageous inflection point in artificial intelligence technology. For without these radical improvements in semiconductor technology, there simply wouldn’t be the computational horsepower to run the mind-blowing software behind the most recent developments in generative artificial intelligence.
The world lost this legend on Friday at the age of 94. What a remarkable career he had, and what remarkable contributions he made to the world through his inventions and collaboration in the semiconductor industry.
It seems like a fitting moment to reflect on what he set in motion.
Moore’s Law will forever live on and will be enshrined in the fabric of not only the semiconductor industry, but also in history itself. But we’ve hit an inflection point that even Moore didn’t originally envision.
What’s happening right now isn’t about raw computational power, or the next doubling of transistors on an integrated circuit. What’s happening right now is that computers are starting to “think” and to “reason” and to “predict.” Computers are beginning to have abilities that are far more similar to the human brain than to a complex integrated circuit.
And that means that technological advancement and “doublings” will now happen in a matter of months, not years. It's a pace of development that will make even the most nimble, knowledgeable, and open-minded among us feel uncomfortable.
It’s a pace that will ultimately require a new framework for how we view and measure technological advancement.
Moore set the competitive pace for the industry for the last six decades. And now, it turns out, the industry is ready to move even faster.
Rest in peace, Gordon Moore.
Some cool research was recently published out of MIT. A team there developed a unique system that can give anyone the equivalent of X-ray vision. That’s right – science fiction just became reality.
This new system sits at the intersection of three technologies: augmented reality (AR), a form of artificial intelligence (AI) called computer vision, and radio-frequency identification (RFID) tech.
The team at MIT combined these three technologies and incorporated them into an AR-based headset.
When worn, the headset enables the user to find any item embedded with an RFID tag, even when the item isn't visible. The item could be buried in a box. It could be hidden under a rafter or stuck under a rug… it doesn't matter. The headset will be able to locate it.
This is what real-world X-ray vision looks like.
The only “catch” is that the item in question must contain an embedded RFID tag. These are extremely small semiconductors that can receive and/or transmit over radio frequencies. They are so small they fit on the tip of a finger.
There is a lot of utility in this latest research. RFID tags have become very inexpensive, costing as little as a few pennies, which makes them affordable for tagging and tracking most items. The technology is already widely used in warehouses and logistics systems, where most, if not all, items are now tagged with RFID chips.
But just having the tags doesn’t mean that something is easy to find. RFID tags can be used to quickly take an inventory of products in a warehouse, but if items have fallen behind shelves, or been placed into the wrong container, they can be very difficult to locate.
That's where the X-ray vision comes in handy. By combining what the computer vision system sees with the location data collected from an RFID tag, the augmented reality headset can display the location of an object to within about 10 centimeters.
Something that could take hours to find could be discoverable in a matter of minutes, or even seconds.
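To make that fusion step concrete, here's a minimal, hypothetical sketch — with made-up numbers and a simple pinhole-camera model, not MIT's actual code — of how an RFID-derived position estimate could be projected into the headset's view so an AR marker can be drawn over the hidden item.

```python
import numpy as np

# Hypothetical sketch of the fusion step: take an RFID-derived 3D position
# estimate and project it into the headset camera's image so an AR marker
# can be drawn over the hidden item. All numbers are made-up placeholders.

def project_to_pixels(point_world, cam_rotation, cam_translation, intrinsics):
    """Project a 3D point (meters, world frame) into 2D pixel coordinates."""
    point_cam = cam_rotation @ point_world + cam_translation  # world -> camera frame
    x, y, z = intrinsics @ point_cam                           # camera -> image plane
    return np.array([x / z, y / z]), z                         # pixel coords, depth

tag_position = np.array([0.8, -0.2, 2.5])   # RFID localization estimate (x, y, z in meters)
R = np.eye(3)                               # headset orientation (identity for simplicity)
t = np.zeros(3)                             # headset position
K = np.array([[600.0, 0.0, 320.0],          # simple pinhole-camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])

pixel, depth = project_to_pixels(tag_position, R, t, K)
print(f"Draw AR marker at pixel {pixel.round(1)}, item is ~{depth:.1f} m away")
```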
To me, this is a great example of the intersection of AR, computer vision (artificial intelligence), and RFID tech (semiconductor technology). I expect this kind of technology to be commercialized very quickly.
To do so, a company will likely have to partner with MIT to license the technology. I can imagine a company like Zebra Technologies, which has been a leader in RFID systems for logistics and warehousing for years, being a great fit for something like this.
Or we could see a new company spin up to commercialize this technology. Or we might see an existing semiconductor company, like Impinj or NXP Semiconductors, jump at the chance to license this tech. Both are major players in the RFID semiconductor industry.
And who knows? Maybe one day soon, this technology will help us find that lost television remote buried deep in the couch or that lost set of keys…
The incredible adoption of OpenAI’s ChatGPT continues. This time it’s coming from a powerhouse of a company that Near Future Report subscribers know very well…
ChatGPT is a generative AI that can produce content and write software code on demand. It can even hold intelligent conversations with humans.
And ChatGPT has already been a massive success.
In a matter of months, it has attracted more than 100 million users, and it's still growing exponentially. It's already been embedded in hundreds of other software applications.
The latest major player to get on the ChatGPT bandwagon is none other than customer relationship management (CRM) software giant Salesforce.
At Brownstone Research, we were bullish on Salesforce back when it launched its own AI for CRM applications. Salesforce called its new AI-powered software “Einstein” at the time.
Well, Salesforce just came out with an upgraded version of its AI, called EinsteinGPT… and that's because the company has integrated ChatGPT into the software.
This is a big move.
Salesforce's EinsteinGPT will be capable of training on the entire body of knowledge for every single customer a company has within its Salesforce database. This will allow the AI to “know” everything it could possibly know about each business's customers. And it will never forget anything. Effectively, it has perfect recall.
As a result, EinsteinGPT will be able to craft personalized responses and emails to customer questions. At some point, it may even be able to anticipate specific questions or issues before they arise. This is taking CRM services to the next level.
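As a rough illustration of the idea — this is a hypothetical sketch, not Salesforce's actual implementation or API — here's how a customer's CRM record might be stitched into a prompt so the AI's reply is grounded in that customer's own data.

```python
# Hypothetical sketch -- not Salesforce's actual implementation or API.
# The idea: stitch a customer's CRM record into the prompt so the generative
# AI's reply is grounded in that customer's own data rather than generic knowledge.

customer_record = {
    "name": "Acme Corp",                                  # placeholder customer data
    "open_cases": ["Shipment #4521 delayed in transit"],
    "last_purchase": "200 industrial sensors (Jan 2023)",
}

question = "When will our delayed shipment arrive?"

prompt = (
    "You are a customer support assistant.\n"
    f"Customer: {customer_record['name']}\n"
    f"Open cases: {'; '.join(customer_record['open_cases'])}\n"
    f"Recent purchase: {customer_record['last_purchase']}\n"
    f"Customer question: {question}\n"
    "Draft a personalized, accurate reply."
)

# In a real system, `prompt` would be sent to a large language model and the
# draft reply surfaced to a support agent for review before sending.
print(prompt)
```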
What's more, the AI will be able to generate software code for Salesforce developers. This will improve productivity and enable non-programmers to perform tasks that previously required software developer support.
And get this – Salesforce also integrated ChatGPT into Slack, which Salesforce acquired back in 2021. This will allow employees to interface with the AI directly. The software will also be able to learn from Slack communications and generate draft replies to boost team productivity on Slack.
The integration of this generative AI software is significant for the industry, as Salesforce is by far the most widely deployed CRM software in the world, with almost a 25% market share.
Upgrading its software with AI is like empowering every single one of its global customers with bleeding-edge artificial intelligence. This is why AI is going to sneak up on all of us so quickly.
It doesn't have to be a new purchase or installation. Software that we already use today will simply be upgraded with this incredible technology, and it will become available through the platforms we all work in throughout the day.
An interesting venture capital financing round just caught my eye. There’s a lot of depth to this one…
A company now known as Amelia just raised an impressive $175 million in venture capital (VC) for its artificial intelligence (AI) technology. These funds will help the company roll out a conversational AI bot to its customers.
In other words, Amelia has pivoted its business to generative AI. This conversational bot will look and feel something like OpenAI's ChatGPT.
What makes this so interesting is that Amelia was formerly known as IPsoft. It was a tech company founded way back in 1998. Its purpose was to help automate and improve customer service functions for enterprise clients.
The thing is, IPsoft wasn’t terribly interesting. Nor was it overly successful.
The company did enough business to survive for the last 25 years… but that's about it. It was never going to be interesting enough, or high-growth enough, to find its way into The Near Future Report or Exponential Tech Investor.
But that's all changed in a big way. Here we are, 25 years later, and IPsoft has rebranded as Amelia… and it's going big on generative AI.
And what I love about this is that Amelia already has a sizeable customer base to which it can roll out its conversational AI. Toyota, Alcatel-Lucent, and numerous other prominent companies are among its customers.
And get this – the AI is trained in 108 languages. It’s not limited to English conversations.
So this could be absolutely transformative for a business that wasn’t otherwise doing anything to get excited about.
But with this new funding and this new technology, I'm excited to see the pivot play out. It could become a great example of how to use this incredible technology to bring older-generation companies back to life and make them relevant again.
Regards,
Jeff Brown
Editor, The Bleeding Edge
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.