Dear Reader,
I’ve been anxiously awaiting numbers from the U.S. Department of Commerce on second-quarter e-commerce sales.
After all, the U.S. went into economic lockdown around mid-March, and the second quarter was pretty much a write-off… the largest economic contraction in a single quarter in history. Second-quarter gross domestic product dropped at an annualized rate of nearly 33%.
The new data shows that U.S. e-commerce as a percent of retail sales shot up 36% quarter on quarter and an incredible 49% year on year.
While the spike certainly seems logical, the actual percentage is shocking: 16.1%.
That’s right. Even with the economic lockdowns and stay-at-home orders, only 16.1% of total sales were through e-commerce. Said another way, 83.9% of all retail sales were still done in brick-and-mortar stores.
My point: The pandemic certainly forced a lot of people online for shopping, but the U.S. – like every other country around the world – still has a long way to go before the majority of retail sales are conducted online.
At 16.1%, these are still early days for e-commerce. And that means that this is just “Day 1” for Amazon. Jeff Bezos has a shot at turning Amazon into a $10 trillion company and becoming the world’s first trillionaire.
Now let’s turn to our insights…
We’ll start today with interesting research from a partnership between Facebook’s artificial intelligence (AI) research group and New York University (NYU). The two just put out research demonstrating how AI can cut the time it takes to perform magnetic resonance imaging (MRI) scans in half.
This is something I can relate to directly. I spent nearly two hours going through a full-body MRI scan a few weeks ago. The machines are not at all comfortable. It’s like being jammed into a tube that feels like a coffin.
So I know firsthand that patients will be glad for every minute technology can shave off their time in an MRI machine – as long as the results are just as accurate. That’s the most important thing.
Let’s have a look at these two images. Can you tell a difference between them?
MRI Scans: AI-Enhanced vs. Traditional
Source: Facebook
They look the same, right? Even radiologists could not tell the difference between the two.
The image on the right was produced by a traditional MRI scan. The image on the left was created in half the time with the help of AI.
Here’s how it works…
To cut down on imaging time, MRI technicians collect just enough imaging data with the machine to be medically accurate. Then the AI “finishes” the image, filling in the gaps.
The result is an AI-enhanced image that’s basically indistinguishable from the traditional MRI. Five out of six radiologists in the study were unable to correctly tell which images were completed by the AI.
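To make the idea a bit more concrete, here is a minimal Python sketch of the general technique – sample only part of the frequency-domain (“k-space”) data, then reconstruct an image from it. This is my own simplified illustration, not Facebook’s actual fastMRI code; in the real system, a trained neural network takes over where the naive reconstruction below leaves off and fills in the missing detail.

    # Minimal sketch of accelerated MRI: keep only part of k-space, then reconstruct.
    # Illustrative only - not Facebook's fastMRI implementation.
    import numpy as np

    def undersample_kspace(image: np.ndarray, keep_every: int = 2) -> np.ndarray:
        """Simulate a faster scan by keeping only every Nth line of k-space."""
        kspace = np.fft.fftshift(np.fft.fft2(image))
        mask = np.zeros_like(kspace)
        mask[::keep_every, :] = 1  # keep_every=2 means half the usual data
        return kspace * mask

    def zero_filled_recon(kspace: np.ndarray) -> np.ndarray:
        """Naive reconstruction: inverse FFT of the undersampled data (blurry/aliased)."""
        return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

    image = np.random.rand(256, 256)               # stand-in for a real MR image
    half_scan = undersample_kspace(image, keep_every=2)
    rough = zero_filled_recon(half_scan)           # a trained AI model would refine this
    print("Kept", int((np.abs(half_scan) > 0).sum()), "of", image.size, "k-space samples")

The key point is that the scanner only needs to acquire the undersampled data – the time-consuming part – while the learned model supplies the rest.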
As I mentioned, this is great news for patients since they’ll spend less time stuck in MRI machines.
And it’s also great for hospitals. They will be able to do twice as many MRIs in a day, which will allow them to serve even more patients. And it will ultimately boost revenue and bring down the cost per MRI.
And Facebook thinks this is just the first step. Its research team believes that it can improve its technology to the point that MRIs can be done three to four times faster than normal. That’s huge.
If we think about stroke victims, doctors need to determine whether there is a clot in the brain as soon as possible. Every minute counts. If we can reduce a 15-minute brain scan down to just a few minutes, that could be the difference between a patient fully recovering and losing control over one side of their body.
So this tech isn’t just about convenience. It is also critical for time-sensitive procedures.
Now we have to ask – why is a social media company doing this kind of research?
Well, for one, this is a way to buy goodwill. We know Facebook has been under regulatory scrutiny in recent years. Here is a way for the company to show that it is using its AI for good – not just for collecting behavioral data on consumers without our knowledge to generate advertising revenue.
And the bigger reason is that tech companies see massive potential in the health care space. That’s why we’ve seen companies like Facebook, Google, Apple, and Microsoft invest heavily in companies and products in the health-tech space.
These companies recognize that the health care industry has been slow to innovate and evolve. Because of this, there is a treasure trove of untapped data that could be used to develop new products or simply to market existing services to people based on their medical needs.
A private, early-stage company called Lightmatter just caught my eye.
Lightmatter is working on semiconductors that can make both AI and machine learning faster. These semiconductors are called AI accelerators. This is a hot segment of the industry right now.
And what makes Lightmatter so exciting is that its semiconductor will use light instead of electrical signals. It’s called an optical accelerator.
We talked about the industry’s shift toward this kind of semiconductor earlier this month.
Because these semiconductors process information using photonics – light – they have almost no latency of any kind. And that means these chips will ultimately be 10 times, and then 100 times, faster than traditional silicon-based semiconductors as the tech improves.
What’s more, optical semiconductors produce far less heat and consume less power. And they are less affected by temperature and electromagnetic fields, which also translates into higher performance.
Seen below is one of Lightmatter’s prototype chips with an optical fiber running into it.
Lightmatter Prototype Chip
Source: Lightmatter
So I am very excited about Lightmatter’s optical AI accelerator, which is set to be released next year.
Worth noting is who is behind Lightmatter. While we see two typical venture capital firms putting money to work, we also see Google’s venture capital arm, GV, backing the company’s last two financing rounds.
Clearly, Google sees something promising in this company. And it is not an accident. Lightmatter has smartly focused on developing a product that works with Google’s wildly successful machine learning framework – TensorFlow.
It was a smart move to focus on such a widely used part of the machine learning industry – something that will likely lead to fast adoption once Lightmatter has a commercial product ready for sale within the next few quarters.
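Lightmatter hasn’t published its software interface, so here is only a rough, generic sketch of the kind of workload an AI accelerator targets, written in ordinary TensorFlow. The large matrix multiplications that dominate machine learning models are exactly the operations dedicated chips – optical or otherwise – are built to speed up, which is why compatibility with TensorFlow matters.

    # Generic TensorFlow workload - the dense matrix math an AI accelerator speeds up.
    # Standard TensorFlow only; not Lightmatter's (unpublished) interface.
    import tensorflow as tf

    x = tf.random.normal([64, 4096])      # a batch of 64 input vectors
    w = tf.random.normal([4096, 4096])    # a large weight matrix

    # On stock hardware TensorFlow runs this on a CPU or GPU. An accelerator vendor
    # typically exposes its chip as another device so the same code runs unchanged.
    y = tf.nn.relu(tf.matmul(x, w))
    print(y.shape)                        # (64, 4096)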
Lightmatter is currently private, but we’ll keep an eye out to see if any other semiconductor companies make a run at an acquisition.
I’m an optimist by nature. In The Bleeding Edge, we typically highlight positive developments from the world of high technology.
But for our last insight, I have to issue a word of warning…
Our smartphone data is being acquired by several divisions within the U.S. government. And it’s all happening without due process.
Thanks to some solid investigative journalism from Protocol and Motherboard, we now know a company called Babel Street is selling us out, helping the U.S. government effectively bypass the Fourth Amendment. That’s the amendment that protects our rights against unreasonable search and seizure.
Babel Street’s business is to aggregate and analyze data. It goes out and compiles all available data on individuals from social media networks. And it matches that up with geolocation data that it purchases from companies making “free” apps for smartphones.
In this way, Babel Street has created a comprehensive tool that can monitor U.S. citizens. And Babel Street boasts that its clients can precisely filter our data for dates, times, language used, and “sentiment” on various topics. Talk about frightening.
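We don’t know the details of Babel Street’s internal systems, but conceptually, tying purchased location pings to scraped social media profiles is little more than a data join. Here is a purely hypothetical sketch in Python – every field name and value below is invented for illustration:

    # Hypothetical illustration of joining purchased app location data to scraped
    # social profiles. Invented data; not Babel Street's actual system.
    import pandas as pd

    pings = pd.DataFrame({
        "ad_id": ["abc-123", "abc-123", "xyz-789"],   # advertising identifiers
        "timestamp": ["2020-08-01 09:10", "2020-08-01 18:45", "2020-08-02 12:00"],
        "lat": [38.8895, 38.9007, 40.7128],
        "lon": [-77.0353, -77.0365, -74.0060],
    })

    profiles = pd.DataFrame({
        "ad_id": ["abc-123", "xyz-789"],
        "handle": ["@commuter_one", "@foodie_two"],
        "sentiment_on_topic": ["negative", "positive"],
    })

    # One join, and every location ping is now tied to a profile and its "sentiment."
    dossier = pings.merge(profiles, on="ad_id")
    print(dossier)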
Recent reports show that several U.S. government divisions have signed deals with Babel Street worth millions of dollars to get access to our data. The Secret Service and U.S. Customs and Border Protection (CBP) are among the divisions named.
Typically, these agencies would need a warrant or court order to get such sensitive data on citizens. Babel Street helps them get around those limitations.
So the big takeaway here is that we need to be mindful of what apps we download on our phones. If the app is free, the first thing we should ask is “What’s the business model?”
Some apps, like Fortnite and other games, can be downloaded for free, but then users can make in-game purchases to advance more quickly. That’s a valid business model that doesn’t require the company to harvest and sell our data to make money.
But many “free” apps don’t have a clear mechanism for generating revenue. If that’s the case, we can bet the app is mining our data and selling it to companies like Babel Street.
My strong recommendation is to look for premium alternatives to these apps. It is worth the time to search for apps from reputable software developers and worth paying a premium for quality and security.
For example, most free weather apps are notorious for collecting and selling our data. That’s their business model.
But there are weather apps that charge a small subscription fee in exchange for no advertisements and very limited and transparent data collection. Earlier this year, CNET rated the premium version of Weather Underground as the “best weather app for privacy.”
If we find premium alternatives like this, we can limit underhanded data collection and help protect our privacy.
Regards,
Jeff Brown
Editor, The Bleeding Edge
Like what you’re reading? Send your thoughts to feedback@brownstoneresearch.com.
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.