Dear Reader,
Before we get to our insights, let’s check in on Georgia. The state was one of the very first to open up its economy after the nationwide lockdowns. It began that process on April 24, more than six weeks ago.
On April 29, just after Georgia opened up, I remember reading a shocking article in The Atlantic titled “Georgia’s Experiment in Human Sacrifice.” It was a frightening piece. One passage in particular stuck out to me.
Because of how infections tend to progress, it may be two or three weeks before hospitals see a new wave of people whose lungs look like they’re studded with ground glass in X-rays. By then, there’s no telling how many more people could be carrying the disease into nail salons or tattoo parlors, going about their daily lives because they were told they could do so safely.
The article pointedly argued that it was reckless for Georgia to reopen its economy. In the author's view, the state was knowingly committing "human sacrifice." Let's look at what has happened since then.
This data is taken directly from the Georgia Department of Public Health. April 24 was more than six weeks ago. The red line is the daily number of deaths, and the orange line is the seven-day moving average. The shaded area on the far right marks a 14-day window where the figures aren't yet final due to reporting lag.
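For readers curious how a seven-day moving average like the orange line is computed, here is a minimal sketch. The daily counts below are hypothetical, purely for illustration, not actual Georgia data:

```python
# Hypothetical daily death counts (illustrative only, not real data)
daily_deaths = [30, 25, 28, 22, 20, 18, 15, 12, 10, 9]

def moving_average(values, window=7):
    """Average each day with the preceding days, up to `window` days total.

    Early days use however many days are available, so the output
    has the same length as the input.
    """
    result = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start : i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

smoothed = moving_average(daily_deaths)
```

Averaging over a full week smooths out day-of-week reporting artifacts (deaths are often logged in batches after weekends), which is why health agencies chart the moving average alongside the raw counts.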
But the data is clear. Despite the reopening, the numbers are dropping almost as quickly as they rose back in March. This is a great sign for the rest of the world.
It looks like my friends in Georgia were the smart ones, and the journalists at The Atlantic weren’t so smart after all…
Now let’s turn to our insights…
Chances are many readers use Google Chrome as their primary web browser. Chrome offers a feature called “incognito” mode that claims to allow users to browse the web privately. It’s an attractive feature on the surface.
There’s just one problem. Incognito browsing isn’t private.
New revelations show that Google still tracks everything and collects behavioral surveillance on consumers, even when they think they are using a privacy feature. Now users want to hold Google to account.
A $5 billion class action lawsuit has just been brought against Google. And this case is far from frivolous. Here’s why…
The lawsuit alleges that Google violated the Federal Wiretap Act. That act gives users the right to sue if their private communications are unlawfully intercepted.
So this case is saying that Google is illegally intercepting private communications between users and websites in direct violation of federal law. That’s what makes this case so interesting.
By now, we know Google’s dirty secret. The company’s entire business model is based on behavioral surveillance. And aside from a few public scoldings on Capitol Hill, Google has never really had to answer for these practices.
But this lawsuit could change that. This is the first data privacy lawsuit that stands a fighting chance in court.
And it would be groundbreaking if Google were found liable. A successful suit could hamper data surveillance practices almost overnight.
That would change the entire scope of what internet companies can and cannot do. And it would be very bad news for companies like Google and Facebook, whose entire business revolves around tracking everything consumers do and say online.
Around 83% of Google’s revenue in 2019 came from serving ads based on behavioral data.
Sadly, I suspect that Google’s business practices will likely survive the lawsuit. I wouldn’t want to be up against Google’s legion of lawyers.
And to put things in context, Google is sitting on $117 billion in cash. Even if Google lost the entire case and paid out the full $5 billion, the market wouldn’t even blink.
Google isn’t the only data collector under fire right now. There have been some very interesting discussions around Section 230 that could be disastrous for Facebook and other social media platforms.
I’ll explain with a little backstory…
It's no exaggeration to say that Section 230 is what allowed the modern internet to become what it is today. Section 230 grew out of two lawsuits filed more than 20 years ago.
The first lawsuit was against Prodigy Services, a popular internet service provider (ISP). Prodigy provided email and content, much like America Online (AOL). In the case, the court determined that Prodigy could be held liable for what was published on its service precisely because the company tried to set standards and moderate content.
In other words, because Prodigy tried to police its platform, the company could be sued for the content it published.
The second lawsuit was against CompuServe, another internet company that predated Prodigy and AOL. In this case, the court determined that CompuServe could not be held liable for content on its platform because CompuServe allowed third parties to put up whatever they wanted without restriction.
Because CompuServe did not try to set standards or moderate content, it did not operate as a publisher. It was simply a distributor and therefore could not be sued for content.
Section 230 was established on this basis. As long as internet platforms, social media, or discussion boards didn't moderate their content, they couldn't be held liable for what their users shared. This allowed social media companies like Facebook to spread like wildfire.
Yet that’s not what has been happening. Social media platforms have been moderating and censoring content on their platforms, and they have been trying to influence outcomes. This is precisely where they have crossed the line.
President Trump recently signed an executive order that directs the federal government to review federal laws that protect social media companies from liability. Section 230 is back up for discussion.
And social media companies – Facebook in particular – have to be sweating. The company could potentially be held liable for what it moderates, amplifies, and censors.
This is a very tricky issue to solve. Who gets to decide what is and isn’t allowable on a social media platform? There are a lot of obvious things that most can agree on. But there are also a lot of gray areas.
The largest conflict of interest that I see is that these platforms are financially incentivized to amplify polarizing content because it draws more eyeballs. More eyeballs mean more advertising revenue.
While not perfect, decentralized platforms show promise for solving at least part of this conflict of interest. Artificial intelligence could also help by flagging inappropriate content.
Assuming we can design an unbiased AI, automation is the only realistic way for a platform to manage the sheer volume of posts.
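To make the idea concrete, here is a toy sketch of automated moderation. Everything here is hypothetical: a real system would use a trained model to score posts, while this stand-in simply flags a placeholder banned word. The threshold value is also invented for illustration:

```python
# Toy illustration of automated content moderation.
# The scoring function and threshold are hypothetical stand-ins,
# not how any real platform works.

REMOVE_THRESHOLD = 0.8  # posts scoring above this get flagged

def toxicity_score(post: str) -> float:
    """Stand-in for a trained model's prediction.

    Returns a high score if a hypothetical banned word appears.
    """
    return 0.9 if "badword" in post.lower() else 0.1

def moderate(posts):
    """Split posts into (kept, flagged) lists using the score threshold."""
    kept, flagged = [], []
    for post in posts:
        if toxicity_score(post) > REMOVE_THRESHOLD:
            flagged.append(post)
        else:
            kept.append(post)
    return kept, flagged
```

The hard part, of course, is not the threshold logic but the scoring model itself: who trains it, on what data, and with what biases. That is exactly the gray area the debate above is about.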
I suspect that we’ll end up with a bit of both. Some platforms – those used by children and teens – will be “sterilized.” On others like Reddit, freedom of speech will reign no matter how bad it gets.
We talked two weeks ago about the hydroxychloroquine research published in prominent medical journal The Lancet. As a reminder, that data suggested that the drug was not an effective treatment for COVID-19. And it suggested that the drug was even linked to higher death rates in patients.
In our coverage here in The Bleeding Edge, we noted that the research was based only on observational data. It didn’t use the proper standards and controls that clinical trials employ to make sure the data is accurate and meaningful.
To me, the report wasn’t trustworthy and didn’t convey conclusive data.
For this reason, I suggested we wait until data comes back from Novartis’ clinical trial of hydroxychloroquine before passing judgment on the drug.
Of course, that didn’t stop much of the mainstream press from running headlines like this one from The Washington Post…
Source: The Washington Post
Well, it turns out The Washington Post jumped the gun…
In an incredible twist, The Lancet and the New England Journal of Medicine have each retracted this research because of serious flaws in the data.
The data for the research originally came from a small company just outside Chicago called Surgisphere. And further analysis revealed glaring discrepancies and potential errors in that data.
It appears that Surgisphere may have decided that hydroxychloroquine was dangerous first and then manipulated the data to reach that conclusion.
This reminds me of “Climategate.”
In that controversy, some researchers from the University of East Anglia were accused of concealing research in order to boost the global warming theory.
Even though key data sets were withheld, the field treated those results as definitive, and they became the foundation for further research on global warming.
While not on the same scale, it’s a similar story here.
The World Health Organization (WHO) suspended its trials of hydroxychloroquine after this research came out. After all, The Lancet carried it, so the industry assumed it was credible.
Fortunately, hundreds of scientists signed an open letter to The Lancet and the head researcher at Surgisphere criticizing these shoddy research standards.
This is why we need proper double-blind, placebo-controlled clinical trials before we can determine whether a therapy is safe and effective. This poor research slowed down investigations into a drug that – if proven effective – could be a useful tool against COVID-19.
Let's wait for Novartis to release data from its official clinical trial. And we'll also keep an eye on the work being done by the National Institutes of Health. We'll look forward to that soon.
Either way, at least we’ll finally have accurate data on a potential therapy to fight off COVID-19.
Regards,
Jeff Brown
Editor, The Bleeding Edge
Like what you’re reading? Send your thoughts to feedback@bonnerandpartners.com.
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.