One of the more hopeful developments at tech platforms this year has been their investment in removing misinformation related to the COVID-19 pandemic. Facebook, YouTube, and Twitter were all relatively quick to acknowledge the threat that COVID hoaxes represent, and have worked to purge them from their networks. Enforcement of those misinformation policies has sometimes lagged behind the companies’ public statements, though. A piece of anti-vaccination agitprop catchily titled “Plandemic” racked up millions of views before it was spotted and taken down by the platforms in May. More worryingly, a new piece of propaganda pushing a phony COVID cure was seen by 20 million people on Facebook alone before the company got it under control.
On Monday, the misinformation researcher Ben Decker warned that a true “Plandemic” sequel was coming. Makers of the original video promised that a second installment would premiere Tuesday, and promoted it at least 887 times on Facebook, from pages with hundreds of thousands of followers. The fact that the new video was likely to be taken down became part of the marketing campaign around it. All that remained was to see what happened when it actually went live.
Somewhat surprisingly, LinkedIn acted first, removing a key account promoting the video from its platform for violating the terms of service before it could even premiere. And when it did — under the hilariously bad name Plandemic: Indoctornation — the New York Times’ Davey Alba helpfully chronicled the various pieces of misinformation it contains. In short: bog-standard anti-vaxx conspiracy theories, rooted in the notion that a handful of shadowy elites brought the world to a standstill in order to profit from it. If you’ve heard it all before, it’s easy to dismiss. But we also know that this stuff is spreading like wildfire online, particularly within Facebook groups, and that anti-vaxx sentiment could slow the pandemic recovery and limit our prospects of herd immunity.
So what happened after the video went up? Adi Robertson picks up the story at The Verge:
Social media sites are trying to stop the spread of Plandemic: Indoctornation, a follow-up to the Plandemic conspiracy video about the novel coronavirus. As NBC News reporter Brandy Zadrozny noted, Facebook blocks users from reposting a link to the new video, which was uploaded to an external site earlier today. Twitter doesn’t block the video link, but it sends users who click it to a warning screen, saying that the link is “potentially spammy or unsafe.”
Twitter confirmed to The Verge that it’s warning people rather than blocking the link; the company will evaluate any short clips that are directly uploaded on a case-by-case basis and may remove any that it deems dangerous misinformation. Streaming channel London Real, which posted the video, reported that it was suspended by LinkedIn before its premiere. According to CrowdTangle, London Real’s original post linking to the video has about 53,000 interactions on Facebook. A reposted version of the video can be found on YouTube, but it currently has under 200 views.
There’s still some possibility that Indoctornation will find new life on the social platforms. But it appears that, for the most part, the platforms passed the test this time: they identified the video as violating their standards in real time, stopped hosting it, and prevented users from sharing it. In 2020, this is what successful content policy looks like: you can’t prevent every bad thing from ever being uploaded, but you can identify it quickly and take effective action. Last month, a video like this got 20 million views. Today, on YouTube, it got fewer than 200. Work like this is hard, but it’s also possible, and I think it’s important to call it out when it’s done correctly.
Of course, you could say I’m damning the platforms with faint praise. As Kevin Roose notes, Indoctornation was the rare piece of misinformation to be announced 887 times ahead of its arrival. It was misinformation with a premiere date. Contrast that with last month’s content moderation disaster, which was essentially a bunch of nonsense that was uttered during a live stream of a press conference. Looking at the circumstances of both videos, it’s not hard to understand why Facebook, YouTube, and Twitter had an easier time on Tuesday than they did in July.
And Indoctornation is arguably less dangerous than previous COVID hoaxes, since it doesn’t attempt to give medical advice and is mostly an extended riff on the idea that Bill Gates is conspiring against you. That’s one reason why the video is still technically allowed on Facebook — it’s just placed under a fact-checking warning, and will not get wide distribution in the feed.
“Given the previous Plandemic video violated our COVID misinformation policies, we blocked access to that domain from our services,” a spokesman told me. “This latest video contains COVID-19 claims that our fact-checking partners have repeatedly rated false so we have reduced its distribution and added a warning label showing their findings to anyone who sees it.”
YouTube told me that for its part, it had seen few attempts to upload Indoctornation, and is removing full uploads as it sees them for violating its policies around COVID misinformation. If other uploads contain segments of the original video, they’ll be evaluated on a case-by-case basis, the company told me.
I want to be clear that the issues around health misinformation on social platforms are much bigger than the down-ranking of a single piece of content. The information ecosystem is clearly polluted, and people are suffering. Adam Satariano took a look at the issue this week at the New York Times:
Last week, researchers said that at least 800 people worldwide died in the first three months of the year, and thousands more were hospitalized, from unfounded claims online that ingesting highly concentrated alcohol would kill the virus. Their findings, based on studying rumors circulating on the web, were published in the American Journal of Tropical Medicine and Hygiene.
Doctors’ frustrations fill Facebook groups and online forums. The American Medical Association and other groups representing doctors say the false information spreading online is harming the public health response to the disease. The World Health Organization is developing methods to measure the harm of virus-related misinformation online, and over two weeks in July the group hosted an online conference with doctors, public health experts and internet researchers about how to address the problem.
But the platforms’ actions on Tuesday showed us that progress is possible. They know how to do the right thing, and they can. Especially when the bad guys tell them where to look.
Today in news that could affect public perception of the big tech platforms.
Trending down: Facebook abandoned thousands of gallons of drilling fluid under the ocean floor just off the coast of Oregon in April. The company was constructing a landing site for an undersea telecommunications cable when it hit an unexpected snag.
⭐ Some Justice Department staffers are worried the case against Google isn’t ready, despite Attorney General William Barr pushing for a lawsuit this summer. They say the department needs more time to consider whether the millions of pages of documents they have are enough to win an antitrust case in court. Brent Kendall at The Wall Street Journal has the story:
Dozens of government antitrust lawyers are on teams investigating whether the search giant has used its dominance to stifle competition. One group is focused on Google’s search practices, and some of its members have voiced the belief that there are vulnerabilities in a case built around those issues, people familiar with the matter said. Details about the Justice Department’s legal theories couldn’t be learned.
Another team is examining Google’s online advertising business, where the company owns industry-leading tools at every link in the complex chain between online publishers and advertisers. Some attorneys working on that aspect of the probe aren’t ready to move forward because they are still untangling the new and complex issues raised by that part of Google’s business and how it affects the many companies in the digital ecosystem, the people said.
A sprawling report released by a Republican-controlled Senate panel confirmed key facts about Russia’s attempted meddling in the 2016 election. The Russian government tried to sabotage the 2016 election to help Trump win, and some members of Trump’s circle of advisers were open to the help from an American adversary. (Mark Mazzetti and Nicholas Fandos / The New York Times)
The Lincoln Project, a political action committee started by former anti-Trump Republicans, has been stealing memes, and the online left isn’t happy about it. The PAC is operating under a playbook popularized by marketing groups like Jerry Media and accounts like @thefatjewish. (Makena Kelly / The Verge)
Facebook is rejecting a request from The Gambia to help it investigate the genocide in Myanmar, saying the request was “extraordinarily broad” as well as “unduly intrusive or burdensome.” In June, the West African nation filed an application in US federal court seeking information from Facebook that would help it hold Myanmar accountable at the International Court of Justice. (Matthew Smith / Time)
Children’s Health Defense, a group founded by anti-vaccine activist Robert F. Kennedy Jr., is suing Facebook for rejecting ads and labeling debunked claims about vaccines and 5G networks. The complaint is against Facebook, its CEO Mark Zuckerberg, and the fact-checking organizations PolitiFact, Science Feedback, and Poynter Institute. It is extremely dumb. (Adi Robertson / The Verge)
Epic Games is reaching out to other tech companies about forming a coalition of Apple critics, after filing a lawsuit against the tech giant. Both Sonos and Spotify are considering joining the group. (Nick Wingfield and Alex Heath / The Information)
Epic Games said Apple is threatening to remove it from the Apple Developer Program, a move that would break every game that uses its popular Unreal Engine. Apple responded: “The problem Epic has created for itself is one that can easily be remedied if they submit an update of their app that reverts it to comply with the guidelines they agreed to and which apply to all developers.” (Sam Byford / The Verge)
Apple is facing its toughest environment in China in years amid the decline of US-China relations. Apple operates the App Store in China without government licenses and local partners, but it’s having to walk an increasingly difficult line between placating Chinese officials and not triggering a backlash back home. (Wayne Ma / The Information)
Privacy-friendly, voluntary COVID-19 tracking apps have not had a demonstrable impact on the pandemic, researchers and public health officials say. The more aggressive apps introduced in South Korea and Kuwait may have been more effective, but they also introduced privacy concerns. (Craig Timberg, Steve Hendrix, Min Joo Kim and Fiona Weber-Steinhaus / The Washington Post)
Miami police used Clearview AI’s facial recognition technology to arrest a protestor who they say was involved in a standoff on May 30th. The person in question was arrested and charged with battery on a police officer. They have pleaded not guilty. (Connie Fossi and Phil Prazan / NBC)
A former Trump staffer who worked on the administration’s controversial family separation policies, and later joined Google, is on leave from the tech giant to support Joe Biden’s campaign. Miles Taylor appeared in an anti-Trump ad Monday and backed Biden for president. (Jennifer Elias / CNBC)
⭐ Oracle is in talks to acquire TikTok, a move that would challenge Microsoft. Both companies are far ahead of other US firms that have expressed interest. It’s notable because Oracle founder Larry Ellison has spent much of the past four years sucking up to Trump. Alex Sherman at CNBC has the scoop:
Oracle doesn’t have a consumer-facing social media or video business. In theory, Oracle could use customer data collected by TikTok to improve its marketing products, but spending tens of billions to acquire a consumer social media company would be a significant departure for the company. Oracle has struggled to find new avenues of growth as Amazon Web Services has dominated cloud computing, followed by Microsoft Azure and Google Cloud. In Oracle’s fiscal fourth quarter, revenue declined 6% to $10.4 billion. Oracle has a history of being acquisitive but has slowed down on large deals in recent years.
TikTok started a website and Twitter account to address misinformation in real time. “Let us set the record straight,” the company said on the website. “TikTok has never provided any U.S. user data to the Chinese government, nor would it do so if asked.” My basic feeling is, if you have to say “let us set the record straight,” you have already lost the argument. This is the 2020 version of the old Facebook “Hard Questions” blog, and seems likely to have about the same level of impact on brand perception. (Nathan Crooks / Bloomberg)
Advertisers are rethinking plans to advertise on TikTok ahead of a potential US ban. Privacy concerns have also prompted some to avoid the app. (Anissa Gardizy / The Information)
TikTok launched a creator ambassador program in 2019 to help a small group of influencers grow their followings on the app. The program has several perks, like all-expenses-paid trips, connections with brands, and access to new app features. Get those benefits while you can, hype houses! (Amanda Perelli / Business Insider)
TikTokers are posting videos making fun of their Trump-supporting parents. So, at least one good thing has come out of quarantine. (Kadia Goba / BuzzFeed)
Snapchat is testing a feature that would allow users to share Snap originals, shows and publisher stories with friends off the platform. It’s an expansion of a previous push the company made to get users to share stories outside of its app. (Sara Fischer / Axios)
Oculus will soon require all of its virtual reality headset users to sign up with a Facebook account. The Facebook-owned company says it will start removing support for separate Oculus accounts in October. (Adi Robertson / The Verge)
Facebook and NYU are working on a project that uses AI to make MRI scans four times faster. The scientists trained a machine learning model to “predict” what final MRI scans look like from just a quarter of the usual input data. (James Vincent / The Verge)
While some companies embrace remote work, Amazon is doubling down on its office infrastructure. The e-commerce giant is expanding its physical offices in six US cities including New York and adding thousands of corporate jobs in those areas — and without the enormous subsidies it once demanded as part of its sham HQ2 search. (Sebastian Herrera / The Wall Street Journal)
there is a 100% chance that dozens of trust fund silicon valley bros are getting together to make USPS-replacement startup apps and they’re all going to be called shit like “letr.” or “MĀL”
“I extracted all this wealth from SF but now that it’s struggling I’m taking my money and leaving behind the devastation, please recommend all your whitest communities”
Dudes say they stand 6’ apart but really they’re only standing about 5’9” away
Legit thought this was batman twerking pic.twitter.com/eMUbwguLS7