
Facebook’s deepfakes ban has some obvious workarounds

We’re used to social networks waiting until the damage has already been done before announcing a cleanup effort. When it comes to the synthetic media known as “deepfakes,” they’ve been notably ahead of the curve. In November, Twitter announced a draft policy on deepfakes and began soliciting public input. And on Monday night, Facebook announced that it would ban certain manipulated photos and videos from the platform. Here’s the blog post from Monika Bickert, Facebook’s vice president of global policy management:

Going forward, we will remove misleading manipulated media if it meets the following criteria:

– It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:

– It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.
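The policy quoted above is effectively a two-part conjunction with two carve-outs: media is removed only if it is both misleadingly edited and AI-generated, and neither exception applies. As a minimal sketch (a hypothetical function, not any real Facebook API), the logic looks like this:

```python
# Hypothetical sketch of the stated removal criteria, as described in
# Bickert's post. Media is removed only if BOTH conditions hold AND no
# exception applies. Function and parameter names are illustrative.

def should_remove(misleading_edit: bool, ai_generated: bool,
                  satire_or_parody: bool, words_omitted_only: bool) -> bool:
    """Return True if media meets the stated removal criteria.

    misleading_edit: edited or synthesized in ways not apparent to an
        average person, likely to mislead about what a subject said.
    ai_generated: product of AI/ML that merges, replaces, or
        superimposes content onto a video.
    satire_or_parody, words_omitted_only: the two stated exceptions.
    """
    if satire_or_parody or words_omitted_only:
        return False
    return misleading_edit and ai_generated

# A "cheap fake" (crudely edited, no AI involved) fails the second test:
assert should_remove(True, False, False, False) is False
# A deepfake meeting both criteria is removed:
assert should_remove(True, True, False, False) is True
# ...unless it is parody:
assert should_remove(True, True, True, False) is False
```

Note how narrow the conjunction makes the rule: drop either condition and the media stays up, which is exactly the gap the critics below point to.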

The move comes ahead of a planned hearing Wednesday about misinformation at which Bickert is scheduled to speak.

The change represents a significant step forward at a time when anxieties over deepfakes, and their potential role in shaping the 2020 election, are running high. The technology is improving at a steady clip: see these companies selling synthetic (but convincing) people to populate dating apps. It’s not hard to imagine an unscrupulous campaign posting synthetic videos on Facebook or Instagram of their opponent saying or doing something they didn’t. As of today, that’s officially against policy.

Notably — and contrary to what Facebook initially said — the policy will apply to advertisements as well as regular posts. Create a phony video of your opponent clubbing a baby seal and Facebook will make (yet another) exception to its policy against fact-checking political speech in advertisements, and remove anything found to be fake.

Still, some doubts lingered. Nina Jankowicz, who has a book coming out this year on Russian disinformation operations, said she is “still more worried about cheap fakes than deep fakes. Crudely edited, deliberately misleading videos and images are still effective, and they’re still allowed on most platforms.”

What’s a cheap fake? Something like this video of campaign workers doing a corny dance in support of presidential candidate Michael Bloomberg. In reality, they aren’t campaign workers at all — they’re audience members at an improv show filming a bit for a comedian, who shared it on a Twitter profile he had edited to make it appear as if he worked for Bloomberg. The ruse was exposed relatively quickly, but plenty of people still fell for it.

There are all sorts of ways to trick people like this. You can also grab an old video and put a new date on it, or just tweet it as if it’s brand new. You can Photoshop. You don’t need a state-of-the-art media lab to wreak havoc. That’s one reason why, even as the technology improved, information operations haven’t yet seemed very interested in deepfakes, as my colleague Russell Brandom wrote last year. “Uploading an algorithmically doctored video is likely to attract attention from automated filters, while conventional film editing and obvious lies won’t,” Brandom wrote. “Why take the risk?”

There’s one last workaround to Facebook’s new rule: comedy. For good reason, Facebook permits people to post satire and parody. Unfortunately, this rule is often exploited by fake-news purveyors and other sites adept at straddling the line between comedy and misinformation. Last week, in the wake of the US airstrike that killed Iranian commander Qassem Soleimani, an article titled “Democrats Call For Flags To Be Flown At Half-Mast To Grieve Death Of Soleimani” was posted to a site called the Babylon Bee. From there, it was shared more than 660,000 times on Facebook.

Surely some of the people who shared the article knew that the Babylon Bee is a satirical site. But read the comments in the original Facebook post and you’ll see that just as many seem to believe the article is real. In the flattened design of the News Feed, where every shared article carries equal weight, it can be hard to tell.

All these many asterisks help explain why Democratic politicians seem mostly unimpressed with Facebook’s deepfakes ban. “Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” said a spokeswoman for Rep. Nancy Pelosi, the unwitting star of a famously misleading (though not deepfaked) viral video last year.

Joe Biden’s campaign struck a similar note: “Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created.”

Still, Monday’s action doesn’t preclude the company from addressing some of these nuances down the road. And as we see more of this sort of thing in 2020, I suspect it will. In the meantime, one of the big platforms has established at least a partial bulwark against the infocalypse — though its strength will depend entirely on how strongly Facebook defends it. Policy, as ever, is what you enforce.

Before the break, I reported here that Pinterest had cut contractors’ vacation benefits, forcing them to work over the holiday if they wanted to be paid during Christmas week. After I published that piece, employees were upset, and the company reversed course. Contractors got their paid week off after all, just like Pinterest’s full-time employees. “We realized our communication of this change may have come too late in the year for people to plan accordingly for this holiday season,” a spokesman told me.

Not much more to say here, other than that journalism is the best job in the world and don’t let nobody ever tell you different.

Today in news that could affect public perception of the big tech platforms.

Trending up: Facebook fundraisers have generated $37 million for fire relief in Australia, the company says. Actor Celeste Barber’s fundraiser alone raised $30 million from 1.1 million people, and is now the largest fundraiser in Facebook history.

Trending sideways: Facebook is setting up a new engineering team in Singapore to focus on its lucrative China ad business. The news comes as CEO Mark Zuckerberg has ramped up criticism of the country over human-rights issues.

Trending sideways: Shares of Google hit an all-time high yesterday, closing out at $1,397.81 per share. Apparently, investors are unfazed by the ongoing antitrust investigation into the company, as well as employee unrest.

The FBI asked Apple to help unlock two iPhones linked to last month’s shooting at Naval Air Station Pensacola in Florida. Apple said it has been cooperating with the government and had already handed over all the data in its possession. Here’s what the company told Pete Williams at NBC:

“We have the greatest respect for law enforcement and have always worked cooperatively to help in their investigations,” Apple said in a statement. “When the FBI requested information from us relating to this case a month ago, we gave them all of the data in our possession and we will continue to support them with the data we have available.”

A law enforcement official said there’s an additional problem with one of the iPhones thought to belong to the gunman, Mohammed Saeed Alshamrani, who was killed by a deputy during the attack: he apparently fired a round into the phone, further complicating efforts to unlock it.

A leaked Facebook memo shows longtime executive Andrew “Boz” Bosworth told employees that the company has a moral duty not to tilt the scales against President Trump in the 2020 election. Kevin Roose, Sheera Frenkel and Mike Isaac from The New York Times have the scoop:

In a meandering 2,500-word post, titled “Thoughts for 2020,” Mr. Bosworth weighed in on issues including political polarization, Russian interference and the news media’s treatment of Facebook. He gave a frank assessment of Facebook’s shortcomings in recent years, saying that the company had been “late” to address the issues of data security, misinformation and foreign interference. And he accused the left of overreach, saying that when it came to calling people Nazis, “I think my fellow liberals are a bit too, well, liberal.”

Boz then shared the memo in its entirety from his own Facebook page. Read it.

The White House unveiled 10 principles that federal agencies should consider when devising regulations for the use of artificial intelligence in the private sector, but stressed that a key concern should be limiting regulatory “overreach.” (James Vincent / The Verge)

The 2020 election is likely the most anticipated event in US history when it comes to digital security. Russia still poses a massive threat, as do Iran and China. Experts are also warning that it’s not just the general election that is at risk — the primaries will be a target, too. (Joseph Marks / The Washington Post)

Experts warn that the United States needs to be prepared for cyber retaliation from Iran, which employs different tactics than Russia. Iran has spent years building an online influence apparatus that uses fake websites and articles meant to mimic real news and disappear quickly. (Sara Fischer / Axios)

Sonos sued Google, seeking financial damages and a ban on the sale of Google’s speakers, smartphones and laptops in the United States. Sonos accused Google of infringing on five of its patents, including technology that lets wireless speakers connect and synchronize with one another. (Jack Nicas and Daisuke Wakabayashi / The New York Times)

A researcher dove into the narratives surrounding the death of Qassem Soleimani, a top Iranian commander, on one of Iran’s most popular social media platforms, Telegram.

As Taiwan gears up for a major election this week, officials and researchers worry that China is experimenting with social media manipulation to sway the vote. Voters are already awash in false or highly partisan information, making such tactics easy to hide. (Raymond Zhong / The New York Times)

Violence erupted at Jawaharlal Nehru University in India last week, after members of a student group — apparently coordinating through WhatsApp — attacked fellow students and teachers. An investigation into the group revealed who the attackers were and how they coordinated the violence. (Meghnad S, Prateek Goyal and Anukriti Malik / Newslaundry)

Politicians, parties, and governments are hiring dark-arts public-relations firms to spread lies and misinformation. One firm promised to “use every tool and take every advantage available in order to change reality according to our client’s wishes.” Craig Silverman, Jane Lytvynenko and William Kung have the story:

If disinformation in 2016 was characterized by Macedonian spammers pushing pro-Trump fake news and Russian trolls running rampant on platforms, 2020 is shaping up to be the year communications pros for hire provide sophisticated online propaganda operations to anyone willing to pay.

Also — the threat isn’t limited to the US:

Most recently, in late December, Twitter announced it removed more than 5,000 accounts that it said were part of “a significant state-backed information operation” in Saudi Arabia carried out by marketing firm Smaat. The same day, Facebook announced a takedown of hundreds of accounts, pages, and groups that it found were engaged in “foreign and government interference” on behalf of the government of Georgia. It attributed the operation to Panda, an advertising agency in Georgia, and to the country’s ruling party.

AI start-ups are selling pictures of computer-generated faces that appear to be real people. They offer companies a chance to “increase diversity” in their ads without needing human beings. They’ve also signed up dating apps that need more images of women. (Drew Harwell / The Washington Post)

The new trick to going viral on Instagram is making an Instagram filter, as seen by the bewilderingly popular “What Disney Character Are You?” sensation. This story breaks down how it works. (Chris Stokel-Walker / Input)

Michelle Obama launched an Instagram video series about students navigating their first year of college. The former First Lady partnered with digital media company ATTN: to launch a video series on IGTV, Instagram’s video platform. (Sara Fischer / Axios)

The CES gadget show in Las Vegas is all-in on surveillance technology, from face scanners that check in some attendees to the cameras-everywhere array of digital products. (Matt O’Brien / Associated Press)

Woe unto the big tech executive who uses an extended metaphor about Lord of the Rings without checking his facts. Here’s Chaim Gartenberg on the Boz memo:

As part of his argument, Boz makes the comparison by citing none other than J.R.R. Tolkien’s The Lord of the Rings to explain his decision. Facebook, Boz argues, is akin to Sauron’s One Ring, and wielding its power — even with noble intent — would only lead to ruin. […]

In Tolkien’s books and the film adaptations, Galadriel is concerned about the power of the Ring corrupting her — as it does all, save the Dark Lord himself. But not once does she contemplate using its power for good. “In place of the Dark Lord you will set up a Queen. And I shall not be dark, but beautiful and terrible as the Morning and the Night! … All shall love me and despair!” Tolkien writes.

Later, nerds.

Send us tips, comments, questions, and Lord of the Rings analogies to casey@theverge.com and zoe@theverge.com.

This Article was first published on theverge.com
