How Three Election-Related Falsehoods Spread

Instagram on Thursday took aim at the spread of misinformation on its platform, announcing that it would temporarily eliminate users’ ability to view some recent posts ahead of Election Day.

In two tweets, Instagram said it would remove the “recent” tab from hashtag pages. Previously, users could view the most recent posts tagged with a particular hashtag. Now, only “top” posts under a specific hashtag that have been reviewed by the platform will be viewable.

Instagram, which is owned by Facebook, said the change was made to “reduce the real-time spread of potentially harmful content that could pop up around the election.”

The change took effect Thursday night, and the “recent” tab will return sometime after the election, an Instagram spokeswoman said. The platform said it hoped the change would allow it to proactively stop misinformation from spreading, rather than having to wait until a falsehood had already been widely shared.

Nina Jankowicz, a disinformation analyst at the Wilson Center, a nonpartisan think tank, said Instagram’s decision, just days before the election, was “rather late” but “better than nothing.”

“I hope in the future the social media platforms recognize that elections are an inflection point in disinformation campaigns, not an end point,” she said. “A decision like this earlier in the cycle might have lessened the spread of voting misinformation, for instance.”

Ms. Jankowicz said much of the misinformation spreading on Instagram was related to QAnon, the baseless pro-Trump conspiracy theory, and false information about Senator Kamala Harris of California, the Democratic vice-presidential nominee.

Instagram’s decision follows crackdowns on viral falsehoods by Facebook and Twitter. Twitter recently added context to trending topics and de-emphasized retweets ahead of the election, while Facebook said it would ban political ads indefinitely. Both companies have added some labels to misleading posts and highlighted accurate voting information after rising alarm about the possibility of a disputed election outcome and the chaos that could follow.

Falsehoods about Tuesday’s election have overwhelmed local election officials, who said they were dealing with “tsunamis of misinformation,” losing sleep and working extra-long hours.

The officials told us they were dealing with several common flavors of election-related misinformation. So we decided to track three categories of the rumors they had encountered using CrowdTangle, a Facebook-owned analytics tool, and then focused on the spread of one of the lies in each of the categories. We also recorded the volume of tweets about the rumors we followed using BuzzSumo, another analytics tool.
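The per-rumor tallying behind this kind of analysis amounts to a simple aggregation over exported post data. The sketch below is illustrative only, not the Times’ actual pipeline; the field names and sample records are hypothetical stand-ins for a CrowdTangle-style export.

```python
from collections import defaultdict

# Hypothetical records shaped like a CrowdTangle CSV export:
# one row per public post, with a rumor category and engagement counts.
posts = [
    {"category": "ballot_harvesting", "likes": 120, "comments": 45, "shares": 30},
    {"category": "ballot_harvesting", "likes": 80, "comments": 10, "shares": 5},
    {"category": "shredded_ballots", "likes": 300, "comments": 90, "shares": 60},
]

def tally_rumors(posts):
    """Count posts and total engagement (likes + comments + shares) per rumor."""
    counts = defaultdict(int)
    engagement = defaultdict(int)
    for p in posts:
        counts[p["category"]] += 1
        engagement[p["category"]] += p["likes"] + p["comments"] + p["shares"]
    return dict(counts), dict(engagement)

counts, engagement = tally_rumors(posts)
print(counts["ballot_harvesting"])      # 2 posts
print(engagement["ballot_harvesting"])  # 290 total interactions
```

The figures cited below (post counts and combined likes, comments and shares per rumor) are the output of exactly this kind of grouping.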

The data showed how a single rumor pushing a false narrative could rapidly gain traction on Facebook and Twitter, generating tens of thousands of shares and comments. That has made the misinformation particularly hard for elections officials to fight.

“The true costs of misinformation are not paid by platform companies,” said Joan Donovan, the research director at Harvard University’s Shorenstein Center. “They are paid by everyone else who has to deal with the aftermath.”

A spokesman for Facebook, Andy Stone, said the company prohibits voter interference, works with fact-checking organizations and has introduced a hub of accurate voting information.

Twitter said it did not create any specific Twitter Moments explaining these particular rumors, but does aim to proactively debunk false claims and provide information about voting by mail.

Here’s what we found.

This misinformation features the unproven assertion that ballots are being “harvested,” or collected and dropped off in bulk by unauthorized people.

In the example we focused on, Representative Ilhan Omar, a Minnesota Democrat, was falsely accused last month of being engaged in or connected to systematic illegal ballot harvesting.

There were 3,959 public Facebook posts sharing this rumor, according to our analysis. Those posts generated 953,032 likes, comments and shares. Among those who shared the lie were two pro-Trump Facebook groups targeting Minnesota residents, as well as President Trump himself. At least 26,300 tweets also discussed the falsehood.

Jeremy Slevin, a spokesman for Ms. Omar, said in an emailed statement that there was no truth to the claim.

Mail-in ballots and related materials being tossed was another popular falsehood that election officials said they were hearing. We looked at one of these rumors, which was pushed by a far-right website called The Right Scoop. This month, the site published an article with the headline, “Tons of Trump mail-in ballot applications SHREDDED in back of tractor trailer headed for Pennsylvania.”

The article generated 163 individual public posts on Facebook. It was liked, commented on and shared 91,000 times on the social network, according to our analysis. It was also shared 1,032 times on Twitter.

PolitiFact debunked the video on which the article was based. Facebook added a label to posts that shared the rumor, noting that it contained false information.

The Right Scoop later corrected its post — but its correction did not travel as far as the lie, receiving just a single like on Facebook. The Right Scoop did not respond to a request for comment.

Election officials also said people were confronting them with false assertions that antifa, the loose collection of left-wing activists, and Black Lives Matter protesters were coordinating riots at polling places across the country.

One of those rumors began this month when The Federalist, a conservative outlet, noticed that a liberal activist website called Shut Down DC said people should protest on the streets if Mr. Trump was re-elected. Right-wing commentators then attached inflammatory captions to their posts sharing The Federalist’s article. Many said it was evidence of planned far-left violence on Election Day and after, and stated, without proof, that Black Lives Matter was involved.

The false rumor was then shared in 472 public Facebook posts, according to our analysis. It generated 99,336 likes, shares and comments. On Twitter, the rumor was shared at least 400 times.

Craig Sawyer, a right-wing commentator and Marine veteran, shared the rumor on Facebook on Oct. 16. He said in an email that his post was not a call for violence and that The New York Times should focus on “the key planners and financiers of all the rioting, arson, looting and murder” instead.

Oct. 29, 2020, 5:00 p.m. ET

A new study identified thousands of automated accounts, or “bots,” on Twitter that will most likely sow disinformation.
Credit: Lionel Bonaventure/Agence France-Presse — Getty Images

Be aware: Fake Twitter accounts will very likely sow disinformation in the few remaining days before Election Day on Nov. 3.

This week, researchers at the University of Southern California released a new study that identified thousands of automated accounts, or “bots,” on Twitter posting information related to President Trump, Joseph R. Biden Jr. and their campaigns. The study examined over 240 million election-related tweets from June through September.

Many of these bots, the study said, spread falsehoods related to the coronavirus and far-right conspiracy theories such as QAnon and “pizzagate.” The study said that bots accounted for 20 percent of all tweets involving these political conspiracy theories.

“These bots are an integral part of the discussion” on social media, said Emilio Ferrara, the University of Southern California professor who led the study.

A Twitter spokesman questioned the study’s methods. “Research that uses only publicly available data is deeply flawed by design and often makes egregiously reductive claims based on these limited signals,” the spokesman said. “We continue to confront a changing threat landscape.”

Social media companies such as Twitter and Facebook have long worked to remove this kind of activity, which has been used by groups trying to foment discord in past elections in the United States and abroad. And the University of Southern California study showed that about two-thirds of the conspiracy-spreading bots it identified were no longer active by the middle of September.

In some cases, bots exhibit suspicious behavior. They might “follow” an unusually large number of other accounts — a number nearly as large as the number of accounts following them — or their usernames will include random digits.
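Those surface signals can be written down as simple rules. The sketch below is a hypothetical heuristic for illustration, not the study’s actual detection method; the ratio threshold and username pattern are assumptions.

```python
import re

def looks_suspicious(account, ratio_threshold=0.9):
    """Flag accounts whose following count is nearly as large as their
    follower count, or whose username ends in a long run of digits --
    two of the surface signals described above. Thresholds are
    illustrative guesses, not the study's criteria."""
    followers = account["followers"]
    following = account["following"]
    # Follows nearly as many accounts as follow it back.
    ratio_flag = (
        followers > 0
        and min(followers, following) / max(followers, following) >= ratio_threshold
    )
    # Username like "patriot4839201" with trailing random-looking digits.
    name_flag = re.search(r"\d{5,}$", account["username"]) is not None
    return ratio_flag or name_flag

print(looks_suspicious({"username": "patriot4839201", "followers": 50, "following": 48}))  # True
print(looks_suspicious({"username": "jane_doe", "followers": 2000, "following": 150}))     # False
```

Real detection systems combine many more signals (posting cadence, content similarity, account age), which is why, as the researchers note below, the naked eye is rarely enough.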

But identifying bots with the naked eye is far from an exact science. And researchers say that automated accounts have grown more sophisticated in recent months. Typically, they say, bots are driven by a mix of automated software and human operators, who work to orchestrate and vary the behavior of the fake accounts to avoid detection.

Some bots show signs of automation — like only retweeting rather than tweeting new material, or posting very frequently — but it can be difficult to definitively prove that accounts are inauthentic, researchers say. An automated account may stop tweeting at night, for example, as if there is a person behind it who is sleeping.

“You can clearly see they are automated,” said Pik-Mai Hui, an Indiana University researcher who has helped build a new set of tools that aim to track these bots in real time. “But they are operated in a way that makes it very difficult to say with complete certainty.”

These bots are operating on both sides of the political spectrum, according to the study from the University of Southern California. But right-leaning bots outnumbered their left-leaning counterparts by a ratio of 4-to-1 in the study, and the right-leaning bots were more than 12 times more likely to spread false conspiracy theories.

The study indicates that 13 percent of all accounts tweeting about conspiracy theories are automated, and because they tweet at a higher rate, they are sending a much larger proportion of the overall material.
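The arithmetic behind that disproportion is straightforward: a minority of accounts tweeting at a higher rate produces an outsized share of the content. The rate multiplier in this sketch is a hypothetical figure for illustration; the study reports only the 13 percent account share.

```python
def bot_tweet_share(bot_account_frac, rate_multiplier):
    """Fraction of all tweets sent by bots, if bots make up
    `bot_account_frac` of accounts and each bot tweets
    `rate_multiplier` times as often as a human-run account."""
    bot_volume = bot_account_frac * rate_multiplier
    human_volume = (1 - bot_account_frac) * 1.0
    return bot_volume / (bot_volume + human_volume)

# With the study's 13% account share and an assumed 5x tweet rate,
# bots would produce over 40% of the tweets on these topics.
print(round(bot_tweet_share(0.13, 5), 3))  # 0.428
```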

“This is the most concerning part,” Dr. Ferrara said. “They are increasing the effect of the echo chamber.”

Oct. 28, 2020, 1:36 p.m. ET

Last week, a political action committee called the American Principles Project unveiled a new video on Twitter falsely claiming that Democratic presidential nominee Joseph R. Biden Jr. supported sex changes for 8-year-olds.

Since Friday, a similar video has also appeared on Facebook as many as 100,000 times — primarily in Michigan, a swing state in the Nov. 3 election.

What has been harder to pinpoint is how widely the video has been spreading through text messages.

Though companies like Facebook and Twitter have developed tools for tracking and policing disinformation on their social networks, texting activity is largely a free-for-all that receives little scrutiny from tech companies and government regulators.

“There is no way to audit this,” said Jacob Gursky, a research associate at the University of Texas at Austin. “Organizations are just collecting cellphone numbers from data brokers and mass-texting people.”

The video circulated in Michigan, Wisconsin and Pennsylvania as part of a coordinated texting campaign, according to a study by researchers at the University of Texas at Austin. Over the weekend, it reached a reporter who covers online disinformation for the news site Protocol. The reporter had a Pennsylvania cellphone number.

Credit: Handout

Twisting the meaning of Mr. Biden’s statements during a recent “town hall” event — which condemned discrimination against children who identify as transgender but did not address sex changes — the campaign was a high-profile example of increasingly widespread efforts to distribute disinformation through text messages.

“During a recent town hall, Joe Biden endorsed giving 8- to 10-year-olds sex change treatments,” the texts read. “This is way too extreme for me. I can’t support him.”

The texts tracked by Mr. Gursky and his fellow researchers said they were sent by the American Principles Project, but they referred to the organization only as “the APP PAC” and purported to come from a “Democratic volunteer.”

The American Principles Project did not respond to a request for comment.

Data on texting campaigns is hard to come by. But Robokiller, a company that blocks automated phone calls and texts, said Americans received 2.6 billion political text messages in September, a 400 percent increase since June. The company estimated that since June, Republican-affiliated organizations have sent roughly six times more messages than their Democratic counterparts.

The Texas researchers said texting campaigns are in part a reaction to increased scrutiny on social media services. As Facebook and Twitter have pushed disinformation networks off their services, the networks have resurfaced on private messaging apps like Signal, Telegram and WhatsApp, where they can continue to operate without being monitored.

Private disinformation networks are prevalent in places like India and Mexico, the researchers said. But they are becoming more common in certain parts of the United States, such as southern Florida, where apps like WhatsApp are popular.

Oct. 27, 2020, 10:32 p.m. ET

Facebook recently said it would not accept any new political ads in the week before Election Day, but would continue to run ads that had been bought ahead of time.
Credit: Laura Morton for The New York Times

Facebook said on Tuesday that it had removed ads from both the Trump and Biden presidential campaigns that could mislead voters in states where early voting had not started.

The ads were bought by the campaigns over the weekend, as part of a last-minute push to secure Facebook ads before the end of Monday. Facebook recently said it would not accept any new political ads in the week before Election Day, but would continue to run ads that had been bought ahead of time.

The Trump and Biden campaigns did not immediately respond to requests for comment.

Megan Clasen, a Biden campaign media adviser, tweeted that Facebook had told her office that it could not run ads that urged people to vote by saying that “Election Day is tomorrow” or “Election Day is today.” She then pointed to a similar ad by the Trump campaign that said, “Election Day is today.”

Several hours after journalists and Biden campaign officials contacted Facebook, the Trump campaign ad was removed. Facebook said the ads were misleading because they could be seen by voters in states where voting was not currently open.

“As we made clear in our public communications and directly to campaigns, we prohibit ads that say ‘Vote Today’ without additional context or clarity,” a Facebook spokesman said.

Facebook had previously said it would not fact-check political ads. But the company said it would remove advertisements that could mislead voters or provide incorrect information on how to vote.

Oct. 27, 2020, 6:16 p.m. ET

George Soros, the liberal financier who has featured prominently in right-wing conspiracy theories for years.
Credit: Fabrice Coffrini/Agence France-Presse — Getty Images

This has been, by any measure, a bad year for consensus reality.

First, there was President Trump’s impeachment — a divisive and emotionally charged proceeding that unleashed a torrent of lies, exaggerations and viral innuendo.

Then came the Covid-19 pandemic — an even bigger opportunity for cranks, conspiracy theorists and wishful thinkers to divide us along epistemic lines, into those who believed the experts and those who preferred to “do their own research.”

The Black Lives Matter protests this summer were a feeding frenzy for those looking to distort and reframe the narrative about police violence and racial justice.

And while election years are always busy times for fact-checkers, Mr. Trump’s fusillade of falsehoods about voter fraud, Spygate and Hunter Biden’s emails this year has resulted in a bigger challenge for those charged with separating truth from fiction.

Zignal Labs, a firm that tracks online misinformation, analyzed which major news topics in 2020 were most likely to generate misinformation. Its data, which draws from sources including social media apps like Facebook, Twitter, Instagram and Reddit, as well as newspapers and broadcast TV transcripts, isn’t an exact accounting of every single piece of misinformation out there. But it’s a rough gauge of which topics are most frequently used as vehicles for misinformation, by those looking to inject confusion and chaos into media narratives.

(Quick methodological note: These “misinformation mentions” are limited to topics related to either the election or the Covid-19 pandemic, and are calculated by Zignal’s automated system based on the number of mentions of a given term along with a term that is frequently associated with misinformation. So, for example, a post that mentions vaccines in the context of Covid-19 would not be counted as a misinformation mention, but a post that mentions vaccines along with a hashtag like #FauciTheFraud or a name like Bill Gates — a frequent target of anti-vaccine activists — would be counted, even if the underlying story was debunking such a false claim.)
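In code, that co-occurrence rule amounts to checking each post for a topic term alongside any flagged term. This is a minimal sketch of the rule as described in the note above, not Zignal’s actual system; the flag terms are the examples given in the note.

```python
# Hypothetical flag terms associated with vaccine misinformation,
# drawn from the examples in the methodological note above.
FLAG_TERMS = {"#faucithefraud", "bill gates"}

def is_misinfo_mention(post, topic="vaccine"):
    """A post counts as a 'misinformation mention' if it contains the
    topic term AND any frequently-associated flag term -- even if the
    underlying post is debunking the false claim."""
    text = post.lower()
    return topic in text and any(flag in text for flag in FLAG_TERMS)

posts = [
    "New vaccine trial results published today",                 # topic only: not counted
    "The vaccine push is a plot! #FauciTheFraud",                # topic + flag: counted
    "Fact check: no, Bill Gates is not microchipping vaccines",  # debunk still counts
]
print(sum(is_misinfo_mention(p) for p in posts))  # 2
```

The third example shows why this is a rough gauge rather than an exact accounting: debunkings co-occur with the same flag terms as the falsehoods they correct.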

The topic most likely to generate misinformation this year, according to Zignal, was an old standby: George Soros, the liberal financier who has featured prominently in right-wing conspiracy theories for years.

Out of 2.6 million total media mentions of Mr. Soros so far this year, nearly half (1.1 million) were accompanied by terms (“Soros-funded,” “bankroll”) that suggested that he played a role in funding left-wing agitators. They peaked this summer, as false claims that Mr. Soros had funded Black Lives Matter protests went viral following the killing of George Floyd.

Second on the list was Ukraine, which peaked as a misinformation topic in January and February, during Mr. Trump’s impeachment proceedings, when it appeared alongside keywords like “deep state” and “WWG1WGA,” a shorthand used by followers of the QAnon conspiracy movement. About 34 percent of Ukraine’s 9.2 million total media mentions were flagged as misinformation-related.

Third was vote-by-mail, which has been the subject of a torrent of misinformation by Mr. Trump and right-wing media outlets. Roughly one out of every five vote-by-mail stories in 2020 has been misinformation, according to Zignal’s analysis, with terms like “fraud” and “scam” being common red flags.

With all three subjects, some of the most common spreaders of misinformation were right-wing news sites like Breitbart and The Gateway Pundit. YouTube also served as a major source of misinformation about these topics, according to Zignal.

Of course, the misinformation we’ve seen so far this year might pale in comparison to what happens after next week’s election, if a contested result or allegations of fraud result in a new wave of false or misleading claims. Social media platforms have signaled that they will remove premature claims of victory, and attempts to delegitimize the election. But they also pledged to take down misinformation about Covid-19, and have had only mixed success in doing so.

Here are the topics that generated the highest percentage of misinformation narratives:

1. George Soros (45.7 percent misinformation mentions)

2. Ukraine (34.2 percent)

3. Vote by Mail (21.8 percent)

4. Bio Weapon (24.2 percent)

5. Antifa (19.4 percent)

6. Biden and Defund the Police (14.2 percent)

7. Hydroxychloroquine (9.2 percent)

8. Vaccine (8.2 percent)

9. Anthony Fauci (3.2 percent)

10. Masks (0.8 percent)

YouTube, like other social media companies, faces a misinformation challenge ahead of the Nov. 3 election.
Credit: Jim Wilson/The New York Times

With a week to go before Election Day on Nov. 3, YouTube, like other social media firms, is girding for a test of its ability to keep misinformation and other problematic videos off its site.

In a blog post on Tuesday laying out its approach, the company said it planned to apply its basic approach of removing content that violates its policies, elevating videos from authoritative sources, and limiting the spread of so-called borderline content that tests the boundaries of its policies.

YouTube said it would be especially vigilant about content that encourages interference in the electoral process, such as videos inciting others to commit violent acts at polling stations or ones making false claims that mail-in ballots have been manipulated.

“Our teams have been working around the clock to make sure we have the systems and policies to prevent the abuse of our systems and provide access to authoritative information this election season,” wrote Leslie Miller, YouTube’s vice president for government affairs and public policy.

The election is a critical test of YouTube’s efforts to prevent the spread of dangerous conspiracy theories and hate speech on its platform. As the biggest repository of videos on the internet, YouTube has come under criticism in recent years for not doing enough to rein in the toxic content on its site while pushing viewers toward increasingly radical points of view.

In the days leading up to Nov. 3, YouTube’s home page will feature links to information about how and where to vote. As the polls close, YouTube will feature a playlist of live election results coverage from what it deems authoritative news sources. YouTube did not provide a full list of the sources but cited CNN and Fox News as examples.

Starting on the day of the election, YouTube said, it will place a so-called information panel above election-related search results and below videos discussing the election. The panel will warn viewers that results may not be final and offer a link to Google’s real-time election results feature, based on information from The Associated Press.

The former Kentucky governor Matt Bevin, right, initially disputed the win by his challenger, Andy Beshear, left.
Credit: Pool photo by Albert Cesare

Local election officials, politicians and disinformation researchers continue to express concern about how misinformation about voting could disrupt Election Day next week. False and misleading information, research shows, has already been spreading widely.

The 2019 race for governor of Kentucky illustrates what can go wrong, as we explored in the latest episode of “Stressed Election.” In that race, the sitting governor, Matt Bevin, a Republican, disputed the results when the vote tally showed him narrowly losing to his Democratic challenger, Andy Beshear.

Mr. Bevin and some of his allies argued, without showing any evidence, that there were voting irregularities and fraud, echoing some false and misleading statements made on social media. The governor initially refused to concede even though returns showed him trailing by about 5,000 votes. Mr. Bevin conceded about a week later.

The race offers some lessons about the power of disinformation in American elections:

1. Misinformation efforts don’t need to be sophisticated to be successful. In Kentucky, an account with just 19 followers sent out a tweet on election night that claimed to have “shredded a box of Republican ballots.” The tweet, sent as a joke by a college student, would eventually reach thousands.

2. Stopping the spread of misleading election information is not easy. Election officials noticed the false “shredded” tweet, which was retweeted by a few popular conservative accounts, and reported it to Twitter. The company removed the post within an hour, but screenshots of the post were retweeted by dozens of accounts, with retweets reaching well into the thousands. Tracking all of those screenshots proved difficult for both election officials and Twitter.

3. One piece of misinformation can beget much more. The sudden spread of the false tweet about shredding ballots seemed to be a green light for other claims. Some tweets questioned the accuracy of voter rolls in Kentucky; others wondered about “hackers” attacking the “cloud” where election results were stored, though no such “cloud” is used in Kentucky elections. And baseless claims of voter fraud were rampant.

4. There are networks ready to amplify and spread misinformation. Some groups on Twitter spread countless conspiracy theories, from the QAnon cabal conspiracy to anti-mask claims. These networks can quickly seize on a piece of conspiratorial misinformation and amplify and accelerate its spread, which is part of why a single tweet from an obscure account reached so many people in Kentucky.

5. An extremely close election is particularly ripe for misinformation. Following election night in Kentucky, the brush fire of misinformation that was spreading online quickly took hold offline. Mr. Bevin’s supporters staged news conferences with baseless claims of fraud, and set up a robocall network telling people to “please report suspected voter fraud” to the state elections board. Online, the discussion had now moved far beyond a case of shredded ballots to accusations of a stolen or rigged election.

Oct. 26, 2020, 3:34 p.m. ET

Twitter, Facebook and YouTube have been rushing to place safeguards on their networks in the days leading up to Election Day.
Credit: Laura Morton for The New York Times

Twitter’s emphasis on up-to-the-second posts has made the site a must-visit destination for people to find the latest in news and current events. It has also made Twitter a vessel for the spread of false information.

To stem that tide, Twitter on Monday announced a new effort to preemptively debunk, or “prebunk” in Twitter parlance, some of the most commonly circulated false and misleading information about the election.

The company will, for the first time, pin information to the top of users’ timelines about how to vote, as well as a notice that voting results may not come immediately on Election Day — two common topics for misinformation across social media.

“We believe it’s critical that we make it easy for people to find that information,” said Nick Pacilio, a Twitter spokesman. “These prompts will alert people that they may encounter misinformation, and provide them with credible, factual information on the subject.”

The move is the latest in a series of actions taken by Twitter, Facebook and YouTube to place safeguards on their networks in the days leading up to Election Day. Lawmakers and the public harshly criticized the companies for allowing misinformation to spread ahead of the 2016 presidential election.

Facebook, which at three billion users is much larger than Twitter, has announced several changes in the past few months to stem misinformation about the election. It has started to pin facts about voting to the top of users’ timelines, added labels to posts that spread false voting information, placed a ban on new political advertising in the seven days before Election Day, and removed paid political ads entirely after the polls close.

Twitter has taken several steps, too. Last week, the company turned off some of the features that help tweets go viral faster. That includes adding an extra step to retweeting posts, and prompting users to avoid retweeting a post with a link to a news article if they had not already read the attached article.

The new pinned information will appear in the home timeline of every person with a Twitter account located within the United States, and will be available in 42 languages, beginning Monday.

The prompts will also appear in Twitter’s search bar when people search for related terms or hashtags. Each pinned alert will also link out to a collection of credible information on the subject — be it information on how to vote, or election returns — curated within a Twitter “moment” compiled from election experts, journalists and other authoritative sources of information.

Oct. 23, 2020, 8:04 p.m. ET

In Thursday’s presidential debate, President Trump made several misleading claims about the business dealings of the family of his opponent, Joseph R. Biden Jr.

Mr. Trump suggested, without evidence, that Mr. Biden had consulted for his son Hunter Biden to help with the younger Biden’s business. Mr. Trump also said that Mr. Biden had used his influence during his time as vice president to help his son land lucrative business deals. Both claims were misleading.

But the comments nonetheless drew attention to Hunter Biden and his work, according to a New York Times analysis of Google searches and Facebook posts during and after the debate.

Searches for “Hunter Biden” on Google more than tripled during the debate compared with before the event, according to Google Trends data. Facebook posts about Hunter Biden also spiked, according to data from CrowdTangle, a social media analytics tool owned by Facebook.

Nearly 70,000 new Facebook posts popped up after the debate mentioning “false, unproven or misleading claims” about Hunter Biden’s business interactions, said Avaaz, a progressive human rights organization that studies misinformation. The majority of the posts came from Facebook pages that had been repeatedly flagged for sharing false or misleading claims, Avaaz said.

A Facebook spokeswoman said the company’s third-party fact checkers had assessed and debunked several claims related to Hunter Biden.

Mr. Trump’s comments at last month’s presidential debate also led to spikes in internet traffic. After he said that the Proud Boys, a far-right group that has endorsed violence, should “stand back and stand by,” searches for the group soared, as did posts about them on Twitter and Facebook.

Oct. 23, 2020, 5:38 p.m. ET

Here at Daily Distortions, we try to debunk false and misleading information that has gone viral. We also want to give you a sense of how popular that misinformation is, in the overall context of what is being discussed on social media. Each Friday, we will feature a list of the 10 most-engaged stories of the week in the United States, as ranked by NewsWhip, a firm that compiles social media performance data. (NewsWhip tracks the number of reactions, shares and comments each story receives on Facebook, along with shares on Pinterest and by a group of influential users on Twitter.) This week’s data runs from 9:01 a.m. on Friday, Oct. 16, until 9 a.m. on Friday, Oct. 23.
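A ranking like NewsWhip’s can be thought of as sorting stories by a combined interaction count. The sketch below is illustrative only; the field names, sample numbers and the unweighted sum are assumptions, not NewsWhip’s actual scoring.

```python
def engagement_score(story):
    """Total tracked interactions: Facebook reactions, shares and
    comments, plus Pinterest shares and shares by tracked Twitter
    influencers (an assumed unweighted sum, for illustration)."""
    return (story["fb_reactions"] + story["fb_shares"] + story["fb_comments"]
            + story["pinterest_shares"] + story["twitter_influencer_shares"])

stories = [
    {"title": "A", "fb_reactions": 500, "fb_shares": 200, "fb_comments": 100,
     "pinterest_shares": 10, "twitter_influencer_shares": 5},
    {"title": "B", "fb_reactions": 900, "fb_shares": 50, "fb_comments": 40,
     "pinterest_shares": 0, "twitter_influencer_shares": 2},
]
ranked = sorted(stories, key=engagement_score, reverse=True)
print([s["title"] for s in ranked])  # ['B', 'A']
```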

This week, as the presidential election approached, the most viral news on social media was, surprisingly, not directly related to the election.

Of the 10 most-engaged stories on our list this week, only three were directly linked to the candidates: two Fox News stories and an MSNBC story. Two other stories that got lots of engagement concerned Pope Francis’ support for same-sex civil unions and the revelation that the parents of 545 children separated from their families under the Trump administration’s family separation policy could not be found.

Here’s the full list:

Oct. 23, 2020, 3:02 p.m. ET

A month before the 2016 presidential election, WikiLeaks released hacked emails from John Podesta, Hillary Clinton’s campaign chairman.

Last week, The New York Post published an article featuring emails from a laptop purportedly owned by Hunter Biden, the son of the Democratic presidential nominee, Joseph R. Biden Jr. The emails, about business dealings in Ukraine, have not been independently verified.

So how did cable news treat these two caches, both aimed at Democratic candidates at the height of their presidential campaigns?

The answer: Fox News is giving more airtime to the unverified Hunter Biden emails than it did to the hacked emails from Mr. Podesta in 2016, according to an analysis from the Atlantic Council’s Digital Forensic Research Lab, which studies disinformation.


While Fox News’s mentions of the word “WikiLeaks” peaked at 198 seconds in a single day in mid-October 2016, the channel’s references to “Hunter” reached 273 seconds in one day last week, according to the analysis. Fox News did not respond to a request for comment.

In contrast, most viewers of CNN and MSNBC would not have heard much about the unconfirmed Hunter Biden emails, according to the analysis. CNN’s mentions of “Hunter” peaked at 20 seconds and MSNBC’s at 24 seconds one day last week.

CNN and MSNBC covered the WikiLeaks disclosures more, according to the study. Mentions of “WikiLeaks” peaked at 121 seconds on CNN in one day in October 2016 and 90 seconds on MSNBC in one day in the same period.

“In 2016, the WikiLeaks releases were a gigantic story, covered across the political spectrum,” said Emerson Brooking, a resident fellow at the Digital Forensic Research Lab, who worked on the report. “In 2020, the Hunter Biden leaks are a WikiLeaks-sized event crammed into one angry, intensely partisan corner” of cable news television.

As for online news outlets, 85 percent of the 1,000 most popular articles about the Hunter Biden emails were by right-leaning sites, according to the analysis. Those articles, which were shared 28 million times, came from The New York Post, Fox Business, Fox News and The Washington Times, among other outlets. The researchers did not have a comparative analysis for the WikiLeaks revelations.

Oct. 22, 2020, 1:30 p.m. ET


President Trump has made his war on Big Tech a central piece of his re-election campaign. For months, he has accused Facebook and Twitter of attempting to rig the election by silencing criticism about his rival, former Vice President Joseph R. Biden Jr., and called for new regulations to rein in Silicon Valley giants.

But Mr. Trump is far from muzzled online. In fact, in recent weeks, he has widened his social media engagement lead over Mr. Biden.

In the past 30 days, Mr. Trump’s official Facebook page has gotten 130 million reactions, shares and comments, compared with 18 million for Mr. Biden’s page, according to data from CrowdTangle, a Facebook-owned data platform. That is significantly larger than the engagement gap for the preceding 30-day period, when Mr. Trump got 86 million interactions to Mr. Biden’s 10 million.

Mr. Trump trounced Mr. Biden on Instagram, too, getting 60 million likes and comments on his posts in the past 30 days, compared with 34 million for Mr. Biden. In the preceding 30-day period, Mr. Trump got 39 million likes and comments, while Mr. Biden got 13 million.

Mr. Trump also far outpaced Mr. Biden on YouTube, getting 207 million views on his videos in the last 30 days to Mr. Biden’s 29 million, according to SocialBlade, a data firm that tracks video performance. (SocialBlade’s data, which includes views on YouTube ads as well as unpaid videos, differs slightly from CrowdTangle’s Facebook and Instagram engagement data, which mostly counts engagement on unpaid posts.)

Social media performance is not a proxy for electoral success, of course, and Mr. Trump’s campaign would probably prefer to be leading in swing-state polls than on Facebook and YouTube. Engagement data also does not capture how many people view or click on posts, only how strong a reaction they elicit. And Facebook has argued that data about “reach” — the number of people who actually see a given post in their feeds — shows a more accurate picture of what is popular on the platform. (It does not, however, make this data publicly available.)

But it is useful to look at the president’s claims of partisan bias by tech companies in light of his sky-high engagement on those same companies’ platforms, because it hints at the nature of his complaints. His arguments are not the pleas of an underdog being silenced, but the threats of a star who wants to be allowed to keep his megaphone.

Some of the president’s posts in recent weeks have included misinformation about mail-in voting, dubious claims about Covid-19 and false and unproven allegations of corruption against Mr. Biden. Several of his posts have been taken down or had fact-checking labels applied to them. But these measures do not appear to have dented his account’s overall engagement.

The president’s strongest week on Facebook and Instagram came during his early October hospitalization for Covid-19, when well-wishers flooded his pages with supportive likes and comments. On YouTube, his best day came this week, when he took out a number of ads about accusations against Mr. Biden’s son Hunter, published by The New York Post. (The New York Times has not independently confirmed The Post’s reporting, and Mr. Biden’s campaign has dismissed the allegations as “Russian disinformation.”) Those ads performed well for Mr. Trump, and his channel got nearly 22 million views on Tuesday alone.

One bright spot for Mr. Biden is Twitter, where the former vice president has been performing well of late. According to Axios, which cited data from the media intelligence company Conviva, Mr. Biden has overtaken Mr. Trump in recent days when it comes to the average number of retweets and replies on his posts. (Per-post averages may be one social media contest that the president’s nonstop tweeting habit does not help him win.)

Another platform where Mr. Biden has beaten Mr. Trump? TV. His town hall on ABC last week got a bigger audience than Mr. Trump’s head-to-head NBC town hall, according to Nielsen.

And given Mr. Biden’s significantly smaller social media audience, he is punching above his weight. His Facebook page’s “interaction rate” — a measure of engagement that takes into account how many followers an account has — is currently more than twice as high as Mr. Trump’s.

Experts say that much of the disinformation is designed to suppress support for Joseph R. Biden Jr. and is likely to increase in the final weeks of the campaign.

QAnon conspiracy theory videos on YouTube. Homespun “remedies” for the coronavirus sent via text messages on WhatsApp. Socialist and communist memes on Twitter. Anti-Black Lives Matter posts on Facebook.

The universe of misinformation is not just widespread and vast. It is also bilingual.

For several months, researchers and Democrats have grown increasingly worried about misinformation in Spanish spreading through social media, talk radio and print publications that target Latino voters.

The problem has been particularly acute in South Florida, where a worrying loop of misinformation has gone from social media to mainstream and back again.

Some of the most insidious messages have tried to pit Latinos against supporters of Black Lives Matter, by using racist language and tropes. But the distortions hardly stop there.

Other news outlets have reported on the phenomenon in recent weeks, and taken together, the reports paint a picture of just how deep and wide the misinformation has spread.

Last month, Politico published an article examining efforts to paint the billionaire Democratic fund-raiser George Soros as the director of “deep state” operations and exploring anti-Black and anti-Semitic efforts that have spread across Spanish-language channels in the Miami area. A local Univision station soon followed with its own article.

A Florida public radio station found that conservative elected officials in Colombia were also helping to push the false idea that Joseph R. Biden Jr. is a clone of left-wing dictators in Latin America, such as Hugo Chávez.

This week, an article in the Boston Globe looked at how the spread of misinformation has driven a wedge between many younger Latino voters and their parents.

It is still too early to tell just what impact, if any, the misinformation is having on who shows up to the polls and who they vote for. But many experts worry that the efforts will only increase in the final days of the campaign, in an attempt to suppress the votes of some Latinos. Understanding how the misinformation spreads in any language could prove key in interpreting the election’s results.

Oct. 21, 2020, 5:31 p.m. ET

Most people know TikTok for its short-form viral videos, like break-dancing stars or relaxing cooking channels. But TikTok also has a less publicized, darker side — one where Holocaust deniers and QAnon conspiracy theorists run rampant.

This week, the company announced a series of policy changes restricting the types of content it would allow, including a crackdown on QAnon supporters and a prohibition of “coded” language that could serve to normalize hate speech across TikTok.

“These guidelines reflect our values, and they make clear that hateful ideologies are incompatible with the inclusive and supportive community that our platform provides,” TikTok said in a corporate blog post on Wednesday. The approach will not only target outright hate speech and Nazi paraphernalia, but less obvious references to white supremacist groups as well.

The changes expand on TikTok’s existing policies, which had long banned certain forms of hate speech and direct references to Nazism and white supremacy.

The company now, for instance, also bans “coded language and symbols that can normalize hateful speech and behavior.” Some examples include numbers, code words or visual cues that are widely seen as signals to white supremacist groups.

Earlier this week, TikTok announced a wider ban of posts and users related to QAnon, the pro-Trump conspiracy theory, which included expanding a ban on hashtags related to the digital movement.

TikTok’s changes follow in the footsteps of its larger and more popular contemporaries. Over the past month, Facebook and Twitter have each introduced a series of changes to policies on what types of speech are allowed on their services.

Together, the changes represent a retreat from these companies’ long-held embrace of unfettered free speech. In the past, Twitter employees referred to their company as “the free speech wing of the free speech party,” erring on the side of leaving all forms of objectionable content up on its site. That position has waned over the past two years, and especially in the past few months, with the company adding labels and in some cases taking down tweets entirely when they become an issue of public safety.

It is a distinct reversal for Mark Zuckerberg, chief executive of Facebook, in particular. One year ago, Mr. Zuckerberg championed mostly unfettered free speech on Facebook in a full-throated defense of his content policies in an address at Georgetown.

His views have changed abruptly. Over the last month, Facebook has banned advertising that supports anti-vaccination theories, further cracked down on QAnon’s presence and outlawed all forms of Holocaust denial on the platform. All three reversed positions Mr. Zuckerberg had previously defended, saying he might not personally agree with such views but that they would still be allowed on the site.

TikTok used its announcement on Wednesday to take a thinly veiled swipe at Mr. Zuckerberg’s about-face.

“We’re proud that we have already taken steps to keep our community safe, for example, by not permitting content that denies the Holocaust and other violent tragedies,” TikTok wrote.

Mr. Zuckerberg has personally spoken out against Chinese-backed companies and TikTok in particular, a start-up that also happens to be a threat to his business. President Trump has made similar arguments about TikTok, saying it posed a national security threat, and moved to ban the app in the United States. That fight may also be defused by a potential sale of TikTok’s business to Oracle, though the deal is not yet complete.

Oct. 20, 2020, 5:38 p.m. ET

For years, it was the subject of countless Fox News segments, talk radio rants, and viral right-wing tweets and Facebook posts. It spawned congressional hearings, Justice Department investigations, and investigations of those investigations. President Trump called it “the biggest political crime in the history of our country,” and suggested that its perpetrators deserved 50-year prison sentences.

Now, weeks before the election, “Spygate” — a labyrinthine conspiracy theory involving unproven allegations about a clandestine Democratic plot to spy on Mr. Trump’s 2016 campaign — appears to be losing steam.

The theory still commands plenty of attention inside the right-wing media sphere. But Mr. Trump’s quest to turn Spygate into a major mainstream issue in this year’s campaign may be coming up short. Data from NewsWhip, a firm that tracks social media performance, shows that stories about Spygate and two related keywords — “Obamagate” and “unmask/unmasked/unmasking”— received 1.5 million interactions on Facebook and from influential Twitter accounts last month, down from about 20 million interactions in May.


Part of Spygate’s fizzle may be related to the fact that three years on, none of Mr. Trump’s political enemies have been charged with crimes. Last year, a highly anticipated Justice Department inspector general’s report found no evidence of a politicized plot to spy on the Trump campaign — angering believers who thought the report would vindicate their belief in a criminal “deep state” plot against the president.

And this fall, the Spygate faithful got insult added to injury when a Justice Department investigation into one of their core concerns — whether Obama-era officials had acted improperly by “unmasking” the identities of certain people named in intelligence documents — came up empty-handed.

Few right-wing narratives have been as durable as Spygate, which has morphed over time into a kind of catchall theory encompassing various allegations of Democratic malfeasance. Fox News hosts including Sean Hannity, Laura Ingraham and Tucker Carlson went all in on it, as did Republicans in Congress, including Representative Devin Nunes of California and former Representative Trey Gowdy of South Carolina. But nobody embraced the theory like Mr. Trump, who has returned to it frequently to deflect attention from his own troubles, whether it was the Mueller investigation or his administration’s response to the Covid-19 pandemic.

As the election approaches, it’s worth looking back on Spygate’s evolution, both because it illustrates the way that partisan misinformation bubbles up through the right-wing media ecosystem, and, ultimately, because it shows how Mr. Trump’s obsession with a confusing, hard-to-follow narrative may have backfired as a campaign strategy.

Here is a (very) abridged version of the main waypoints in Spygate.

March 2017: Right-wing blogs and media outlets began discussing theories they called “DeepStateGate” or “Obamagate,” a reference to false claims that President Obama had tapped Mr. Trump’s phone.

May 2018: Mr. Trump seized on the news that an F.B.I. informant was sent to meet with members of his campaign staff, dubbing it “Spygate,” and said that it “could be one of the biggest political scandals in history.” Pro-Trump media outlets ran with the unsubstantiated claims. Top-ranking Republicans initially tried to distance themselves from the theory, although many would later embrace it.

April 2019: Spygate gained momentum when William P. Barr, the attorney general, testified to Congress that he believed “spying did occur” on Mr. Trump’s 2016 campaign, appearing to contradict previous Justice Department statements.

December 2019: Michael Horowitz, the Justice Department’s inspector general, released a long-awaited report detailing his findings about the origins and conduct of the F.B.I.’s Russia investigation. Mr. Trump’s media allies spent weeks hyping the report. (Sean Hannity predicted it would “shock the conscience.”) Followers of the QAnon conspiracy theory also latched onto the Horowitz report, predicting that it would set in motion indictments and mass arrests of the president’s enemies.

But the Horowitz report did not deliver a knockout punch. It revealed errors and lapses in some F.B.I. actions, but found no evidence of political bias in the F.B.I.’s Russia investigation, and rejected Mr. Trump’s suggestion that there was an organized Democratic conspiracy against him.

May 2020: As the country reeled from the Covid-19 pandemic, two developments brought Spygate (which had since been rebranded as “Obamagate”) back onto the national stage. First, the Justice Department dropped its criminal case against the former national security adviser Michael T. Flynn, a central figure in Spygate, who had pleaded guilty to lying to the F.B.I. about his conversations with a Russian diplomat.

Then, days later, a list of Obama administration officials who might have tried to “unmask” Mr. Flynn was declassified and released by Richard Grenell, the acting director of national intelligence. (“Unmasking,” in intelligence parlance, refers to a process by which officials can seek to reveal the identity of individuals who are referred to anonymously in intelligence documents. Unmasking is common, and such requests are made thousands of times a year.) Those named on the list included former Vice President Joseph R. Biden Jr., giving new fuel to Mr. Trump’s attempt to paint himself as the victim of a partisan conspiracy.

This was, in many ways, the closest that Spygate came to escaping the right-wing media ecosystem. Fox News devoted hours to the theory, which received more airtime than the coronavirus on some days. Mainstream news organizations tried to make sense of the theory, and Mr. Trump himself seemed obsessed with it, even though he often struggled to describe what the conspiracy actually was. In a flurry of more than 100 tweets sent on May 10, Mother’s Day, Mr. Trump raged about Obamagate, and repeated many of the debunked allegations about Obama-era misconduct, Mr. Flynn, and the Russia investigation.

By this point, many Trump supporters had pinned their hopes on two government reports, which they hoped would soon blow the entire scandal wide open.

The first was a sweeping investigation led by John Durham, the U.S. attorney from Connecticut who was tapped by Mr. Barr to look into the origins of the F.B.I.’s Russia probe.

The second was a smaller piece of the Durham investigation led by John Bash, a U.S. attorney Mr. Barr appointed to look into whether Obama-era officials had improperly “unmasked” Mr. Flynn and others.

October 2020: With less than a month to go before the election, Spygate/Obamagate continued to unravel. Mr. Barr has told Republican lawmakers that Mr. Durham’s report would likely not arrive before the election. And the unmasking investigation led by Mr. Bash, which many Spygate aficionados believed would lead to indictments and arrests of top Democrats, instead ended with no findings of irregularities or substantive wrongdoing.

Still, for Mr. Trump, hope springs eternal. He has continued his crusade, comparing Spygate to a “treasonous act” that should disqualify Mr. Biden from the presidency.

Oct. 20, 2020, 1:55 p.m. ET

WASHINGTON — When some viewers in Arkansas tuned in to their local television news station last week, they found a surprising report: President Trump had defeated Joseph R. Biden Jr. in the state — three weeks before Election Day.

KNWA, the NBC affiliate serving northwest Arkansas and the Arkansas River Valley, said it was all a mistake. The station had been working on its election-night graphics and mistakenly broadcast fabricated results on a banner at the bottom of the screen during its 5 p.m. local newscast.

In an email, Lisa Kelsey, the vice president and general manager of KNWA and other stations in the area, said the slip-up was inadvertent and only a local issue.

A producer activated the wrong control, which displayed “a crawl of information about the election” for about a minute, she wrote, adding that no election results are currently available.

“We take this mistake very seriously and will ensure it doesn’t happen again,” Ms. Kelsey said in an email.

But the episode highlighted concerns about how news organizations report and characterize incomplete returns on election night and whether, by mistake or design, erroneous or misleading data could shape perceptions about who won before the outcome can be officially declared.

The issue has been a particular concern for Democrats, who fear that Mr. Trump’s statements about election fraud and his reluctance to commit to accepting the outcome could lead him to seize on early returns showing him with a lead to assert that the election is over.


A fast-growing network of nearly 1,300 websites is filling a void left by vanishing local newspapers across the country. But many of their stories are ordered up by conservative political groups and corporate P.R. firms, a Times investigation found.

We are publishing the names of those sites so readers can see whether the sites target their area.


We compiled the list with the help of Global Disinformation Index, an internet research group, which analyzed Google advertising and analytics data imprinted in the sites’ digital codes to find links between the sites. We then confirmed that sites belonged to the network by analyzing their layouts, bylines, privacy policies and “About” pages, as well as by interviewing employees and examining internal records of the companies behind the sites.

Columbia University’s Priyanjana Bengani tallied a similar number of websites in August.

The network is run under a web of companies, though it is largely overseen by Brian Timpone, a former TV reporter who has sought to capitalize on the decline of local news organizations for nearly two decades. Mr. Timpone did not respond to multiple requests for comment.

As a guide, the different segments of the network include nearly 1,000 local news sites under the Metric Media brand; more than 50 business news sites; 34 news sites in Illinois under the Local Government Information Services brand; and 11 legal-news sites owned by a U.S. Chamber of Commerce group.

Some of the sites are dormant, and we culled from our list the ones that are now defunct. In the past, dormant sites have sprung to life when news hit the region they target, as happened with the Kenosha Reporter site after protests broke out in Kenosha, Wis., over the police shooting of a Black man there.

For months, public health experts — backed by guidelines from the World Health Organization and the Centers for Disease Control and Prevention — have stood firm on one resounding refrain: Against the coronavirus, masks work.

But on Saturday, Dr. Scott Atlas, one of President Trump’s most prominent science advisers, took to Twitter to say otherwise.

“Masks work? NO: LA, Miami, Hawaii, Alabama, France, Phlippnes, UK, Spain, Israel,” Dr. Atlas tweeted, rattling off a list of locations where masks had, in his view, failed to protect large swaths of the population.

The tweet was rapidly debunked by experts, who pointed to a wealth of evidence showing that face coverings reduce the risk that the coronavirus will hop from person to person. Masks, they’ve said, cut down on the amount of virus that is sprayed out of an infected person’s airway. They might also thwart inbound virus by loosely shielding the wearer’s nose and mouth.


Not long after, Dr. Atlas reshared his first tweet with a message that seemed to walk back his original statement: “Use masks for their intended purpose — when close to others especially hi risk,” he said. “Otherwise, social distance. No widespread mandates.”

On Sunday, Twitter removed Dr. Atlas’s first tweet, saying it violated the company’s policy against false or misleading information about the coronavirus that could lead to harm.

But the damage had already been done: The post had been retweeted at least 1,800 times, and generated over 7,300 likes and replies. The removal then set off a flurry of anti-mask posts, and accusations of tech censorship, across social media. On Facebook, several right-wing pages shared copies of the tweet, while a series of anti-mask and pro-Trump groups and pages claimed that Twitter was suppressing free speech.

Dr. Atlas, a radiologist with no background in infectious disease or public health, has come under heavy fire in recent months for his stances on the coronavirus, which has killed more than 219,000 Americans. Experts have widely dismissed and criticized his views on lockdowns and masking mandates after he has derided them as unnecessary and even harmful in the fight to halt the pandemic.

Dr. Atlas has also promoted the controversial idea that herd immunity — the point at which a virus can no longer spread easily because enough people have contracted it — can be reached when only a small sliver of the community at large has been infected.

In his since-deleted Saturday tweet about masks, Dr. Atlas cast doubt on their usefulness, saying there was little evidence that they reduce disease transmission. As a send-off, he shared a link to an indictment of face coverings published on Friday by the American Institute for Economic Research, a libertarian think tank that recently sponsored a declaration arguing that the coronavirus should be allowed to spread among young, healthy people to expedite herd immunity.

Masks, like all other protective measures, cannot halt the coronavirus on their own. But experts consider the accessories a crucial part of the public health tool kit needed to combat the pandemic, alongside tactics such as physical distancing and widely available testing.

Oct. 16, 2020, 6:37 p.m. ET


Here at Daily Distortions, we try to debunk false and misleading information that has gone viral. We also want to give you a sense of how popular that misinformation is, in the overall context of what is being discussed on social media. Each Friday, we will feature a list of the 10 most-engaged stories of the week in the United States, as ranked by NewsWhip, a firm that compiles social media performance data. (NewsWhip tracks the number of reactions, shares and comments each story receives on Facebook, along with shares on Pinterest and by a group of influential users on Twitter.) This week’s data runs from 9:01 a.m. on Friday, Oct. 9, until 9 a.m. on Friday, Oct. 16.

The most viral article on social media this week was one that social media companies tried to stop from going viral.

Facebook said it would reduce the visibility of an unsubstantiated New York Post article about Hunter Biden, the son of Joseph R. Biden Jr., until a third party could fact-check it. Twitter initially banned all links to the article, saying it made the move because the article contained images showing private personal information and because it viewed the article as a violation of its rules against distributing hacked material. But the article still traveled widely on social media, receiving more than two million interactions.

Here is the full list of the week’s most-engaged stories:

An obituary for Ms. Ferrell, who played Berta, the housekeeper, on “Two and a Half Men,” was shared widely by the show’s many fans.

Mr. Collins’s resolution, which claimed that Representative Nancy Pelosi “does not have the mental fitness” to continue as House speaker, was a largely meaningless symbolic gesture of opposition. But it was red meat for conservatives on Facebook, for whom Ms. Pelosi is an engagement-bait villain.

Another break from politics, this one about a planned revival of the hit TV show “Dexter,” got nearly a million interactions.

Judge Amy Coney Barrett’s Supreme Court confirmation hearing was the subject of two Top 10 articles this week. This one, from the right-wing news site The Daily Wire, focused on her empty notepad.

A White House proclamation about Columbus Day, which took aim at “radical activists” who “have sought to undermine Christopher Columbus’s legacy,” was widely shared by right-wing pages on Facebook and by groups like the National Italian American Foundation.

A Times article about unofficial ballot boxes that Republican operatives placed in California was shared by several large left-wing Facebook pages, including Occupy Democrats and Ridin’ With Biden.
