According to a new report released by Facebook (FB), Russia and Iran are the top two sources of coordinated fake activity.
Foreign and domestic covert influence operators have shifted their tactics and grown more sophisticated in response to social media companies’ efforts to crack down on fake accounts and influence operations, according to Facebook’s report, which was released on Wednesday.
According to the report, Facebook has removed more than 150 networks of coordinated fake activity since 2017. Russia is linked to 27 of those networks and Iran to 23; nine originated in the United States.
According to Facebook’s report, the United States remains the primary target for foreign influence campaigns, with 26 such efforts from a variety of sources from 2017 to 2020. (Ukraine is in a distant second place.)
However, it was US domestic actors, not foreign operatives, who were increasingly responsible for sowing disinformation during the 2020 election season. According to Facebook’s report, in the run-up to the election, the company removed as many American networks targeting the US with so-called coordinated inauthentic behavior (CIB) as it did Russian or Iranian networks.
“Most notably, one of the CIB networks we found was operated by Rally Forge, a US-based marketing firm, working on behalf of its clients including the Political Action Committee Turning Point USA,” the report said. “This campaign leveraged authentic communities and recruited a staff of teenagers to run fake and duplicate accounts posing as unaffiliated voters to comment on news Pages and Pages of political actors.”
The Washington Post first reported on the campaign in September 2020. In a statement to the Post at the time, a Turning Point spokesman described the effort as “sincere political activism conducted by real people who passionately hold the beliefs they describe online, not an anonymous troll farm in Russia.” The group declined to comment when CNN reached out at the time.
Another US network, which Facebook announced it had shut down in July 2020, had ties to Roger Stone, former President Donald Trump’s friend and political adviser. The network maintained more than 50 Facebook accounts, 50 pages, and four Instagram accounts, and reached more than 260,000 accounts, according to the company.
(After Facebook removed him, Stone announced the ban on the alternative social media site Parler, saying in a statement: “We have been exposing the railroad job that was so deep and so obvious during my trial, which is why they must silence me. As they will soon learn, I cannot and will not be silenced.”)
Following the 2016 election, as revelations about Russia’s attempts to meddle in the US democratic process surfaced, the presence of fake and misleading content on social media became the dominant story hounding tech platforms such as Facebook, Twitter, and YouTube. Foreign influence campaigns have attempted to sow division within the electorate by posing as US voters, targeting voters with misleading digital advertisements, creating false news stories, and other methods.
The revelation of these campaigns has resulted in increased political and regulatory pressure on Big Tech, as well as ongoing concerns about the industry’s disproportionate power in politics and the economy. Many critics have since called for the breakup of large tech companies, as well as legislation governing how social media platforms manage content on their platforms.
Facebook, for example, has responded by hiring more content moderators and enacting new platform policies on fake activity.
In a separate announcement on Wednesday, Facebook said it is increasing the penalties it imposes on individual Facebook users who share misinformation that has been debunked by its fact-checking partners.
When a user shares a post containing debunked claims, Facebook’s algorithms demote it in the news feed, making it less visible to other users. Under Wednesday’s change, however, repeat offenders may have all of their posts demoted going forward.
Facebook had already been demoting pages and groups that shared fact-checked misinformation at the account level, but Wednesday’s announcement extends that penalty to individual users for the first time. (Political figures are exempt from Facebook’s fact-checking program, so their accounts are not affected by the change.)
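In effect, the change layers an account-level “repeat offender” rule on top of the existing per-post demotion. The sketch below is only a rough, hypothetical illustration of that logic; the function, strike threshold, and demotion factors are assumptions for the sake of the example, not details from Facebook’s report or its actual ranking system.

```python
# Hypothetical sketch of a repeat-offender demotion rule.
# All names and numbers below are assumptions for illustration only;
# they do not describe Facebook's actual ranking system.

STRIKE_LIMIT = 3        # assumed number of fact-check strikes before account-level demotion
POST_DEMOTION = 0.5     # assumed ranking multiplier for a single debunked post
ACCOUNT_DEMOTION = 0.7  # assumed ranking multiplier for every post by a repeat offender


def rank_score(base_score: float, post_is_debunked: bool, author_strikes: int) -> float:
    """Return a feed-ranking score after applying demotion penalties."""
    score = base_score
    if post_is_debunked:
        score *= POST_DEMOTION       # demote the individual debunked post
    if author_strikes >= STRIKE_LIMIT:
        score *= ACCOUNT_DEMOTION    # repeat offender: demote everything they share
    return score


# Example: a new, non-debunked post from a user with four prior strikes
# is still ranked lower than the same post from a user with none.
print(rank_score(base_score=1.0, post_is_debunked=False, author_strikes=4))  # 0.7
print(rank_score(base_score=1.0, post_is_debunked=False, author_strikes=0))  # 1.0
```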
Even as Facebook’s moderation efforts have improved, many covert disseminators of misinformation have changed their tactics, the report said. In an increasingly complex game of cat and mouse, threat actors are adapting to Facebook’s enforcement by building more tailored, targeted campaigns that can evade detection and by outsourcing their operations to third parties.
“So when you put four years’ worth of covert influence ops together, what are the trends?” Ben Nimmo, a co-author of the report, wrote on Twitter Wednesday. “More operators are trying, but more operators are also getting caught. The challenge is to keep on advancing to stay ahead and catch them.”