
The future of disinformation — this time in Mississippi

Russian President Vladimir Putin gestures at the Future Technologies Forum in Moscow on Feb. 14, 2024.

One of the strangest glimpses into the future of information warfare might be what’s happening in Jackson, Miss., where a man named Ramzu Yunus is trying to launch an independent nation for people of African descent on Facebook.

His secessionist movement — while very local and very fringe — already has the backing of an intricate, global cross-platform propaganda network called Russosphere.

Last year, Yunus tried to drum up support for a similar separatist movement in Detroit, and has touted support from Russia on his Facebook page. In Texas, a different Russian influence campaign is amplifying calls for a “Texas secession” and an imminent “civil war” over the border crisis.

What might seem from the outside like an eccentric group of grassroots campaigns is a new front for a global pro-Russia disinformation operation — one that extends to the developing world as well, according to a new report by UK-based AI intelligence group Logically.

Logically’s researchers, who specialize in tracking disinformation networks across social media platforms like Telegram and Facebook, say these online campaigns follow a pattern they’ve seen in Africa, where the Kremlin is stoking anti-colonial sentiment against Western powers.

Nick Backovic, one of the report’s lead researchers, said Yunus’ blatant pro-Russia claims and his focus on reaching untapped U.S. audiences with anti-West messaging form an “easily replicable framework” that could “snowball” and potentially destabilize populations across the country.

Like the African campaigns, Yunus’ American campaigns are racially loaded: They try to pit people of African descent against the U.S. government. In African countries, the campaigns pit local populations against Western colonial powers, specifically France.

Yunus claims financial and ideological support from the Russian government and PMC Wagner, the Russian paramilitary group formerly run by the late Yevgeny Prigozhin and now closely affiliated with Vladimir Putin.

If this all sounds familiar, it’s because Prigozhin was also behind the Internet Research Agency, the Russian troll farm that stoked civil discord in the U.S. and elsewhere during the 2016 elections.

Logically said that Yunus confirmed its findings that he had been speaking with Kremlin officials about his separatist campaigns, and that he showed the researchers an invitation from a deputy of the Russian legislature to speak on Russia-Africa relations. (The Kremlin did not confirm the report, Logically told DFD.)

Contacted by POLITICO, the State Department declined to comment on Russosphere or Yunus’ campaigns. The online propaganda effort in Texas isn’t a Yunus operation, but it speaks to the variety of ways deliberate disinformation is still being seeded in hyperlocal ways. It’s part of the massive, ongoing Russian state-linked campaign called “Doppelganger,” which uses bot accounts on social media platforms — primarily X — to push narratives that exploit lightning-rod issues like the border crisis.

The Texas operation aims to stoke pro-Trump and anti-Biden sentiment, among other narratives, according to the Russian disinformation research group Antibot4Navalny, which is tracking the influence network. (The group is unaffiliated with its namesake, the late dissident Alexei Navalny.)

Over Signal, an Antibot4Navalny researcher told DFD that selling separatism “is long known as one of Kremlin’s favorite tactics to ‘divide & conquer,’ along with exploiting immigration — both stimulating influx of it and playing with fear of it from conservative audiences.” The researcher requested anonymity for the sake of personal safety.

“Lately, some episodes worth mentioning for the U.S. are exploiting discontent specifically from people of color,” the Antibot4Navalny researcher added. “Beyond increasing tensions between social groups and pressure on authorities, separatism can help with whataboutism: ‘Why should Russia give freedom to Crimea / ex-Ukrainian territories if the U.S. does not do the same to Texas, Michigan nor Mississippi?’”

And if that move sounds familiar — that’s because there’s a very deep history of the Soviet Union exploiting American racial tensions for its own advantage, fomenting separatism in a nation deeply divided over civil rights.

A year into monitoring Yunus’ U.S. separatist movements online, Backovic says it’s important not to dismiss the narratives he is peddling to specific communities in the U.S.

“If we look back, sometimes these things can seem very wacky at the beginning and still have the potential to do damage,” he said. “If you look at QAnon on 4chan and how that blew up… just because it seems wacky, doesn’t mean it’s not dangerous.”

Drone defenses on the radar

In diplomatic circles, Brussels is well-known as a hotbed for spying. But drones peeping on sensitive public buildings open up a new front for foreign adversaries looking to access the very core of the E.U. — so much so that the European Commission is looking for companies that could deploy their anti-drone defenses at its Berlaymont headquarters in Brussels, POLITICO Europe’s Bjarke Smith-Meyer reported this morning.

“The Commission, together with other stakeholders and the host country [Belgian] authorities, are looking at possible measures to address the threat,” a spokesperson said. Authorities in Brussels have mostly banned drones from flying around the Belgian capital without prior authorization. It’s not clear yet what kind of anti-drone technology the Commission will deploy to fend off surveillance and attack drones. –Christine Mui

Read my lips: no new deepfakes

There’s still an unsettled question hanging over ongoing attempts to regulate AI-generated content ahead of the 2024 elections: Who should make the rules?

The fake Joe Biden robocall in New Hampshire last month jolted policymakers into high gear to prevent more instances of what’s become the standout AI controversy of the 2024 campaign. But as Mallory Culhane reported for Morning Tech (for Pros!), it’s state lawmakers and federal agencies — like the FTC and FCC — that have taken the lead on regulating AI use in election-related content, while Congress dithers on passing federal legislation.

New polling from the Artificial Intelligence Policy Institute shared exclusively with DFD found that an overwhelming majority, 84% of respondents, supported holding AI companies, not just individuals, liable when bad actors use their software to generate misleading political content. That’s true across party lines, with 74% of Democrats, 71% of Republicans, and 63% of independents saying they would back legislation reflecting that.

In fact, three-quarters of all respondents would go so far as to call for making attempts to use deepfake technology to influence elections illegal — with broad bipartisan support. The FCC’s recent vote to ban AI-generated voices in robocalls got a big nod of approval from 81% of polled voters. The think tank surveyed 1,103 American voters with a margin of error of 4.6 percentage points. –Christine Mui

Source: Politico