UK to fight Russia’s ‘hostile online war’ by forcing internet companies to suppress misinformation

The British government is pushing to make “foreign interference” such as disinformation a priority offense under its online safety bill, forcing tech companies to remove infringing content shared by foreign state actors.

The move follows recent legislation announced by the UK designed to deter foreign state actors seeking to “undermine the interests of the UK”, which includes targeting attempted foreign interference in elections with higher maximum penalties. The proposed legislation comes shortly after MI5 warned that a Chinese agent with ties to the Chinese Communist Party (CCP) had infiltrated parliament, while the United Kingdom has also been intensifying its efforts to counter Russian disinformation and “troll factories” seeking to spread misinformation around the war in Ukraine. And then there was the prank on Ben Wallace, the UK’s Secretary of State for Defence, by Russian hoaxers posing as Ukrainian Prime Minister Denys Shmyhal.

It’s also worth noting that the UK is no stranger to misinformation controversy, perhaps especially around Russia’s alleged interference in the 2016 Brexit referendum, which saw the United Kingdom leave the European Union. A subsequent report revealed that the British government and intelligence agencies conducted no real assessment of Russia’s attempts to interfere with the referendum, despite the available evidence.

Russia and “hostile online warfare”

While today’s announcement applies to misinformation from all foreign state actors, UK Digital Secretary Nadine Dorries specifically pointed to the recent “hostile online war” emanating from Russia.

“The invasion of Ukraine has once again shown how Russia can and will weaponize social media to spread disinformation and lies about its barbaric actions, often targeting the very victims of its aggression,” said Dorries in a statement released by the Department for Digital, Culture, Media and Sport. “We cannot allow foreign states or their puppets to use the internet to wage hostile online warfare unhindered. That is why we are strengthening our new internet security protections to ensure that social media companies identify and eliminate state-sponsored misinformation.”

This essentially sees the UK tightening the ties between two new bills currently before Parliament: the National Security Bill, which was introduced during the Queen’s Speech in May as a replacement for existing espionage laws, and the Online Safety Bill, which includes new rules on how online platforms should handle questionable online content. Under the latter bill, which is expected to come into force later this year, online platforms such as Facebook or Twitter would be required to take proactive action against illegal or “harmful” content, and could face fines of up to £18 million ($22 million) or 10% of their worldwide annual turnover, whichever is greater. On top of that, the government regulator Ofcom would have new powers to block access to specific websites.

Priority offense

As a “priority offence”, disinformation joins a host of offenses already covered by the Online Safety Bill, including terrorism, harassment and stalking, hate crimes, human trafficking and extreme pornography.

With this latest amendment, social media companies, search engines and other digital entities that host user-generated content will have a “legal obligation to take proactive and preventative measures” to minimize people’s exposure to state-sponsored misinformation that seeks to interfere with the UK.

This will in part involve identifying fake accounts created by groups or individuals representing foreign states for the express purpose of influencing democratic or legal processes. It will also cover the dissemination of “hacked information to undermine democratic institutions”, which — while not entirely clear — may include specific content surreptitiously obtained from the UK government or political parties. So this could mean that Facebook et al. will be forced to remove content if it contains embarrassing revelations about prominent UK politicians.

But if we’ve learned anything over the past decade about managing user-generated content online, it’s that it’s incredibly difficult to do at scale — and even then, it’s often not easy to tell whether a user is legitimate or a bad actor employed by a foreign government. Facing the prospect of gargantuan fines, it’s a challenge that could see a lot of legitimate online content or accounts caught in the crosshairs as internet companies struggle to comply with the legislation.