There’s been a particularly horrific video doing the rounds on TikTok. But while schools and parents have been warned to keep vigilant, and Prime Minister Scott Morrison has demanded TikTok remove the video, there’s very little the government can actually do to get rid of it, despite new laws restricting violent content online that were passed in the wake of last year’s Christchurch shooting.
That’s because, according to a spokesperson for the eSafety Commissioner, the video, which depicts a suicide, does not fit within the law’s definition of abhorrent violent material.
It’s another reminder that both regulators and tech companies are, for different reasons, utterly useless at cleaning up the internet.
The Christchurch video
The most prominent recent example was the Christchurch massacre last year. The shooter wanted people to watch his attack, which was filmed like a video game and livestreamed on Facebook. It quickly spread across the internet, on YouTube and Twitter, leaving the biggest, most powerful tech companies in the world playing catch-up.
A version of the video stayed up on Facebook for six hours. Many wildly irresponsible TV news companies broadcast grabs from the video, making it harder for some tech platforms’ algorithms to work out what to remove.
The game
Australia’s only real legislative response to the Christchurch shootings was to pass laws restricting violent material on the internet. As the TikTok fiasco shows, they’ve been pretty ineffective, and generally do nothing to stop the spread of reactionary ideas.
As Crikey reported last year, they could not be used to remove the website selling a far-right computer game that allowed users to re-enact the Christchurch shooting.
QAnon
In August, Facebook finally began cracking down on groups and pages promoting QAnon, the sprawling conspiracy that claims US President Donald Trump is saving the world from a cabal of Satanic paedophiles. But according to the University of Tasmania’s Kaz Ross, a seasoned QAnon watcher, the purge had little impact on the community in Australia.
“A number of the bigger Facebook groups didn’t openly identify as Q groups at that point, so weren’t affected,” Ross said.
“Lots of smaller groups have continued, and even when the original group has been thrown off Facebook, they’ve managed to create new groups.”
But while much of the QAnon infrastructure did remain on Facebook, the threat was enough to drive many in Australia onto Telegram, an anonymous, far more radical online echo chamber frequented by neo-Nazis and white supremacist types.
Child pornography
Big Tech platforms have a paedophilia problem. On YouTube, predators use comment sections to “advertise” and sexualise seemingly innocent videos of children. Facebook is facing lawsuits from former moderators, in part over trauma caused by the company’s inability to get rid of child pornography. Recently, Australian Federal Police Commissioner Reece Kershaw accused Facebook of contributing to the rape and torture of children.
You can’t regulate for malice
The common thread across these disparate issues — suicide, terrorism, far-right propaganda, child pornography — isn’t an unwillingness by platforms to remove content that, by wide agreement, needs to be removed.
Major social media platforms devote substantial resources (though, invariably, never enough to satisfy critics) to identifying and removing content both manually — a gruesome and deeply distressing job for the workers concerned — and using complex algorithms designed to identify content automatically.
Rather, it’s the willingness of large numbers of malicious online actors to defeat those attempts, often with endless variations in how content is presented, designed to evade algorithmic blocks. Those techniques were often honed in copyright infringement, another area where platforms have worked to take down unwanted content.
The perpetrators might be terrorists, right-wing extremists or child abusers, or they might simply be trolls who derive shits and giggles from disseminating transgressive content for the specific reason that wider society wants to block it.
Stopping them is much harder than simply shutting down a site (e.g. 8chan) used as a meeting place for aberrant communities, or banning a high-profile individual from a platform (e.g. Milo Yiannopoulos). That process might be something akin to whack-a-mole — the trolls and criminals find another site to congregate on — but the lives of the moles are significantly disrupted. Short of shutting down Facebook, YouTube, TikTok, Twitter or this week’s social media target, it’s impossible to regulate for stupidity and malice.
This is what we gave up for the internet — the ability of large corporations and governments to play gatekeeper on content. That role now falls to consumers themselves. And many consumers are deeply, deeply unpleasant people.
The internet gives them a platform like it gives the rest of us a platform.