The online information environment is getting demonstrably worse. As the tragic events at Bondi Junction unfolded over the weekend, one esteemed commentator said of X (formerly Twitter): “this site is a sinkhole of agendas and prejudice. Its utility as a monitor of breaking events is gone”.
Twitter is not the only platform with problems. And critically, we cannot expect more from social media platforms in this country until proper pressure is applied to their woeful attempts at transparency, realised through genuine accountability mechanisms with actual teeth.
The road to informational hell is paved with good intentions, and in the lightly regulated world of digital platforms, those intentions appear as “transparency theatre”. As social media companies face mounting questions about what they’re doing to address misinformation, much of the global public is fed a loop of corporate PR announceables about largely unauditable policy changes.
Big talk but light detail from YouTube
YouTube was featured recently in The Australian in a Google-sponsored piece about its efforts to remove “thousands and thousands” of harmful videos. Statements like these vividly illustrate the “denominator problem”: the efficacy of YouTube’s removals means very little without insight into how many violative videos are circulating on the platform in the first place, or any metrics on its internal content moderation processes. In its most recent Australian transparency report, Google claims it removed more than 2,000 YouTube videos originating from Australian IP addresses for violating its misinformation policies over a span of six months. We could guess the annual figure is around 4,000, or roughly 7,000 if the reported COVID misinformation removals are included.
So the “thousands and thousands” figure appears pretty standard, largely unchanged from previous years. Is it enough, and is it even meaningful? Four thousand removals a year would mean one thing if violative videos numbered in the tens of thousands, and quite another if they numbered in the millions. Google doesn’t share much about the size of YouTube’s Australian market. We do know that, globally, 500 hours of video are uploaded to YouTube every minute. If we had a better platform transparency environment, we could request local usage data to get a better picture of what the “denominator” is. We could also compel YouTube to reveal quantifiable results from its internal content moderation processes: how much violative content is detected, who detects it (automated systems, content moderators, “trusted flaggers”, fact-checkers or regulators), and what the final outcomes are. Data like this is becoming routine under European legislation but remains out of reach for Australians.
Instagram ‘bins politics’
Meta recently announced an apparent pivot away from politics: by default, Instagram’s and Threads’ recommender systems will not recommend content deemed political. As Instagram’s chief said: “politics and hard news are important … But from a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks”. As Tama Leaver has observed, a consequence of Meta putting political content in the “too hard basket” is that creators who rely on the platform for distribution will turn away from generating political content, “the definition of a political chilling effect”.
So the days of citizen journalists and political commentators on Meta’s platforms may be economically numbered. There are other implementation questions: how does the platform determine what is “political”, and will we ever find out? Researchers in Europe may be able to interrogate the platform’s algorithms at some level through data access requests mandated by the Digital Services Act. In Australia, unsurprisingly, platforms cannot be compelled to provide this data.
Shifting from theatrics to metrics
When the government released an exposure draft to tackle misinformation last year, there was an outcry. But the anguish over free speech arguably missed a far bigger target: whether digital platforms in this country can remain in the gentle terrain of self-regulation, or whether there should be public interest scrutiny of the decisions they make about online information and how they make them. Free speech crusaders would have done well to realise that legislatively mandated transparency requirements on how platforms handle misinformation are actually a win for free speech.
So we are left with self-regulation with no real oversight and no real consequences for platform acts or omissions. Digital Industry Group Inc (DIGI), the industry body that wrote and administers the “Code of Practice” on misinformation and disinformation, appeared to accept this week that code signatories can mislead the public in their transparency reports so long as their statements are not “materially false” (disclosure: this author’s employer made the complaint to DIGI). Putting aside that Australian corporate law prohibits “misleading conduct”, and that “material falsehoods” are presumably a notch or three below that standard, interrogating these falsehoods requires platforms to give regulators proper data on their technical systems.
Until regulators have that data, we are marooned in the hellscape of corporate PR and wilful ignorance about how any of these enormously influential companies’ online systems actually operate.