
This year is a landmark one for democracy, with more than 4 billion people in more than 60 countries eligible to vote in elections. There’s a grim poetry in this historic milestone coming at a point when our information ecology is at its most fractious.
At the end of last week, American artificial intelligence company OpenAI announced Sora, its text-to-video model. Sora, when it is publicly available, will allow users to generate lifelike video from text prompts — with example outputs on the website ranging from sweeping drone shots of a patch of rocky Californian coastline to a woman walking down a densely crowded street in Tokyo. These videos may not stand up to close scrutiny, but amid the endless firehose of content on today’s internet, it’s hard to imagine them receiving that level of examination.
Sora is the latest in a legion of new generative-AI tools that have emerged from recent advances in neural network development, many spearheaded by researchers at OpenAI. These tools, now also offered by major big tech firms like Meta, Google and Microsoft, leverage vast reservoirs of computing power to enable millions of users to frictionlessly conjure text, video and audio from the digital ether. The outputs of these new platforms at this point have spread far faster than our ability to develop and implement systems to verify their artificial nature.
It’s a problem of scale: when the marginal cost of producing content falls to near zero, the volume of that content increases exponentially.
Most generative-AI deceptions exist on a harmless continuum from amusing to annoying. In March last year, a photo went briefly viral that depicted Pope Francis decked out in an alarmingly stylish Balenciaga puffer jacket in lieu of his usual papal vestments. It was an AI fake, generated with the latest release from generative-AI startup Midjourney.
But there are good reasons to be concerned about political impacts on states other than the Vatican. In 2023’s elections in Slovakia and Argentina, for example, deepfaked audio spread on social media depicting political candidates and government figures saying things they did not say. The actual lasting impact of these generative-AI interventions is hard to quantify, but they demonstrate an obvious point: if you make it vastly easier to fake images, audio and video, then bad actors will avail themselves of the opportunity.
Generative AI also poisons the well when it comes to things that did occur. Politicians now have a readymade excuse when confronted with video or audio evidence of misdeeds: it’s a deepfake. Last year, a Taiwanese lawmaker suggested a grainy video that purportedly depicted him engaged in an extramarital affair was AI-generated. In July, a politician from India’s ruling Bharatiya Janata Party mounted a similar defence when audio of him accusing his own political faction of corruption leaked online. Despite reporting, the actual truth of the matter in both of these examples remains unresolved.
There’s an argument to be made that generative AI is a symptom of a broader collapse in our traditionally truth-bearing institutions, rather than some new and unique problem for democracy. The past decade has seen numerous destabilising political events, with misinformation and disinformation blamed as the culprit. In 2016, Brexit and the election of Donald Trump led to an international discourse about fake news, social media “filter bubbles” and state disinformation campaigns. Populist anger at COVID-19 lockdowns and vaccinations was similarly blamed on online misinformation, with institutions like the World Health Organization mounting public information campaigns against the pithily named “infodemic”.
It may well be the case that this is a slow death for the existing media and political establishment — or “regime”, as the new torchbearers of free speech would say — under a technological onslaught that began in earnest when Google started indexing and ranking the web for public consumption. We might ask not why people are inclined to believe fake images and videos that cross their internet feeds, but instead why they distrust anyone telling them otherwise.
Sure thing. Now that generating and distributing any amount of nonsense is almost effortless and costs next to nothing, of course there’s going to be unfathomable amounts of it.
All the old arguments for free speech were of their time and cease to be relevant in these conditions. This is not free speech in the way it was understood. It is speech that has become so cheap it has finally lost all value. It is worthless, and any speech that still has value is lost in the vast ocean of crap. This is being deliberately exploited.
Hannah Arendt wrote this in The Origins of Totalitarianism, although she never had the pleasure of seeing what our latest technological cleverness could do to information:
‘In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and that nothing was true. Mass propaganda discovered that its audience was ready at all times to believe the worst, no matter how absurd, and did not particularly object to being deceived because it held every statement to be a lie anyhow. The totalitarian mass leaders based their propaganda on the correct psychological assumption that, under such conditions, one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.’
There’s a watered-down quote from Voltaire that goes ‘Those who can get you to believe in absurdities can also get you to commit atrocities.’ And we sit and watch as the means to get the gullible to believe whatever they’re fed grow more and more convincing.
And the flip side of that is the nihilism of believing nothing and lapsing into a sort of negativity <insert Nietzsche quote here> *goes to LLM*:
“Nihilism is… not only the belief that everything deserves to perish; but one actually puts one shoulder to the plough; one destroys.” – Friedrich Nietzsche, The Will to Power
I have to add this: Ironically, I was recalling that The Will to Power is not regarded as a trustworthy source of Nietzsche’s work:
On this, LLM says:
—
Nietzsche never published a book called “The Will to Power”. The work was compiled and published posthumously by his sister, Elisabeth Förster-Nietzsche, from his notebooks of the late 1880s.
Elisabeth was a staunch anti-Semite and German nationalist who later aligned herself with the Nazi party. There are strong accusations that she edited and arranged the notes to promote her own ideology, distorting Nietzsche’s original intentions.
Theoretically, wouldn’t artificial ‘intelligence’ be an improvement on the dearth of the natural stuff now?
Only if it was not the thing currently called artificial intelligence, which is a fairly silly name for a vast database of human ‘intelligence’ that the IT system samples, rearranges and spits out when prompted. It’s rather like reheated meals knocked together from a huge store of leftovers.
Also, it’s a tool or amplification device. You can ask “how can we reduce carbon emissions from xxx”, or “how can I increase productivity on my marginal fracking wells”.
Love the pic – Trump’s arm must be very very long!
There is a certain level of hysteria in the media about AI deceptions, deepfakes etc. Misinformation and disinformation have been ubiquitous since the dawn of humanity.
The oldest, greatest, and most skilled purveyors of disinformation and misinformation are and have always been religions and cults, followed in recent centuries by extreme political movements, and then by advertising and the media.
In recent years there has been a breakdown in trust of institutions formerly regarded as purveyors of truth – and not without good reason. People need to be encouraged to be more astute in being able to differentiate between fact and BS. Despite the howling of postmodernists, there are such things as objective facts, subjective opinions and nonsense.
Well…yes, but no. That completely misses the point. Until now it took some serious effort to produce and disseminate whatever someone wanted to say. Speaking to a crowd, with only your voice, takes some doing, and the crowd cannot be very big and must be well behaved if they are all going to hear, but that was just about the only way to do it before literacy was common. Until a few centuries ago, putting anything down in writing was seriously expensive, and making a copy was almost as difficult and just as expensive. Things changed dramatically with printing presses and paper, but still there was a cost to producing and distributing that put some brakes on total chaos, and getting people to read what was produced took some doing. Radio was the next big thing, and its impact should not be underestimated. It had a huge impact on politics from the 1920s onwards; the first truly great exponents include a chap called Goebbels and his pal the Austrian corporal. But still, the floodgates had not fully opened. Even film and TV could not do that.
But today we are in a qualitatively and quantitatively different environment. Blaming people for lacking the ability to ‘differentiate between fact and BS’ is futile. The volume of BS easily swamps the facts. Producing and disseminating crap that for any normal person is objectively indistinguishable from genuine recordings is not only possible, it is easy and cheap for just about anyone.

This works in tandem with the algorithms of anti-social media that rapidly and decisively separate people into isolated bubbles where no contrary opinions penetrate and then feed them increasingly concentrated and extreme versions of their own prejudices in order to maximise clicks and engagement, which is all that matters to those getting rich by it. It’s at least as addictive as pokie machines and just as pernicious. Many of those you believe should pick out the facts from the BS never get to see the facts at all in their little Fox News universe or whatever, so even if they are brilliant at identifying facts it cannot make a difference. All this is the greatest gift ever to all who are hostile to society, whether they are external enemies or acting from within.
You are right, however, about extreme religious and political movements (pretty much the same thing, from this point of view) as particularly skilled purveyors of disinformation and misinformation. They exploit it brilliantly for their own benefit. They recognise the importance of keeping all their followers on the same page and isolating them from any contrary opinions or ‘alternative facts’. Real facts and objectivity are irrelevant concepts for this purpose; all that matters is who is orthodox and who are heretics or infidels. Those in charge of the party / religion / cult have complete power over their followers. This more or less guarantees they must prevail over those who weakly blather on about free speech, toleration, respect for differences and so on, which can only lead to splits, confusion and disarray, all made much worse by anti-social media. It inevitably leaves everyone who tries to carry on that way helpless against those who band together, willingly or not (there is often coercion), to blindly follow one system of belief.