Can AIs Write Op-eds?

September 03, 2024 22:08

A fortnight ago, a student asked me the following question: “Brian, what if, one day, AIs could write better op-eds than you?”

The jaded and deluded self in me was immediately tempted to retort, “What, do you think I am genuinely replicable or imitable? Is my writing so fundamentally basic that it would be possible for a Generative AI to recreate its subtle idiosyncrasies and diction?”

Yet common sense compelled me to look beyond my instinctive intuition. Indeed, I was drawn to the possibility that I might be outdone by an AI. Hence I opted for the simplest and best means of uncovering the truth: experimentation.

A few nights ago, I copied and pasted my English column from last week on EJ Insight – on the predictability of politics – and asked ChatGPT to churn out an article that “broadly resembled the text I had sent you, but with the focus of the discussion being on whether AIs could replace columnists.” The end result began as follows:

“If you spoke with a journalist and a tech enthusiast in 2023, there would be a 75% chance that the former would express concern about AI taking over writing jobs, whilst the odds of the latter celebrating the same would be 90%. Roll back the clock by just a decade, and the odds would have been drastically different”…
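For the curious, here is how one might replicate the experiment programmatically – a minimal sketch assuming the OpenAI Python SDK (I simply used the ChatGPT interface; the file name and model below are illustrative placeholders):

```python
# A sketch of the experiment via the OpenAI API rather than the chat
# interface. The file name and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical path to the original column's text.
with open("predictability_of_politics.txt") as f:
    column = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model will do
    messages=[{
        "role": "user",
        "content": (
            "Write an article that broadly resembles the text below, "
            "but with the focus of the discussion being on whether AIs "
            "could replace columnists.\n\n" + column
        ),
    }],
)
print(response.choices[0].message.content)
```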

The op-ed generated came across as somewhat confused and no less confusing, and perhaps a tad too meandering and equivocating for my liking (though one could also argue that this is just my style – touché). Yet it was by and large grammatically correct, occasionally humorous, and rich in its range of vocabulary and syntactic structures. At first glance I would have been convinced it came from my hands (or furiously typing fingers), though a few points gave away the not-so-obvious truth: that the content had been generated by AI.

The first was the absence of a bold and independent stance in the passage supplied. With the probability of key words occurring in close succession being a vital determinative factor in the generative process, it is only understandable that minority viewpoints – such as the assertions that all op-ed writers are evil, or that all AI should be banned given its dangers – are not particularly well favoured. It would be unlikely for “If you spoke with a journalist” to be followed by “a sandwich caught in between two rocks in the Grand Canyon” – at least, as compared with the continuation above; probability-based methods of generation hence give rise to content bounded by the parameters of ‘reasonableness’.
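To make the mechanism concrete, here is a toy sketch in Python – using the small, open GPT-2 model via Hugging Face’s transformers library, not the model behind ChatGPT – of how next-token probabilities crowd out outlandish continuations:

```python
# A toy illustration, not the author's experiment: inspect which tokens
# GPT-2 (a small open model) deems likely to follow the opening clause.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "If you spoke with a journalist"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

# The five most probable continuations -- all perfectly mundane.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([idx])!r}: {p:.3f}")

# An off-the-wall continuation sits far down the probability tail.
sandwich_id = tokenizer.encode(" sandwich")[0]
print(f"' sandwich': {probs[sandwich_id].item():.6f}")
```

Each step simply renormalises over what plausibly comes next; a sustained run of improbable tokens of the sandwich variety is, by construction, almost never produced.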

The net result, however, was op-eds that tended to be anodyne and nondescript in their arguments and advocacy, consigned to a fate of irrelevance and forgettability. It was almost akin to reading the cookie-cutter op-eds to which one is regularly exposed in certain countries, where writers quickly run out of ideas they are willing to articulate in public – in light of careerist considerations.

The second was a bizarre degree of repetition. The so-called “op-ed” had three paragraphs, two of which shared broadly the same point: “I can safely say, as someone who has dabbled in both journalism and AI, I have witnessed firsthand the growing fascination with AI’s potential to revolutionise the way we consume news and opinions.” It was unfortunate that the primary inference from my previous writings was that I am a “journalist” – the misfortune lies squarely with the misinterpretation of my occupation, as opposed to anything substantively untoward about journalism.

The same line cropped up again in the last paragraph. One would be forgiven for thinking that the AI was running out of – indeed, had run out of – new ideas. Now in an op-ed, especially one written last-minute, it would be deeply implausible and impractical for the same assertion to be repeated over and over again. Such rhetorical ploys often backfire, and do not add to the perceived credibility of the author. If anything, they count against them – which is why those who lean so heavily upon Generative AI would benefit from scrutinising more carefully the texts before their eyes.

The third and final reason concerns the question of stakes. The best columnists – whether they be Martin Wolf, Kishore Mahbubani, or Fiona Hill, if you will – tend to be thinkers with strong and well-formulated views on matters, given their personal involvement and interest in possessing a well-developed and well-buttressed outlook.

AI has no skin in the game. Generative AI – even machine-learning-backed, reinforcement-learning-tuned AI – has no skin in the game. As it stands, the privilege of having “skin in the game” is reserved for humans and other sentient creatures, who all have something to lose, and something to gain, with both loss-gain functions bound inevitably by the fragile nature of our existence. We could easily have not survived – and that is why we cherish life so much.

The same cannot be said of AI agents, no matter how humane and compassionate they are trained to be. Fundamentally, AI agents have little skin in ‘the game’ – at least, in our game. To ask someone who has never experienced a particular injustice to opine extensively on that injustice would be a mistake. An AI, which has never directly sensed the world, and has at best only indirectly ‘learnt’ about it through machine learning processes, cannot possibly care enough about its testimony and writings to revise, edit, editorialise, and curate the way it argues and conveys its messages, in order to ensure that something meaningful follows from the content.

If AI ends up churning out something impactful thanks to sound prompting and training – that’s great. Yet there is no guarantee that an AI agent will ‘hit the right notes’ in the right places – fancy seeing a “We will fight on the beaches”-level speech from an AI agent? I wouldn’t rate the odds, at least not for now.

Assistant Professor, HKU