Surviving the era of automation
I spend an unhealthy amount of time answering emails.
So I breathed a sigh of relief when I was recently made aware of an application that could scan through thousands of my emails and devise a fairly accurate model capturing my idiosyncrasies, speech patterns, and even the different registers and diction I adopt in accordance with context (contrast a reply to a high school student seeking academic advice with an invitation to an interviewee – the vocabularies would vary, even whilst the politeness remains the same, or so I’d hope). The application would draw upon a large language model (LLM) to compose emails on my behalf. It would run laps around its Google equivalent.
The friend who recommended it demonstrated the magical contraption with great enthusiasm. I knew his writing and speech fairly well, and was handed two emails to discern between: as it turned out, I was able to spot which of the two was written by him (and not his AI), but only by virtue of a slight typo and a clunky sentence structure that was by no means ungrammatical, yet certainly aesthetically jarring. The imperfection of his contribution gave away its authenticity. In some ironic and twisted sense, the valorisation of authenticity now goes hand in hand with the worshipping of flaws, defects, and imperfections. Perhaps the perfect really is the enemy of the good, at least in this nascent sense.
Another email came through – this time round, from a student seeking my advice on the academic job market, which was understandably suboptimal, everywhere, in 2025. One of the worries I have often heard my students raise in class is the possibility of their work being taken over by AI and automation. Those involved in producing works of cognitive labour appear particularly susceptible to replacement by AI – per roboticist Hans Moravec’s observation, now known as Moravec’s paradox, that reasoning requires comparatively little computation, whilst sensorimotor and perception skills expend enormous computational resources.
We have already seen talk of AI self-programming and self-coding, absent further external assistance from human agents. Through improvements in computing capacity, “unhobbling”, and the streamlining of algorithms to bolster their efficiency, we could well be standing on the cusp of a series of orders-of-magnitude leaps forward over the coming decade. A former OpenAI researcher has penned a highly eloquent and emphatic piece on this subject – though I find his geopolitical takes questionable, to say the least.
What can’t be automated, then? What are the jobs that can meaningfully escape from the ever-expanding clutches of AI-driven automation?
Having thought long and hard about the matter, I came to two rudimentary and perhaps premature conclusions. The first is that networks matter. Networking is a distinctively human process – one built upon trust, recognition, and reciprocal treatment. AI models may imitate and attempt to signal trust, but in the absence of real trust in others – and of the cognitive and sentient states innately bound up with being trusted by others – it would be implausible, just yet, for us to trust the AI with which we interact.
We wouldn’t trust it on factual matters, given the possibility of AI hallucination. We would also be wary of trusting it on moral and normative matters, for we expect any sufficiently decent moral recommendation to be well justified and grounded in the experiences of agents similar to us. This is why we wouldn’t trust a chimpanzee to tell us how to run our families (though there are of course certain individuals who draw inferences and insights from chimpanzees, perhaps too zealously). Trust is earned, individualised, and deeply transformative. It heightens and amplifies the testimonies of some, whilst discounting and inhibiting the testimonies of others. In an era where the false and the true are increasingly entangled, people are in need of trust – reasons to trust one another, benefits that follow from trust, and exemplars of sound, trusting social relations with which they can build a community.
Hence trust-intensive jobs are highly unlikely to be automatable – at least in the short to medium run. To put two and two together: as sociologist James Coleman puts it, trust is an integral component of social capital, the currency of transaction and much-demanded resource vested in human networks and communities. From personal wealth advisors and relationship managers to inspirational intellectuals and politicians, certain individuals accrue influence and impact not purely because of what they say, but predominantly because of who they are and how they say it. It is for this reason that corporations will pay upwards of US$100,000 for a single hour of a renowned retired statesman’s insights on a subject – what is said is far less important than who is saying it (and how they are delivering the message). Trust is easily lost, but extremely difficult to build up. The hope is that AI has not arrived at this stage just yet – even despite evidence suggesting that some advanced, cutting-edge large language models have already passed the Turing Test.
The second is that humans must think, act, and speak with randomness in order to beat AI. The way reinforcement – or even unsupervised – learning takes place means that most AI models operate according to fairly strictly stipulated mappings between inputs and outputs, undergirded by painstaking efforts to ensure algorithmic precision. AI outputs tend to be rather “vanilla”: polished, solidly comprehensive, yet rarely impressive. To borrow the parlance of Gen Z, AI image-editing and text-producing models are far too “normie” – submissive to the whims of the times and the prevalent zeitgeist.
Normie and vanilla art does not sell. It is not truly transformative, and is thus widely disparaged as lacking in personality. As such, either we train LLMs to be edgier (that is, less bent on blocking out “fringe” noise; more willing to platform and showcase anomalies, and to incorporate such outliers into their generative processes), or we empower the individuals who act as the sources of much-needed variance to take up the mantle of injecting uncertainty and randomness into the end results. The latter can be achieved by cultivating cohorts of leaders who confidently straddle and draw upon a multitude of disciplines and fields, coming up with left-field positions and propositions that are rarely thought of, let alone transcribed into the training data of AI models.
Only through the deliberate cultivation of randomness can we truly nurture future generations capable of withstanding the onslaught of convergence- and conformity-driven AI learning processes and products. After all, I do not see my writings for newspapers of repute as particularly AI-friendly or AI-automatable. Or are they? I remain cautiously optimistic.