The trinity of ethical tech governance

January 15, 2025 22:27

I have had the pleasure of working on the piloting of two Master’s programmes at the University of Hong Kong – the MA in Philosophy, Politics, and Economics, and the MA in AI, Ethics, and Society. Undergirding both programmes is a crucial doctrine that has become increasingly salient in view of recent technological developments: in an age when technological growth and evolution run rampant, ethics is more important than ever.

Indeed, as I argued recently in a speech delivered at a public conference, we must sincerely and seriously grapple with the trinity of ethical tech governance – the three “-ities” of authenticity, equity, and humanity. Only by adhering firmly to all three principles can we seek to advance the interests of mankind – as well as those of non-human organic species on this very planet.

Authenticity matters. A few days ago, I was sent a viral video purportedly featuring Elon Musk commenting on the disparities and divergences in approach to natural disasters between China and the US. “Musk” was effusive and unequivocal in his praise for China, which “he” described as highly efficacious and capable of delivering much-needed remedies – in contrast to the bureaucratically ossified and inept Californian government.

The catch, of course, was that the video was a deepfake. Musk never said any of those words. The rise of artificial intelligence (AI) and AI-based content creators has effectively enabled the proliferation of false media content that is barely distinguishable – if at all – from authentic content. The lines between the real and the fake, and thus the real and the hyperreal, are increasingly blurred. Netizens are quick to consume and take for granted seemingly “well-made” and “well-produced” videos online – yet few, if any, observers have devised a coherent and applicable framework guiding individuals in seeking truth from facts.

Reality, as Hannah Arendt aptly observed, is often unconvincing and unappealing. We are far more inclined to believe in what we would like to see, and wish were true. With the highly effective simulative technologies that media outlets are increasingly equipped with, our worst tendencies are thus rewarded – and our proclivity towards the dopamine rushes of hearing what we’d like to hear is thus satisfied and matched by the increasingly sycophantic media to which we are exposed. We must remember the importance of authenticity – of fact-checking, of taking back benign control, and of steering our learning capacities to investigate and uncover the actual truth, as opposed to piecemeal, fragmented quasi-truths. This is sadly easier said than done.

Equity is also vital. Our societies are increasingly predisposed towards leveraging algorithms and machine learning as the primary basis for resource and opportunity allocation. From insurance and pension calculations to assessing student competence and aptitudes, from gauging the appropriate responses to natural disasters and crises, to managing traffic, we are increasingly reliant upon algorithms to make the hardest-to-quantify, and hardest-to-account-for, decisions on our behalf.

Take the use of algorithms in sentencing, for instance. AI is increasingly adopted in courtrooms across the world to allow for the ostensibly more “accurate” approximation of appropriate sentence lengths for criminals. Yet much of the input into the learning models – and the resultant algorithms, and subsequent refinements to them – largely features historical data that is skewed by racialised prejudices, amongst other factors. In the US, a black man is disproportionately likely to be deemed a re-offending risk, as a result of higher arrest and prosecution rates, especially in the lower courts, and racial profiling-centric policing practices. In supporting the conclusion that a “harsher sentence” is perhaps “needed” to prevent recidivism on the part of an African American convict, algorithms could well exacerbate the vicious cycle of incarceration-recidivism-incarceration that many in the black community must experience and suffer from in the contemporary US.

We ought to recognise that algorithms trained on imperfect – indeed, even intentionally or systemically distorted – data cannot be entrusted with substantive decisions that leave a lasting impact on affected stakeholders. Yet under the guise of what I term “machine objectivity” – a faux and erroneous narrative that paints machine learning as the key to “objectivity” (whatever that term is intended to mean, I can only fathom via conjecture) – we have seemingly come to tolerate and accept the inequities arising from algorithms. This simply cannot and ought not to be.
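The mechanism can be made concrete with a toy simulation. All numbers below are invented purely for illustration – they are not real criminal-justice statistics, and the “model” is just a frequency count. The point is structural: if two groups have identical true reoffence rates but one is policed more intensively, a model trained on arrest records alone will learn a higher “risk score” for the more heavily policed group.

```python
import random

random.seed(0)

# Hypothetical, invented parameters for illustration only.
TRUE_REOFFENCE_RATE = 0.30          # identical for both groups
ARREST_PROB = {"A": 0.40, "B": 0.80}  # group B's reoffences are detected more often

def learned_risk(group: str, n: int = 100_000) -> float:
    """Return the arrest frequency a naive model would learn as 'risk'."""
    arrests = 0
    for _ in range(n):
        reoffends = random.random() < TRUE_REOFFENCE_RATE
        # Only *recorded* reoffences (arrests) enter the training data.
        if reoffends and random.random() < ARREST_PROB[group]:
            arrests += 1
    return arrests / n

risk_a = learned_risk("A")  # ≈ 0.30 × 0.40 = 0.12
risk_b = learned_risk("B")  # ≈ 0.30 × 0.80 = 0.24
print(f"Learned 'risk' for group A: {risk_a:.2f}")
print(f"Learned 'risk' for group B: {risk_b:.2f}")
```

Despite identical underlying behaviour, group B appears twice as “risky” to the model – and if that score then justifies heavier policing or harsher sentences, the skew in the next round of training data grows, closing the feedback loop described above.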

Finally, we must not overlook our humanity. Humans can, in theory, distinguish themselves from machines in many fundamental ways – from our possession of emotions and judgments to our ability to connect and empathise with others genuinely; from our being unique personalities to our ability to maintain and embody relations with others through our dispositions, judgments, worldviews, and beliefs.

Yet technological unemployment remains a salient and real risk. There exists a non-trivial possibility that humans will, in certain respects and attributes, be increasingly replaceable and substitutable by AI-based automation.

It is high time that we connected with our roots and went back to what makes us truly special and sui generis. We must harness our individuality, embrace our agency, and live in defiance of the pressures to erase our edges and distinctiveness. Only by living out and celebrating our own quirks and flaws (whilst seeking to address them where possible), can we strive towards true flourishing in the era of hyper-technological growth.

Assistant Professor, HKU