Why should we care about the philosophy of AI?

Artificial intelligence (AI) is an increasingly ubiquitous field of study and subject of discussion across a wide range of domains: from computer science to digital marketing and commerce, from governance to finance, AI has become a defining pillar of the zeitgeist. There is no escape from it - not merely as a concept, but at a substantive level: from the applications we use to the snazzier home appliances we may have installed, AI is likely to play an ever-greater role in our daily lives, shaping decisions on the most minute of issues through to existential questions that affect the whole of humanity.
As a philosopher, I spend much of my time contemplating the normative, ethical, and conceptual implications of AI. For one, what is AI? For another, to what extent should AI be accorded the rights and duties that are usually applied to humans? What makes AI distinctive - if at all - from human beings, save for the differences in substrate and mode of reasoning between the two categories?
These are all incredibly interesting questions. In the following, I shall offer a brief (and by no means exhaustive) survey of the key philosophical questions involving AI. The hope is that these questions can prompt more serious rethinking of, and reflection on, the way we approach the subject - and also shed light on how policymakers can wrestle with some of the ‘ickier’ questions involved.
The first, and foremost, question concerns the ethical implications of how we treat different kinds of AI. For AI that lacks the ability to recreate human functions and actions in full - that is, ‘weak AI’, or ‘Artificial Narrow Intelligence’ (ANI) - this question yields relatively straightforward answers: we would treat weak AI as we would smartphones, computers, or snazzy watches fitted with health-monitoring functions. Whilst these appliances can imitate parts of what makes us “smart”, they are neither conscious (i.e. able to generate independent action-guiding preferences without external input) nor sentient (i.e. able to develop genuine feelings and emotional responses to external stimuli).
Yet what of strong AI that ticks both boxes, and that thus passes the Turing Test (some would argue that consciousness is sufficient for strong AI, and that sentience is a separate matter)? Should we treat such AI as silicon-based moral beings - identical to human beings in their rights, privileges, and duties, and different only in their foundational substrate? Should we accord them legal status and the protections derived from such status? How should this bear on the way we use AI to substitute for human labour in ‘dirty, dangerous, and demeaning’ (DDD) jobs? To what extent can the prescribed treatment of AI be compatible - or rendered compatible - with our long-standing social norms?
The second, and equally important, concern revolves around AI-human alignment. We have previously explored - briefly - ways in which AI may behave so as to thwart or challenge human interests. Yet what remains missing is a systematic study of the conditions under which divergence, or convergence, between AI and human interests arises. For one, it is relatively easy to input the command ‘Uphold the interests of humanity’ into an AI system; it is much harder to get the system to behave in exactly the way we would expect it to in maximising human interests. Does the AI prioritise particular groups of individuals on the basis of their average incomes, races, or genders? Does the AI behave as a utility-maximising machine, at the expense of side constraints, such as the constraint against killing or harming innocent individuals? How can we ensure that the AI does not generate results that transgress these core, inviolable moral commitments, as the sketch below illustrates?
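To make the worry about side constraints concrete, here is a minimal sketch in Python. Everything in it - the `Action` type, the `utility` scores, the `harms_innocents` flag - is invented for illustration; it is a toy contrast between a pure utility-maximiser and one bound by hard constraints, not a description of how real AI systems are built.

```python
# Toy contrast (hypothetical names and numbers): a pure utility-maximiser
# versus one that treats certain moral commitments as hard side constraints.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float          # aggregate benefit, as the agent estimates it
    harms_innocents: bool   # would the action transgress a core commitment?

def pure_maximiser(actions: list[Action]) -> Action:
    """Pick whatever maximises utility, ignoring side constraints."""
    return max(actions, key=lambda a: a.utility)

def constrained_maximiser(actions: list[Action]) -> Action | None:
    """Maximise utility only over actions that respect the constraints."""
    permissible = [a for a in actions if not a.harms_innocents]
    # Refuse to act at all rather than transgress an inviolable constraint.
    return max(permissible, key=lambda a: a.utility) if permissible else None

options = [
    Action("allocate resources evenly", utility=10.0, harms_innocents=False),
    Action("boost output by harming a few", utility=12.0, harms_innocents=True),
]

print(pure_maximiser(options).name)         # -> "boost output by harming a few"
print(constrained_maximiser(options).name)  # -> "allocate resources evenly"
```

The hard part, of course, is that real decision problems do not arrive with a `harms_innocents` flag pre-labelled; deciding what counts as a transgression is precisely the philosophical work.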
These questions are especially salient in light of the difficulty of codifying such limitations and restrictions upon AI action. It may be relatively easy for us to stipulate that AI should not lie, cheat, scam, kill, or harm others. Yet what of cases where - absent our cognizance, as programmers, of the full range of actions the AI could undertake - it opts for some inconceivable, and inconceivably bad, ‘shortcut’ or misinterpretation of our commands, e.g. ‘cleaning the floor of a restaurant’ by actively preventing patrons from entering the restaurant in the first place? Studying the philosophy of AI may not completely and robustly fill these gaps, yet it can at least alert us to the dangers of not understanding how AI responds to commands and instructions - and shed light on sub-areas for further exploration.
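The restaurant example is an instance of what AI-safety researchers call specification gaming, or reward misspecification: the agent optimises the objective as written, not as intended. Here is a minimal sketch; the policy names, the numbers, and the `stated_reward` function are all invented for illustration.

```python
# Toy illustration of specification gaming (hypothetical numbers):
# a cleaning agent rewarded only for floor cleanliness finds that the
# reward is maximised by keeping patrons out entirely.

# Candidate policies: (patrons admitted, units of mopping effort).
policies = {
    "mop diligently, admit patrons":   (30, 10),
    "mop occasionally, admit patrons": (30, 4),
    "block the entrance, never mop":   (0, 0),
}

def stated_reward(patrons: int, mopping: int) -> float:
    """The objective we wrote down: cleanliness only. Foot traffic dirties
    the floor and mopping cleans it - but serving patrons, the restaurant's
    actual purpose, appears nowhere in this function."""
    return 2.0 * mopping - 1.0 * patrons

best = max(policies, key=lambda name: stated_reward(*policies[name]))
print(best)  # -> "block the entrance, never mop"
```

Note that patching the objective - say, by also rewarding patrons served - merely invites the next unforeseen shortcut; the deeper task is articulating what we actually want, which is where philosophy earns its keep.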
Finally, the advent of AI has incredibly real (and potentially pernicious) consequences for human society. AI-powered automation will likely displace large numbers of human workers, especially in lower-end white-collar sectors and professions, where it is relatively easy (per Moravec) to write algorithms that encapsulate the set of responses and actions required to accomplish the designated tasks in question. Are the workers displaced by the ‘AI revolution’ entitled to any compensation? How should we make sense of their plight? Would consumers and/or taxpayers have a duty to contribute towards a safety fund for those laid off by automating employers? By examining the intersection of the ethics of AI with areas such as labour economics, sociological theory, and political-scientific analysis, we can (hopefully) come a step or two closer to practical, applicable answers to the questions raised here.
In little under a week’s time, I’ll be co-teaching a core module in the Master of Arts in the field of AI, Ethics, and Society (MA-AIES), a pioneering course introduced at the University of Hong Kong - and a first-of-its-kind degree within Greater China.
The primary objective of the course is to equip talented, aspirational students - whether highly qualified practitioners in coding and quantum computing, or lawyers and philosophers interested in normative questions concerning artificial intelligence - with the skills and understanding required to navigate the increasingly complex landscape of AI governance and regulation. The course is conceived of and steered by a highly seasoned team of academics at the HKU Department of Philosophy, and I am most honoured to be able to learn from my more esteemed and qualified colleagues, as well as the students I shall be working with over the upcoming year.
Here’s to more philosophy, and - specifically - philosophy of AI!