An AI apocalypse on the way?

September 01, 2022 08:30

Artificial Intelligence refers to synthetic entities that possess ‘intelligence’, defined by some as the capacity to develop and adhere to complex commands, to engage in reasoning processes that are not strictly stipulated (even if closed-ended), and to do so autonomously, independent of constant input. As a certain founder of Alibaba put it, the short form of Artificial Intelligence, ‘AI’, is also the pinyin spelling of the Chinese word for love. A fun fact.

On a more serious note – are we at the cusp of a grand catastrophe with AI? And should we fear the rise of the robots, as Amia Srinivasan warned (obliquely, for the piece had little to do with it) in her London Review of Books essay a while back?

Whilst AI has its risks, my broad submission here is that such risks are more manageable than we think: indeed, a better way of framing this is as follows – AI’s risks most certainly cannot be neglected, but there are more tools at our disposal than we would like to acknowledge or credit (or in fact do). In short, the hype about an AI takeover, in RoboCop or Terminator-esque fashion, is overstated.

We must first differentiate between two types of AI. Artificial general intelligence (AGI) refers to AI that is capable of understanding or learning any task that a human being can. Within AGI, strong AI possesses genuine sentience and consciousness – acting of its own accord, as opposed to acting under the constant and continuous influence of some external or further authority. In short, AGI would allow AI to act in ways that cannot be differentiated from human beings. AGI stands in stark contrast to narrow or weak AI (sometimes abbreviated ANI, though the abbreviation is less frequently deployed).

Let us begin with the question of ANI. Now, many have raised the following question: what if we programme ANI in ways that render it unduly lethal – for instance, indiscriminate, or even actively harmful, in its selection of means for achieving a given objective? An input command such as “Solve hunger or poverty in this region” might give rise to the conclusion, “I must kill, eliminate, or displace all residents in this region to resolve hunger in full.”

The worries here – that ANI may interpret commands wrongly and implement disastrous policies (at huge cost) to achieve its programmed ends – whilst perhaps graspable and intuitive, are disproportionate. Firstly, AI scientists have laboured extensively to devise tests and counter-checks in the process of refining ANI, precisely to account for and eliminate the scenarios outlined above: scientists are no fools. Secondly, it is possible to stipulate, explicitly and comprehensively, the caveats and conditions that guide AI to act in particular ways. If we do not want ANI that inadvertently kills civilians, we can leverage other AI and existing conventional wisdom to craft more fine-grained and modular commands that it would then be bound to follow. Finally, whilst many have expressed reservations over the ongoing AI arms race between great powers such as China and the US, I would submit that it is precisely the high stakes, and the need for precision and accuracy, that would motivate leading governmental researchers and institutes to hold their AI and their research teams to exacting standards of governance and regulation.
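As a purely illustrative aside on the second point, consider a minimal sketch of how an explicitly stipulated caveat can be baked into an objective, so that the catastrophic ‘solution’ is never even on the menu. The policy names, numbers, and the acceptable() check below are hypothetical, invented for illustration only, and not drawn from any real system:

    from dataclasses import dataclass

    @dataclass
    class Policy:
        name: str
        hunger_reduced: float   # fraction of hunger eliminated, 0.0 to 1.0
        people_displaced: int   # residents harmed or displaced by the policy

    # Hypothetical candidate responses to "solve hunger in this region".
    CANDIDATES = [
        Policy("displace all residents", hunger_reduced=1.0, people_displaced=50_000),
        Policy("subsidise food imports", hunger_reduced=0.6, people_displaced=0),
        Policy("invest in local farming", hunger_reduced=0.8, people_displaced=0),
    ]

    def acceptable(policy: Policy) -> bool:
        """Explicitly stipulated caveat: no policy may harm or displace residents."""
        return policy.people_displaced == 0

    def best_policy(candidates: list[Policy]) -> Policy:
        # The caveat is applied before optimisation, so the 'displace everyone'
        # option never competes, however well it scores on the raw objective.
        feasible = [p for p in candidates if acceptable(p)]
        return max(feasible, key=lambda p: p.hunger_reduced)

    print(best_policy(CANDIDATES).name)   # prints: invest in local farming

The point is not that real systems are this simple; it is that caveats can be stated as explicit, checkable conditions rather than left to the machine’s own interpretation of an open-ended command.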

Perhaps the charge here is that ANI could fall into the hands of malignant players – consider, for instance, terrorists or secessionists who seize the AI and exploit it for their own gain. What gives, in these cases? Yet this argument is far too generic: the same charge could be levelled at a whole host of other technologies – guns, nuclear weapons, Javelin missiles, and beyond. The solution to possible breaches and abuses is not uncaveated paranoia, but the recognition that the risks of AI’s development must be, as they currently are, managed through law enforcement agencies and countries working in a forthcoming and multilateral manner (cf. nuclear non-proliferation). To equate the effects of ANI with its most disastrous deployments, and then to over-extrapolate the dangers of such abuse – or to posit that the rise of AI would inevitably come with the rise of bad actors wielding it – is akin to rejecting scientific progress on vaccines because some terrorists might weaponise them for biochemical warfare.

Of course, there is always a risk of ‘AI fallout’, in the same way that nuclear waste spillover is a real and pressing danger. Yet such dangers can be managed through common protocols governing how the transfer and sourcing of technologies and raw materials (e.g. semiconductors) play out at the international level, so that, like refined uranium, AI does not easily end up in the hands of malicious actors.

The elephant in the room, of course, is the question of AGI – artificial general intelligence. Now that, my friends, is the subject of a piece I shall pen shortly… So stay tuned.

-- Contact us at [email protected]

Assistant Professor, HKU