An AI apocalypse on the way?
![Photo: Reuters](https://static.hkej.com/eji/images/2022/09/01/3234349_3c8ee5a0139d94574f8f6907e714653b.jpg)
Artificial Intelligence – synthetic entities that possess ‘intelligence’, defined by some as the capacity to develop and adhere to complex commands, to engage in reasoning processes that are not strictly stipulated (even if closed-ended), and, most certainly, to do so autonomously, independent of constant input. As a certain founder of Alibaba put it, spell out the short form of Artificial Intelligence and you get the pinyin spelling of the Chinese word for love (ài). Fun fact.
On a more serious note – are we on the cusp of a grand catastrophe with AI? And should we fear the rise of the robots, as Amia Srinivasan warned (obliquely, for the piece had little to do with it) in her London Review of Books essay a while back?
Whilst AI has its risks, my broad submission here is that such risks are more manageable than we think. A better way of framing it is this: AI’s risks most certainly cannot be neglected, but there are more tools at our disposal than we are inclined to acknowledge or credit (or in fact use). In short, the hype about an AI takeover, in Robocop or Terminator-esque fashion, is overstated.
We must first differentiate between two types of AI. Artificial general intelligence (AGI) refers to AI that is capable of understanding or learning any and all tasks that a human being can. Within AGI, strong AI possesses genuine sentience and consciousness, as opposed to acting under the constant and contiguous influence of some external or further authority. In short, AGI allows AI to act in ways that cannot be differentiated from human beings. AGI stands in stark contrast to narrow or weak AI (ANI, though this abbreviation is less frequently deployed).
Let us begin with the question of ANI. Many have raised the following question: what if we programme ANI in ways that render it unduly lethal – that is, non-discriminatory, or even actively detrimental, in its selection of the means of achieving its programmed ends? The input command “solve hunger or poverty in this region”, for instance, could give rise to the conclusion, “I must kill or displace all residents in this region to resolve hunger in full.”
The worries here, that ANI will interpret commands wrongly and implement highly disastrous policies (at huge cost) to achieve its programmed ends, are perhaps graspable and intuitive, but they are disproportionate. Firstly, AI scientists have laboured extensively to devise tests and counter-checks in the process of refining ANI, precisely to account for and eliminate the scenarios outlined here: scientists are no fools. Secondly, it is possible to stipulate, explicitly and comprehensively, the caveats and conditions that guide AI to act in particular ways. If we do not want ANI that inadvertently kills civilians, we can leverage other AI and existing conventional wisdom to craft more fine-grained and modular commands that it would then be bound to follow. Finally, whilst many have expressed reservations over the ongoing AI arms race between great powers such as China and the US, I would submit that it is precisely the high stakes, and the need for precision and accuracy, that would motivate dominant governmental researchers and institutes to hold their AI and AI research teams to exacting standards of governance and regulation.
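The idea of binding a system to explicitly stipulated caveats can be made concrete with a toy sketch (every name and value here is hypothetical, purely for illustration; this is not how any production AI system is built): rather than letting an optimiser pick whichever action best serves its objective, we first filter out any action that violates a hard constraint.

```python
# Toy sketch: an optimiser that maximises an objective, but only over
# actions that pass every hard constraint. Names and values are
# hypothetical, chosen to mirror the "solve hunger" example above.

def plan(actions, objective, constraints):
    """Return the highest-scoring action that satisfies all constraints."""
    allowed = [a for a in actions if all(ok(a) for ok in constraints)]
    if not allowed:
        return None  # refuse to act rather than violate a constraint
    return max(allowed, key=objective)

# Candidate policies for "reduce hunger in the region"
actions = [
    {"name": "displace residents", "hunger_reduced": 100, "harms_civilians": True},
    {"name": "distribute food aid", "hunger_reduced": 60, "harms_civilians": False},
    {"name": "subsidise farming", "hunger_reduced": 80, "harms_civilians": False},
]

# Hard constraint: never harm civilians, regardless of objective score
constraints = [lambda a: not a["harms_civilians"]]

best = plan(actions, lambda a: a["hunger_reduced"], constraints)
print(best["name"])  # subsidise farming
```

The point of the sketch is the ordering: the constraint is checked before the objective is ever consulted, so the nominally "optimal" but catastrophic policy is never in the running.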
Perhaps the charge here is that ANI could fall into the hands of malignant players – consider, for instance, terrorists or secessionists who seize the AI and exploit it for their own gains. What gives, in these cases? Yet this argument is far too generic: the same charge could be levelled at a whole host of other technologies, from guns to nuclear weapons to Javelin missiles, and beyond. The solution to possible breaches and abuses is not uncaveated paranoia, but the recognition that the risks of AI’s development must be, as they currently are, managed through law enforcement agencies and countries working in a forthcoming and multilateral manner (cf. nuclear non-proliferation). To equate the effects of ANI with its disastrous deployment, and then to over-extrapolate the dangers of such abuse – or to posit that the rise of AI would inevitably come with the rise of bad actors wielding it – is akin to rejecting scientific progress on vaccines on the grounds that terrorists might weaponise it for biochemical warfare.
Of course, there is always a risk of ‘AI fallout’, in the same way that nuclear waste spillover is a real and pressing danger. Yet such dangers can be managed through common protocols governing the transfer and sourcing of technologies and raw materials (e.g. semiconductors) at the international level – such that, like refined uranium, AI does not easily end up in the hands of malicious actors.
The elephant in the room, of course, is the question of AGI – artificial general intelligence. Now that, my friends, is the subject of a piece I shall pen shortly… So stay tuned.
-- Contact us at [email protected]