Should we fear Artificial General Intelligence?

A quick recap. We were last discussing the implications of the rise of AI, and the potential downside risks and dangers it poses. We also dispelled some of the more alarmist and uncaveated worries concerning the possible damage ANI could bring. ANI, for those of you who have not been following, is Artificial Narrow Intelligence – i.e. intelligence that lacks the ability to perform all the intellectual tasks that human beings can. What is now needed, perhaps, is a serious interrogation of the question: what if Artificial General Intelligence (AGI) were to emerge – should we be worried?
The first step to answering this question is to figure out what on earth AGI in fact is. Philosopher John Searle proposed that strong AI (a subset of AGI) should be defined as AI that can ‘think’, possess ‘a mind’, and have ‘consciousness’. That is, strong AI does not merely simulate our thinking, but is in fact capable of replicating it – or exceeding it; weak AI, by contrast, can only simulate, behaving as if it possessed the above traits. What is then ascribed to strong AI, of course, is a series of conjectural, quasi-speculative behavioural characteristics – it can learn, teach itself, regenerate itself, and develop ways of outsmarting human beings. Indeed, considering the performance of AlphaGo (a while back) and GPT-3 (a more recent innovation), it is not hard to see why the prospect of strong AI could be daunting.
I thus want to divide the following exposition into two questions: first, is strong AI, were it to exist, capable of wreaking substantial havoc upon humanity (and would it?); second, how far are we from strong AI – is it even attainable in the first place?
I ask the first question because I’d like to engage in worst-case-scenario planning and thinking: suppose it is indeed the case that AI can think, self-learn, evolve, debate, interrogate claims, create, imagine, and beyond – in accordance with its own volition. Suppose further that the Control Condition (i.e. that we can control the strong AI and ensure alignment between it and ourselves) is not met – so we have no ability to dictate the behaviours of the strong AI. Should we be fazed by such prospects?
I would posit not. Many of the strong AI’s concerns – the continuity and persistence of the planet, the preservation of generally tenable living conditions, and the maintenance of world peace (such that more strong AI could indeed be created down the line, assuming these entities have a procreative drive) – would point it towards preferring a world in which humans remained alive, whether as collaborators, companions, or, indeed, as objects of imitation and conversational study. There may, of course, come a day when humanity ceases to be useful to strong AI – but that day is unlikely to arrive so long as we remain innovative and adept in our dealings with entities that are structurally and systemically capable of far more than we are. This does not strike me as an impossibility.
Of course, there is also the worry that strong AI would view the domination or subjugation of us as aligned with the wider interests of its community – or with planet Earth’s interests. Yet in the face of this, surely it is possible for humanity to adapt, to adjust, and to rearrange our habits in ways that conform with the clearly delineable and trackable algorithms that guide strong AI in its volitions and desires. Finally, it is not as if strong AI were innately impervious and immune to physical damage. If push came to shove, I do not envision strong AI being capable of outlasting nuclear destruction – that, in and of itself, strikes me as a reasonable failsafe.
Perhaps all of this sounds too outlandish to you, and you’re just looking for a commonsense check. So here we go: voices such as former Baidu VP and Chief Scientist Andrew Ng, along with former President of Google China Lee Kai-Fu (I had the pleasure of meeting him a few weeks ago in Hong Kong), remain fundamentally sceptical of the prospects for strong AI/AGI. The reason is simple: the computing capacity and complexity that AGI would require are quite simply far beyond the current state of the art in AI development.
Even I, as a relative optimist, would put the time horizon for AGI at anywhere from a few decades to within the century. Much work remains to be done before AI can genuinely cultivate consciousness and become capable of reprogramming or rearranging its composition and will, in ways that would grant it genuine independence from its human owners, managers, and coders. Indeed, the probability of our perishing in a nuclear war between the world’s two largest powers (I hope this would not occur in my lifetime) is likely much higher than that of our becoming enslaved by AGI. It is important that we do not give in to hysteria and blind panic.
-- Contact us at [email protected]