We have become familiar with face recognition features on our mobile handsets. Meanwhile, governments and corporations are increasingly applying this technology to CCTV networks and portable video-recording devices. In places like the border departures hall at Hong Kong International Airport, casinos in Macau, schools in Mainland China, concert venues, public housing estates, and even at road crossings, our faces are being scanned quietly. The technology is becoming ubiquitous.
While face recognition is fast becoming the latest weapon of authorities in various countries, the problems it causes have stirred much controversy.
To be sure, law enforcement departments have found automatic face recognition helpful: it makes it easier to identify suspects by matching faces against their databases, enhancing the efficiency of law enforcement.
But studies in the US and Europe have indicated that these technologies are unreliable. In other words, the algorithms could lead to discrimination.
Last year, the London Metropolitan police used face recognition several times in a trial to scan the faces of pedestrians, comparing those records with the archives on wanted criminals, in order to pick out suspects for body search operations.
The project emphasizes that people can refuse to be recorded. But what happened in practice? In June this year, a middle-aged man deliberately hid his face by pulling up his collar as he walked past a surveillance camera, an act the police deemed suspicious. In the end, he was made to cooperate in a second video-recording, and was then fined £90 for alleged ‘improper behavior’. In effect, his unwillingness to be scanned was treated as a criminal act.
In the US, face recognition has produced misleading outcomes stemming from racial and gender bias, and has come under attack from human rights groups.
In June this year, San Francisco became the first US city to legislate against the government’s use of face recognition as a tool for prosecution, and other cities are set to follow suit. Oakland, for example, plans to introduce similar legislation, while in New York, proposals have been put forward to prevent landlords from using the technology to monitor tenants.
In May this year, California passed a law to prevent anyone from using portable video-recording devices equipped with face recognition technology.
At a Legislative Council meeting in Hong Kong on June 5, I enquired about the current use of face recognition by government departments.
The government replied that various departments have installed some 39,000 CCTV cameras, and that the police use around 2,200 portable video-recording devices.
Authorities claimed that so far no government department has purchased, introduced or used automatic face recognition features in its CCTV systems.
But does this mean that Hong Kong residents will not be subject to such surveillance, given the introduction of a new generation of smart ID cards and the promotion of electronic ID (eID) for identity verification?
Following the July 1 clashes at the Legislative Council involving extradition bill protesters, some pro-establishment lawmakers have been calling for anti-mask laws. In the future, if smart lamp-posts and CCTVs are capable of tracking the identity of people by using face recognition features, who among the Hong Kong people will be bold enough to express their views in public?
I believe we cannot wait until we are subject to the sort of surveillance seen in China’s Xinjiang region. We must seriously review the risks associated with face recognition technology, and the government should work out ways to evaluate the potential impact of its application before it is too late.
This article appeared in the Hong Kong Economic Journal on July 8
Translation by Jennifer Wong