Navigating AI risks in Asia

October 17, 2023

The adoption of generative artificial intelligence (AI) across businesses' workforces and operations has grown exponentially and created transformational opportunities. However, AI poses unique legal and ethical risks and challenges for businesses, which now face increasingly complex approaches to AI regulation. Navigating the rapidly evolving AI regulatory landscape and establishing appropriate governance frameworks will be critical for businesses in Asia seeking to adopt AI successfully.

Evolving AI regulatory landscape

As AI adoption grows and its risks become more apparent, a worldwide trend towards greater regulation to address potential risks to individuals, businesses, and the general public has emerged. Multinational businesses are coming under greater scrutiny from direct regulation or guidance and compliance frameworks when implementing AI within their workforce, operations, products, and services.

In some Asian jurisdictions, such as Hong Kong SAR and Singapore, a more lenient, principles-based and sometimes voluntary approach to AI regulation is preferred over a direct and specific legislative framework.

For instance, Hong Kong SAR’s privacy commissioner published the Guidance on Ethical Development and Use of AI (August 2021), which sets out high-level ethical principles for the development and use of AI including accountability, human oversight, transparency, and fairness.
Singapore’s Model AI Governance Framework (updated in 2020) similarly emphasises that AI-driven decisions should be explainable, transparent and fair, and that AI systems should be human-centric. The Monetary Authority of Singapore’s voluntary guidelines (February 2022) on AI use in the financial sector are built around the FEAT Principles, short for “Fairness, Ethics, Accountability and Transparency”. More recently, Singapore’s privacy commissioner initiated a public consultation on AI and personal data risks, aiming to provide guidance that helps the public and businesses protect individuals’ data privacy when using AI.

In contrast, other Asian jurisdictions such as mainland China are adopting a more prescriptive legislative approach to AI regulation. The Cyberspace Administration of China (CAC) leads China’s efforts in developing a regulatory framework for AI. In mid-August 2023, the Interim Measures for the Management of Generative Artificial Intelligence Services came into force, requiring all public-facing interactive generative AI products and services to undergo the CAC’s security assessment. It is likely that China will enact further detailed and specific AI legislation by the end of 2023.

In Europe, the EU’s proposed Artificial Intelligence Act (EU AI Act) seeks to introduce a first-of-a-kind regulatory framework for AI with extra-territorial reach, applying to both EU and non-EU persons providing, deploying, importing or distributing AI systems in the EU.

The EU AI Act classifies AI systems into categories and regulates them according to the associated degree of risk, ranging from prohibited and “high risk” categories to other categories and AI models presenting lower risks. Prohibited uses include systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring (classifying people based on their social behaviour and socio-economic status). Breaching the EU AI Act could lead to fines of up to €40m or up to 7% of a company’s global turnover for the preceding year, whichever is higher.

AI’s role in the employment lifecycle

AI has the potential to materially impact the future of employment for businesses, bringing many positive opportunities such as greater efficiencies and reduced administrative burdens for employees.

While AI can be widely applied in areas such as candidate screening and scoring in HR, businesses must consider the legal risks. These include potential bias in AI algorithmic decision-making and AI tools trained on inherently biased or discriminatory datasets, which may produce discriminatory outcomes and potentially violate a jurisdiction’s anti-discrimination and employment laws.
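As a purely illustrative sketch of how such bias might be surfaced in practice, one common heuristic is the “four-fifths rule”, which flags a potential adverse impact when one group’s selection rate falls below 80% of the highest group’s rate. The function names, groups, and sample data below are hypothetical and not drawn from any regulator’s tooling.

```python
# Illustrative sketch only: the "four-fifths rule" heuristic for
# flagging potential disparate impact in screening outcomes.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flags_disparate_impact(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    times the highest group's selection rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

# Hypothetical screening results: group A selected 8/10, group B 4/10.
sample = [("A", True)] * 8 + [("A", False)] * 2 \
       + [("B", True)] * 4 + [("B", False)] * 6
print(flags_disparate_impact(sample))  # group B's rate (0.4) is half of A's (0.8)
```

A check like this is only a first-pass screen; a flag would typically trigger the human review and “circuit breaker” processes that regulators encourage, rather than an automated decision.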

For this reason, many regulators are focusing on guidance that helps businesses use AI in line with the principles of fairness, transparency and explainability, while ensuring human intervention or “circuit breakers” are in place when AI software produces unfair or discriminatory outcomes. AI governance and supply chain management procedures are therefore crucial for businesses implementing AI solutions: they help ensure that AI-related issues are properly mitigated, detected, monitored and resolved throughout the AI implementation lifecycle, including in employment and workforce management.

Mitigating AI risks

Navigating AI governance within an evolving regulatory landscape is challenging, and each company will have its own approach and considerations. Businesses must weigh legal, technical, and ethical considerations in designing and implementing their AI governance frameworks. While jurisdictions differ in their approaches to governing the use and deployment of AI, common themes have emerged: fairness, explainability, transparency, non-discrimination, accountability and governance, and ethics and human oversight.

To help mitigate AI risks, businesses can adopt practical measures such as creating ethical guidelines that steer personnel and suppliers in developing and using AI, undertaking risk assessments when designing and implementing AI use cases, and enhancing transparency by explaining the functions (and limitations) of AI models. Businesses can also develop policies and standards for data collection and usage, and ensure data is well-organised, anonymised and categorised for the relevant AI training models, while remaining compliant with consent and data privacy requirements. Furthermore, it is crucial to incorporate human oversight and intervention into the AI decision-making and governance process.

To harness the full potential of AI, businesses need to remain up-to-date on the dynamic regulatory environment, alongside implementing measures that help identify and mitigate inaccuracies, bias, and other ethical concerns.

-- Contact us at [email protected]


Counsel and Head of Technology, Media & Telecommunications (TMT) – Hong Kong SAR, Linklaters