Failures of artificial intelligence have been widely reported. The reported cases may, in fact, be just the tip of the iceberg, with both regulators and consumers struggling with AI’s uncontrollability.
When AI makes mistakes, the errors are hard to trace. While many giant companies are stepping up investments in AI to gain an advantage, their data and algorithms are not disclosed to the public. Have AI-powered applications been misused or abused? It’s very hard for the public to tell.
For example, are commercial organizations using face-recognition technology as a surveillance tool? Will the recommendation systems of online shopping platforms discriminate against users on the basis of gender or ethnicity when offering discounts? Will private firms feel comfortable ceding control of data privacy and security to their business partners?
The rapid development of AI will therefore make risk control an ever greater challenge.
Some scholars at the Massachusetts Institute of Technology are studying this issue. They are trying to build an ecosystem of decentralized AI applications using blockchain technology.
Blockchain’s distributed ledger technology and decentralized big data would enable the public to share their personal information with encryption and traceability.
Things like medical care data and financial data could then be shared more easily and safely.
I recently met with Chen Gang, party chief of the newly established Xiongan New Area in China’s Hebei province.
Under China’s plans, Xiongan is going to be a showcase smart city.
Chen described the integration of AI and blockchain as the engine and brake system of an automobile. Indeed, who wants to sit in a racing car without brakes!
This article appeared in the Hong Kong Economic Journal on Nov 9
Translation by Julie Zhu