International Law Expert on AI Regulation: Doing Nothing Is Not an Option
As technology and artificial intelligence (AI) advance rapidly, states must take steps to help regulate algorithms and artificial intelligence developed by companies, believes Professor Simon Chesterman, Vice Provost of the National University of Singapore.
On what will the regulation of artificial intelligence depend? What solutions could strike a balance between the safety of citizens and the development of innovation? These and other questions were answered by S. Chesterman, a well-known international law expert, during the conference “Technological Change and International Law” held in Vilnius on September 5–6.
What threats do you think artificial intelligence could pose to humanity? And who should take steps to prevent those threats?
Some people worry about so-called existential threats, for example, that AI could rise up and kill us all. I think that threat has moved from science fiction to a possible concern for people, but so far, we don’t see any evidence that the AI deployed today is going in that direction. I think that if we faced the prospect of creating an uncontrollable AI, we should hold back from doing so, but at the moment, we have not reached such a point. There are a lot of short-term concerns, however, that we should address right now, such as discrimination and bias. When making decisions, AI relies on data sets that contain human bias and may lack representation. So, I think organisations and governments should be very wary of using algorithms to give recommendations unless they are prepared to stand behind those recommendations. Because if those algorithms belong to private companies, we may not know what goes into them and what comes out of them. For those reasons, I think we need to be wary of relying on systems that we don’t understand.
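To make the point about bias concrete, here is a minimal illustrative sketch (not from the interview): a naive recommendation rule built on hypothetical, historically biased hiring records simply reproduces that bias for new candidates. All data, group labels, and function names here are invented for illustration.

```python
# Hypothetical historical hiring records: (group, years_experience, hired).
# In this invented data set, group "B" candidates were hired less often
# than group "A" candidates with the same experience.
records = [
    ("A", 2, 1), ("A", 3, 1), ("A", 1, 0), ("A", 4, 1),
    ("B", 2, 0), ("B", 3, 0), ("B", 1, 0), ("B", 4, 1),
]

def hire_rate(group, min_experience):
    # Historical hire rate for a group at or above an experience level.
    relevant = [h for g, e, h in records if g == group and e >= min_experience]
    return sum(relevant) / len(relevant) if relevant else 0.0

def recommend(group, experience):
    # A naive "model": recommend hiring when the historical rate exceeds 50%.
    return hire_rate(group, experience) > 0.5

# Two equally experienced candidates get different recommendations, purely
# because the training data encodes past discrimination.
print(recommend("A", 3))  # True
print(recommend("B", 3))  # False
```

The point of the sketch is that nothing in the code is overtly discriminatory; the bias enters entirely through the data the system learns from.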
This relates to a larger threat, which is that as we interact with AI more and more, it may change the way we think. We’re all aware of the changes social media has caused in younger and other generations, so AI may also affect how we think about the world, especially if we rely on it for very basic information. A good analogy would be smartphones: thanks to them, we no longer need to memorise a lot of telephone numbers, which is very convenient. But what if, instead of phone numbers, we begin relying on technology to form our opinions? Then we’ll have a huge problem, because we’d no longer be forming our own opinions; something else would be shaping them.
So what should we do? Firstly, we should learn to understand AI better. We should be more wary of deploying systems that we’re not familiar with and whose consequences we can’t foresee. And we should be clearer about responsibility when things go wrong with AI; for example, if a driverless vehicle crashes or an algorithm makes a racist decision, some individual or company should be held accountable.
In your conference paper, you mentioned that states, in trying to avoid threats, sometimes put too many legal restrictions on the development of technology. How do you think a balanced level of regulation can be achieved, so that people are protected from risks while technology can still progress?
Yes, but it’s difficult. It depends on what your risk threshold is. For example, the European Union (EU) has regulated much more than some other countries because it sees risks that it doesn’t want to accept. Other countries, such as Singapore, look at the European approach and say, “Well, that would restrain innovation; it would drive it elsewhere.” And the Europeans respond with something like, “In some areas, like real-time biometric surveillance, we don’t want innovation. We don’t want facial recognition to be all around.” So, to conclude, it depends on the kinds of risks each country sees and, to some extent, on its size. A big country, or a group of countries like the EU, is still an important market even if it regulates strictly. So, if that means that Meta and Google have to jump through hoops to operate in the European Union, they will still do it. If a small country did something similar, they might decide it’s not worth doing business there at all.
There are also choices to be made between underregulation and overregulation. For example, do you regulate technological development generally, as the EU does, covering all of AI under the same umbrella? Or do you approach it sector by sector, as Singapore and others do, identifying the specific problems you want to address rather than regulating the technology as a whole? I assume most countries will choose the latter.
Could you mention some positive examples of states with balanced AI regulation?
Again, it depends on what your objectives are. I think the EU is still an important example of a group of states prioritising rights over uncontrolled technological advancement, and it’s really interesting to see where this goes. Singapore has taken a very practical approach: its goal was to develop tools, such as AI Verify, that enhance the self-regulation of companies. Many companies claim to have specific guidelines and standards around AI, so Singapore said, “Alright, here’s a tool with which you can measure whether you’re living up to your standards.” Then there are regulatory sandboxes: some governments create a kind of virtual environment, for example in the finance sector, in which you can run programs for your products with minimal risk, because they can’t escape the sandbox (see the sketch below). And then there’s hard regulation: some countries apply hard regulations to specific sectors, like medicine, finance, or transportation. So yes, I think Singapore is doing interesting things.
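As a rough illustration of the sandbox idea, here is a minimal, hypothetical sketch: an experimental algorithm runs only against a simulated environment that enforces hard limits, so failures are contained and logged rather than reaching real customers. The class name, limits, and orders are all invented, not taken from any real sandbox programme.

```python
# A toy "regulatory sandbox" for the finance sector: experimental code can
# only act through this simulated environment, which enforces its own caps.
class FinanceSandbox:
    MAX_EXPOSURE = 10_000  # hard cap enforced by the sandbox, not the firm

    def __init__(self):
        self.exposure = 0
        self.log = []

    def execute(self, order_amount):
        # Orders settle against simulated funds; a breach is blocked and
        # logged for review rather than reaching real customers.
        if self.exposure + order_amount > self.MAX_EXPOSURE:
            self.log.append(("blocked", order_amount))
            return False
        self.exposure += order_amount
        self.log.append(("filled", order_amount))
        return True

# The untested algorithm is handed the sandbox, never a live market.
def experimental_strategy(env):
    for amount in (4_000, 5_000, 3_000):  # the last order breaches the cap
        env.execute(amount)

env = FinanceSandbox()
experimental_strategy(env)
print(env.log)  # [('filled', 4000), ('filled', 5000), ('blocked', 3000)]
```

The design point is that the limits live in the environment rather than in the firm’s own code, which is what allows failures to be observed safely.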
Another quite interesting case is China. For the most part, it looks at AI through a national security lens, and it has adopted some interesting social policies that may at least interest the parents of teenagers around the world, like limiting the amount of time children can play video games. I don’t think any country has got it perfectly right yet, but that’s natural at an early stage of technological evolution: you’re going to see all kinds of experimentation around the world. The one thing I think we’re all going to land on soon is the realisation that doing nothing isn’t an option.
Do you see any current challenges in the world that AI could help overcome?
Yes, AI is very good at optimising resources and workflows and at finding the most efficient ways of processing vast amounts of data. There are also things AI can do that humans can’t, like drug discovery: antibiotics only really took off around the 1940s because we discovered them in nature. Now, with AI, you don’t have to rely on the soil naturally producing antibiotics; you can do it virtually. You can experiment with every different combination of molecules, so in areas like that, AI has huge potential. Modelling the climate and the weather is another incredibly complex process in which AI can play a role. Processing vast amounts of data is an area where it could really make a difference.
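To give a flavour of what “experimenting with every combination virtually” can mean, here is a toy sketch of exhaustive in-silico screening: candidate compounds are enumerated from a small fragment library and ranked by a scoring function. The fragments and scores are invented; real pipelines use learned activity predictors and chemistry-aware representations.

```python
from itertools import combinations

# Invented fragment library: fragment name -> contribution to a toy
# activity score (a stand-in for a learned predictor).
fragments = {"F1": 0.2, "F2": 0.5, "F3": 0.1, "F4": 0.7, "F5": 0.3}

def predicted_activity(compound):
    # Toy scoring function: sum of fragment scores, with a small
    # penalty for larger compounds.
    return sum(fragments[f] for f in compound) - 0.1 * len(compound)

# Screen every 3-fragment combination: infeasible at the lab bench at
# scale, but trivial to enumerate virtually.
ranked = sorted(combinations(fragments, 3), key=predicted_activity, reverse=True)
for compound in ranked[:3]:
    print(compound, round(predicted_activity(compound), 2))
```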
Speaking of international law, do you think that perhaps in the future, there will be an international organisation, like the United Nations, that helps create AI laws and regulations for all countries?
I think that would be very challenging because all of the power right now is in the hands of companies. Companies make money, and they don’t necessarily want regulation. States also have power, but they’re wary of the risks of overregulating or underregulating. There’s virtually no power at the international level other than what states willingly give to international organisations. So yes, you could imagine a scenario in which states all agree to give power to an international organisation, but why would they do that? If you look at the tension between the US and China right now, why would the US want to be governed in the same way China is governed, and vice versa? So it’s possible but not likely. Such scenarios tend to succeed only in two circumstances: when something is extremely uncontroversial, like postal standards, or extremely dangerous, like nuclear bombs.
The conference “Technological Change and International Law” was organised jointly by the European Society of International Law (ESIL), Vilnius University (VU) and the VU Faculty of Law. ESIL conferences are held in a different European city each year, and this year the conference took place in Lithuania for the first time.