By Dr Migle Laukyte
When we talk about artificial intelligence (AI) and robotics, we usually try to strike a balance somewhere between over-optimism—hoping that AI will solve the problems we have yet to find solutions to, from climate change to traffic congestion in big cities—and fervent pessimism, fearing that AI will monitor us, take our jobs, and finally kill us, either because of a bug in the software or because AI will evolve into a superintelligence that simply sees no need to keep humanity alive (Bostrom 2014).
This ‘emerging issues’ contribution briefly reviews a few established points in the discussion on AI and the future of robotics, lays out possible future scenarios based on those points, and tries to imagine where current developments in AI and robotics could lead.
(1) Established point. AI has many forms and even more applications that range from simple algorithms recognizing your fingerprint on your smartphone to the Sophia humanoid robot, with its surprisingly human-like communication skills.
Future scenario. An even greater part of human life becomes AI-based, with such expected developments as the Internet of Things (IoT), the Industrial Internet of Things (IIoT), and smart environments. So if today we mainly rely on our smartphone, computer, or smart watch for intelligent tasks, tomorrow we will discover smart behaviour in our household appliances, umbrellas, houses, sidewalks, backpacks, and other wearables and nonwearable items we use every day.
(2) Established point. The quality of AI applications and services depends on data—all sorts and kinds of data, and the more the better, especially if it is personal data. Many issues therefore arise concerning the need to protect personal data within this flow, and to ensure that control over how such data is used ultimately rests with humans.
Future scenario. Personal data is indeed under threat, as our environment becomes smarter and increasingly aware of us, recording and listening to us around the clock. At the same time, however, as David Heiner of Microsoft suggests (OECD 2018), we should remain cognizant of the fact that AI can also be used to make our data safer—a paradigm shift from viewing AI as a threat to privacy to viewing it as a tool of privacy protection.
Furthermore, privacy may become a concern not only for humans but also for robots. Indeed, there is a question as to whether the currently accepted conception of human privacy may prove too narrow as we deal with the growing use of autonomous and intelligent robots and the previously mentioned IoT. The single “bundle of sticks” making up the human right to privacy could branch into two related bundles, considering (a) that robots can store data regarding more than a single person, and that this data will become increasingly entwined with the robot’s own data, and (b) that robots will have nonhuman personal data of their own.
(3) Established point. It is only a matter of time before self-driving cars and drones become the standard mode of transportation on our roadways and in our skies, improving traffic safety along with logistics and delivery services. There are already places where it is permitted to test self-driving cars, such as Pennsylvania in the US, and in a couple of years’ time Volvo is set to implement Nvidia driverless technologies in its vehicles (Porter 2018). This, however, raises the issue of liability when accidents do happen—a challenge that runs through all the scenarios below.
Future scenario. It is quite unlikely that the use of autonomous technology in transportation will be confined to self-driving cars and drones: we should expect to see it extended to a range of other means of transportation, such as aircraft. Thus Airbus, for example, is already envisioning autonomous skies with “self-piloting urban air mobility vehicles, cargo drones and more autonomous commercial aircraft”.
The same applies to trains and mass transit. Thus, the mining company Rio Tinto (Australia) has already tested an autonomous train carrying iron ore (Sankaran 2018), and it won’t be long before we have a passenger version of this technology. So, too, the German city of Potsdam has just launched Combino, the world’s first autonomous tramway, while Tesla, Otto, and other companies are already developing driverless trucks.
The shipping industry is also part of this transformation. Meanwhile, BMW has just unveiled its first autonomous driverless motorbike (BMW R 1200 GS), while on the roads of Andorra MIT has tested a self-driving shared-use bicycle called the PEV (Persuasive Electric Vehicle). The problem of liability, then, concerns not only self-driving cars but a full panoply of means of transportation in which primary control will no longer be entrusted to humans.
(4) Established point. Although the EU has advanced a solution for according a specific kind of legal personhood to autonomous and intelligent robots, proposing to view them as “electronic persons” (European Parliament Resolution 2017), this idea was rejected by many leading experts and researchers, many of whom signed a famous open letter to the European Commission (Open Letter 2018).
Future scenario. Like many legal concepts, that of legal personhood has never been stationary. This can be appreciated in the development of corporate legal personhood, in the nonhuman personhood recognized in the animal rights debate, and in the legal personhood ascribed to rivers and other natural resources, as in the case of the Whanganui River in New Zealand and the Ganges and Yamuna rivers in India (O’Donnell and Talbot-Jones 2018).
Clearly, there is no consensus yet on any legal personhood for advanced, autonomous, intelligent robots, but the general historical trend points towards ascribing legal personhood of some kind to an increasingly wide and diverse range of entities. More than that, we will soon find ourselves having to work out the same problem for artificial entities: whether, and how, they ought to be included in the cast of characters endowed with legal personhood understood in one way or another.
It is easy, for example, to envision a future in which we will be asking whether an ethics-compliant software architecture could make robots more acceptable as (electronic) persons. Work is already in progress: the Spanish company Acuilae has just launched a system called ETHYCA, the first of its kind, which can build ethical and moral values into AI-based systems (Bruni 2018). Given the pace of technological development, it should not be long before we see this ethical architecture applied to other tools and machines and then to autonomous and intelligent robots—at which point we will no longer be able to avoid the question of the legal personhood ascribable to these robots.
As we have seen, AI raises many questions, and the accompanying challenges abound: to the very concepts of law, such as legal personhood; to human rights in general and to privacy in particular, including data protection; and to liability regimes and insurance models. These are but a few examples. Governance and regulatory questions are becoming increasingly urgent, and the race to find the best answers has already begun.
GSDM would be pleased to advise and assist clients with technical guidance, feedback or fresh ideas on these and related issues.
Dr. Migle Laukyte is an Associate of GSDM and a CONEX-Marie Curie Research Fellow at the Universidad Carlos III de Madrid (Spain). She specialises in topics at the intersection of law and artificial intelligence.