Drones, Autonomy and Drone Swarms: Myths and Reality about Autonomy (Part 1)


Drones are no longer a novelty in the field of threat, risk and vulnerability management: they have appeared both as a cause of these challenges and as a solution to them. Examples abound: the drone disruption at London Gatwick Airport between 19 and 21 December 2018 closed the airport for over 33 hours, causing significant economic loss, reputational damage and disruption to airlines and passengers. In contrast, drones are increasingly becoming vital emergency response tools, delivering medication during the Covid-19 pandemic and supporting search, rescue and data collection in the aftermath of the recent devastating chemical explosion in Beirut, Lebanon.

A few issues demand particular attention when we talk about the use of drones, one of which is the increasing autonomy of these tools. This post gives a brief theoretical introduction to autonomy within the field of Artificial Intelligence (AI); a forthcoming post will address specific issues related to the autonomy of drone-based technologies and explore some of their most urgent and controversial aspects.

Autonomy is one of the distinguishing features of AI, alongside intelligence, sociability and the ability to interact with the environment. Autonomy means that a tool has a certain degree of freedom from a human or non-human operator. Such a definition is very vague, however: even a coffee pot programmed to prepare coffee at 7 am switches on and makes coffee on its own. Yet in this example we are not talking about an autonomous coffee pot: we are talking about an automatic one. There is a big difference between these terms: automation refers to the limited spectrum of possible actions a tool can undertake, whereas autonomy in its “purest” sense stands for (almost) complete freedom of action, similar to the freedom we have as people. An automated tool, the coffee pot, can only make coffee (cappuccino, espresso, latte, …); I, as a person, can make coffee, a mojito, a milkshake, scrambled eggs, a bed, a phone call or any combination thereof (and this is not a complete list of what I could do!).
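To make the distinction concrete, here is a minimal sketch of automation in this sense: a fixed trigger mapped onto a closed set of pre-programmed behaviours, with no goal-setting of its own. The names here are hypothetical, chosen only for illustration.

```python
from enum import Enum
from typing import Optional

class Action(Enum):
    """The coffee pot's entire behavioural repertoire: a closed set."""
    ESPRESSO = "espresso"
    CAPPUCCINO = "cappuccino"
    LATTE = "latte"

def automatic_coffee_pot(scheduled_hour: int, current_hour: int) -> Optional[Action]:
    """An *automatic* tool: a fixed trigger mapped to a fixed repertoire.
    It never sets its own goals; it only runs the routine it was given."""
    if current_hour == scheduled_hour:
        return Action.ESPRESSO  # the one behaviour it was programmed for
    return None

# An *autonomous* agent, by contrast, would choose its own goals from an
# open-ended space of actions; nothing of the sort happens here.
print(automatic_coffee_pot(scheduled_hour=7, current_hour=7))  # Action.ESPRESSO
```

However sophisticated the schedule or the menu of drinks becomes, the pot stays automatic: the designer, not the tool, decided everything it can ever do.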

Therefore, when we talk about autonomy within the field of AI, we should set aside the idea of (human-like) autonomy and focus instead on automation: the AI we have today is far from the original idea of a computational twin of the human brain. Today’s AI consists of sophisticated tools built for specific fields of human activity, often exceeding some human capacities (e.g. calculation) while lagging behind others (e.g. facial recognition), and therefore quite distant from human-like autonomy.

It would be more correct to talk about automation, although the terminological confusion persists and continues to flourish. If we look at autonomous vehicles, for example, we find six levels of automation, ranging from level 0, that is, our cars (or rather the cars of our parents!), to full automation, that is, completely self-driving cars that need no human intervention to take us from one place to another. Truth be told, fully automated vehicles (level 5) do not exist, yet we have already reached the high automation of level 4 (Waymo). Consequently, we could say that from a technological point of view a high level of automation/autonomy is feasible and, although we have not yet succeeded in building fully automated tools, that might be either a question of time or a question of opportunity.
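The six levels mentioned above come from the SAE J3016 scale for driving automation. The sketch below summarises them; the enum and helper names are my own shorthand, not a standard API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six levels of driving automation (SAE J3016 scale)."""
    NO_AUTOMATION = 0           # the human driver does everything
    DRIVER_ASSISTANCE = 1       # one assisted function, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed control, driver supervises
    CONDITIONAL_AUTOMATION = 3  # the system drives, but the driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain (e.g. Waymo)
    FULL_AUTOMATION = 5         # no driver needed anywhere, under any conditions

def human_driver_required(level: SAELevel) -> bool:
    """At levels 0-3 a human must be ready to drive; at 4-5 the system can
    manage without intervention (within its design domain at level 4)."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_driver_required(SAELevel.HIGH_AUTOMATION))  # False
```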

It is not merely a question of technological feasibility, however; it is also, and perhaps more importantly, a question of normativity, which is one of the reasons why full technological automation/autonomy may be held back. From this perspective we can talk about policy-based automation/autonomy, which has been under scrutiny by decision makers and legislators. For instance, the EU High-Level Expert Group on AI stressed that one of the requirements of trustworthy AI is human agency and oversight. This can be exercised through different methodologies known as human-in-the-loop (HITL), human-on-the-loop (HOTL) and human-in-command (HIC), besides the oversight that public administration could exercise in particular circumstances. This means that full automation, or level 5 automation in terms of self-driving vehicles, is not a politically or legally desirable goal, for a variety of reasons such as the attribution and distribution of liability, consumer protection, and so on. Policy makers are suggesting that a human should always be around: reachable, accountable and ready to take control.
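The three oversight modes differ in where the human sits in the control loop. Here is a minimal, hypothetical sketch of that difference as a control pattern; the function names are illustrative and are not taken from the EU guidelines or any real library.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str

def human_in_the_loop(action: ProposedAction,
                      approve: Callable[[ProposedAction], bool]) -> bool:
    """HITL: nothing is executed without an explicit human go-ahead."""
    return approve(action)

def human_on_the_loop(action: ProposedAction,
                      veto: Callable[[ProposedAction], bool]) -> bool:
    """HOTL: the system executes by default; a monitoring human may abort."""
    return not veto(action)

def human_in_command(mission_enabled: bool, system_decision: bool) -> bool:
    """HIC: a human decides whether and how the system is used at all,
    setting the envelope within which it may act."""
    return mission_enabled and system_decision

# Example: under HITL, a cautious supervisor can block every action.
action = ProposedAction("deliver package to waypoint")
print(human_in_the_loop(action, approve=lambda a: False))  # False: not executed
```

In all three patterns the human remains reachable and accountable; what varies is whether consent is required before each action (HITL), during execution (HOTL), or at the level of the mission itself (HIC).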

In the following post we will see where drone technology stands in this debate between technological and policy-based automation: how has the drone industry dealt with this dichotomy?

Migle Laukyte is an associate of GSDM specialising in ethical and legal questions related to Artificial Intelligence, robotics and other disruptive technologies. In 2020 she was appointed Tenure Track Professor of Cyberlaw and Cyber Rights at Pompeu Fabra University (Barcelona, Spain).
