Dear Colleague

In this month's newsletter:
  • Use of drones and AI in violation of international humanitarian law and human rights
  • The influence of AI and machine learning on humanitarian decision-making capacities
  • The Ethics of Artificial Intelligence
"On March 5, 2016, 21 family members sat down to dinner in West Mosul, Iraq. None of them knew that at that moment, their neighbourhood was in the cross hairs of the American military." According to a New York Times investigation in December 2021, "The U.S. airstrike killed them all."

A growing number of civilian deaths are being caused by Unmanned Combat Aerial Vehicles (UCAVs) – also known as drones – which go well beyond battlefield reconnaissance and can identify and destroy targets without any human intervention.

No longer can one presume that drones are always controlled by ‘pilots in cockpits’ thousands of miles away. Artificial Intelligence has made it possible for killer drones to operate without depending upon human control.
Anti-UCAV defence system

From a humanitarian perspective, the immediate implications and consequences of killer drones have been ably discussed by Sandra Krähenmann of Geneva Call and George Dvaladze of the Geneva Academy of International Humanitarian Law and Human Rights in Humanitarian concerns raised by the use of armed drones.
In his 2013 report on the protection of civilians, the United Nations Secretary-General noted, in relation to the proliferation of the use of armed drones by all parties to a conflict, that '[a]s the ability to conduct attacks increases, so too does the threat posed to civilians' (Report of the Secretary-General, S/2013/689).

A growing array of counter-measures against such drones is being put in place to deal with this new threat, which in turn leads to further humanitarian challenges. Though these developments are still nascent, the proliferation of the use of armed drones by all parties to a conflict and the adoption of counter-measures are likely to continue and to pose significant challenges for the safety of the civilian population. (Arthur Holland Michel, Counter-Drone Systems report)
However, the implications of Artificial Intelligence from a longer-term humanitarian perspective go well beyond the impact of drones. AI also needs to be taken into account as one begins – as we say at Humanitarian Futures – to plan from the future.

The work of Stuart Russell, professor of computer science at UC Berkeley and co-author of the standard textbook on AI, Artificial Intelligence: A Modern Approach, offers a very good starting point. In the British Broadcasting Corporation’s Reith Lectures this past December, Russell focused on a dilemma that will inevitably confront those with humanitarian roles and responsibilities: can one trust AI not to undermine the values and decision-making capacities of human beings?

Similar concerns were raised by the International Committee of the Red Cross, which noted a few months earlier, in March 2021 (Artificial intelligence and machine learning in armed conflict: A human-centred approach), that
over-reliance on the same algorithmically generated analyses, or predictions, might also facilitate worse decisions or violations of international humanitarian law and exacerbate risks for civilians, especially given the current limitations of the technology, such as unpredictability, lack of explainability and bias.
And yet there is a paradox. A September 2021 study (Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts) concluded that decision-makers all too often used algorithmic risk assessments in ways that kept them within their comfort zones. In other words, the possible benefits of algorithmic analyses were discarded:
When participants in that 2021 experiment were presented with the predictions of risk assessments, they became more attentive to reducing risk at the expense of other values. This systematic change in decision-making counteracted the potential benefits of improved predictions. Even if an algorithm made accurate predictions, it may not improve how government staff made decisions. Instead, tools like risk assessments could generate unexpected shifts in the normative balancing act that is central to decision-making in many areas of public policy. 
None of this is to ignore the potential benefits of AI alongside its downsides. Yet, to ensure that such transformative technologies stay within acceptable boundaries and do not create crises or hinder solutions, the global community needs to come to some agreement on the acceptable use of AI – ensuring, for example, that it will not lead to violations of international humanitarian law or exacerbate risks for civilians.

Such threats have not gone unnoticed. In the first global agreement on the Ethics of Artificial Intelligence, adopted on 25 November 2021, the 193 member states of UNESCO concluded that
we see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable Artificial Intelligence technologies… to name a few. Until now, there were no universal standards to provide an answer to these issues.
It is a start. Yet, reflecting on drones’ record of civilian deaths, AI might itself become a perpetrator of humanitarian crises – unintentionally or intentionally.

Much more is needed to ensure that human beings do not lose control over algorithmic systems that can destroy the fundamental principles of humanitarianism.

Sounds like science fiction? Well, listen to Stuart Russell on BBC Sounds.
 
Let’s hope for a positive and fulfilling new year for all those with humanitarian roles and responsibilities!

With best wishes from
 
The Humanitarian Futures team


 Visit the Newsletter Archives

Our mailing address is:
Humanitarian Futures
Warwick Square
London SW1V 2AB
United Kingdom
