
When something goes wrong with AI technology, where should accountability lie? Should it be with the developer, the owner of the technology, or the end user?

Most modern AI systems, built from layers of interconnected nodes that process and transform data hierarchically, make it extremely difficult to establish the exact reasoning behind any given decision or prediction. Unfortunately, this opacity makes it almost impossible to assign legal responsibility when the system makes an error or causes an accident.
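
To make this 'black box' point concrete, the short Python sketch below builds a tiny layered network using nothing but NumPy. It is purely illustrative and not drawn from any real system: the weights are random stand-ins for learned parameters, and every name and number in it is an assumption. Even in this miniature case, the only things available for inspection are matrices of numbers, not a readable chain of reasoning.

# Illustrative sketch only: a tiny two-layer network in NumPy, standing in for
# the layered architectures described above. Weights are random placeholders.
import numpy as np
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> output score
def predict(x):
    # Each layer transforms its input and passes the result on (hierarchical processing).
    hidden = np.maximum(0, x @ W1 + b1)          # ReLU activation
    return float((hidden @ W2 + b2)[0])
score = predict(np.array([0.2, -1.3, 0.7, 0.05]))
print(f"decision score: {score:.3f}")
# The only artefacts available for scrutiny are the weight matrices (W1, W2):
# grids of numbers, not a human-readable explanation of why the score came out as it did.
print(W1.round(2))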

Unease about AI in defence

The increasing use of autonomous and unmanned vehicles (such as drones) in both commercial and military settings has prompted heated debate about how legislation can keep pace with such rapid technological advances, and about the ethical implications of their use by the military.

The delegation of tasks or decisions to AI systems could lead to a ‘responsibility gap’ between systems that make decisions or recommendations and the human operators who are responsible for them. Crimes may go unpunished, and we may eventually find that the structure of the laws of war, along with their deterrent value, will be significantly weakened if lawmakers cannot reach an agreement on some form of universal legislation.

Bias and discrimination

Despite all the media hype, we are nowhere near a world in which AI thinks and makes decisions entirely of its own accord. The current reality is that AI systems are only as good as the data they are trained on. Whilst machine learning offers the ability to create incredibly powerful AI tools, it is not immune to bad data or human tampering: for example, incomplete or flawed training data, technological limitations, or misuse by bad actors. It is also all too easy to embed unconscious biases into decision-making, and without legislation addressing how these biases can be avoided or mitigated, there is a risk that AI systems will perpetuate discrimination or unequal treatment.
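
As a purely hypothetical illustration of how flawed training data translates into unequal outcomes, the sketch below generates a skewed set of 'historical decisions' and fits a standard classifier to it. The dataset, the feature names and the numbers are all assumptions for illustration; the model simply learns, and then reproduces, the disparity baked into its training labels.

# Hypothetical illustration: a model trained on skewed historical decisions
# reproduces the skew, even though "group" says nothing about merit.
import numpy as np
from sklearn.linear_model import LogisticRegression
rng = np.random.default_rng(42)
n = 2000
qualification = rng.normal(size=n)              # the legitimate signal
group = rng.integers(0, 2, size=n)              # an irrelevant attribute (0 or 1)
# Flawed historical labels: group 1 was systematically approved less often.
logit = 1.5 * qualification - 1.2 * group
historical_approval = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
model = LogisticRegression().fit(np.column_stack([qualification, group]), historical_approval)
# Two applicants with identical qualifications who differ only in group membership:
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])   # group 1 receives a lower score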

How can these issues be alleviated? Industry experts have been looking at the idea of an ‘ethics by design’ approach to developing AI. This means considering questions such as: Could legal responsibility be moved up the chain to the developer? Should there be rules of development as well as rules of engagement?

Where do we go from here?

In 2021, the European Commission proposed the first-ever legal framework specifically addressing the risks of artificial intelligence. The proposed regulation aims to establish harmonised rules for the development, deployment and use of AI systems in the European Union, and takes a risk-based approach that separates AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. Each category is subject to a different level of regulatory scrutiny and compliance requirements.
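
To show the shape of that risk-based approach, here is a deliberately simple sketch of the four tiers as a lookup of headline obligations. The example systems and the wording of the obligations are paraphrased assumptions for illustration, not a statement of the law.

# Illustrative sketch only: the proposal's four risk tiers as a simple lookup.
# Examples and obligation summaries are paraphrased assumptions, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "strict pre-market requirements: risk management, data governance, documentation, human oversight (e.g. AI used in recruitment)",
    "limited": "transparency obligations (e.g. chatbots must disclose that they are AI)",
    "minimal": "no new obligations beyond existing law (e.g. spam filters)",
}
def compliance_summary(tier):
    # Return the headline obligation attached to a risk tier.
    return RISK_TIERS.get(tier.lower(), "unknown tier: classify the system first")
print(compliance_summary("high"))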

This new framework led in turn to the 2022 proposal for an 'AI Liability Directive', which aims to address the specific difficulties of legal proof and accountability linked to AI. At this stage the directive is no more than a proposal; however, it offers a glimmer of hope to legal professionals and to victims of AI-induced harm by introducing two primary safeguards:

1. Presumption of Causality: If a victim can show that someone was at fault for failing to comply with a relevant obligation, and that a causal link with the AI system's performance is reasonably likely, the court can presume that the non-compliance caused the damage.

2. Access to Relevant Evidence: This allows victims of AI-related damage to ask the court to order disclosure of information about high-risk AI systems, helping to identify the person or persons who may be held liable and potentially providing insight into what went wrong.

You might take the view that this new conceptual legislation won’t be able to solve all the legal issues. However, it’s a step in the right direction.

There are also policy papers, for example the US Department of Defense's Responsible Artificial Intelligence Strategy and Implementation Pathway (2022) and the UK's 2022 Artificial Intelligence (AI) Strategy.

These documents provide important guidance to tech developers and their military end users on adhering to international law and upholding ethical principles in the development and use of AI technology across defence. They present opportunities for data scientists, engineers and manufacturers to consider 'ethics by design' approaches when creating new AI technology. The aim is to align development with the relevant legal and regulatory frameworks, ensuring that AI and autonomous systems are developed and deployed in defence in ways that are safe, effective, and consistent with both legal and ethical standards.


About the author

Yasmin Underwood

Yasmin Underwood is a defence consultant at Araby Consulting and a member of the National Association of Licensed Paralegals (NALP), a non-profit membership body and the only paralegal body recognised as an awarding organisation by Ofqual (the regulator of qualifications in England). Through its Centres around the country, NALP offers accredited and recognised professional paralegal qualifications for those seeking a career as a paralegal professional.

Web: http://www.nationalparalegals.co.uk
Twitter: @NALP_UK
Facebook: https://www.facebook.com/NationalAssocationsofLicensedParalegals/
LinkedIn: https://www.linkedin.com/company/national-association-of-licensed-paralegals/

