Autonomy is the answer, but what was the question?
Abstract
In recent years, aspirations regarding the implementation of autonomous systems have matured rapidly. Consequently, establishing the assurance and certification processes necessary to ensure their safe deployment across various industries is critical. In the United Kingdom Ministry of Defence, distinctive duty holder structures - formed following the publication of the Haddon-Cave report in 2009 - are central to risk management. The objective of this research is to evaluate the duty holder construct's suitability to cater for the unique merits of the artificial intelligence-based technology that is the beating heart of highly autonomous systems. A comprehensive literature review examined the duty holder structure and the underpinning processes that form two established concepts: i) confirming the safety of individual equipment and platforms (safe to operate); and ii) the safe operation of equipment by humans to complete the human-machine team (operate safely). Both traditional and emerging autonomous assurance methods from various domains were compared, including wider fields such as space, medical technology, automotive, software, and controls engineering. These methods were analysed, adapted, and amalgamated to formulate recommendations for a single military application. A knowledge gap was identified where autonomous systems were proposed but could not be adequately assured. Exploration of this knowledge gap revealed a notable intersection between the two operating concepts when autonomous systems were considered. This overlap informed the development of a third concept, safe to operate itself safely, envisioned as a novel means to certify the safe usage of autonomous systems within the UK's military operations. A hypothetical through-life assurance model is proposed to underpin the concept of safe to operate itself safely. At the time of writing, the proposed model is undergoing validation through a series of qualitative interviews with key stakeholders: duty holders, commanding officers, industry leaders, technology accelerator organisation leaders, requirements managers, system designers, Artificial Intelligence developers, and other specialist technical experts from within the Ministry of Defence, academia, and industry. Preliminary analysis queries whether a capability necessitates the use of autonomy at all, recognising that some autonomous systems will never be certified as safe to operate themselves safely, voiding ambitious development aspirations. This highlights that autonomy is simply one of many tools available to a developer, to be used sparingly alongside traditional technology, rather than a panacea to replace human resource as originally thought. This paper provides a comprehensive account of the convergence between safe to operate and operate safely, enabling the creation of the safe to operate itself safely concept for autonomous systems. Furthermore, it outlines the methodology employed to establish this concept and makes recommendations for its integration within the duty holder construct.