Abstract
In recent years, autonomous systems have matured rapidly. Consequently, establishing the assurance and certification processes necessary to ensure the safe deployment of autonomous systems across various industries is critical. In the military context, risk has been managed through distinctive duty holder structures since the publication of the Haddon-Cave report in 2009. An objective of this research is to evaluate the suitability of the duty holder construct to cater for the unique artificial intelligence-based technology at the beating heart of any autonomous system.
A comprehensive literature review examined the duty holder structure and the underpinning processes and artefacts that give rise to two extant concepts: i) confirming the safety of individual equipment and platforms (safe to operate); and ii) confirming the safe operation of equipment by the humans who make up human-machine teams (operate safely). The review also compared traditional and emerging autonomy assurance methods from various domains, including space, medical technology, automotive, software, and control engineering. These methods were analysed, adapted, and amalgamated to align with military requirements. A knowledge gap was identified regarding cases where autonomous systems were proposed but could not be adequately assured. Exploration of this gap revealed a notable intersection between the two concepts in the context of autonomous systems. This overlap led to the development of a third concept, 'safe to operate itself safely', envisioned as a novel means to certify the safe use of autonomous systems within the UK's military operations.
A hypothetical through-life assurance model underpinning the 'safe to operate itself safely' concept is proposed. The model is currently undergoing initial validation through a series of qualitative interviews with prospective requirements managers, designers, developers, and other technical experts from stakeholder organisations. The interviews also query whether a given capability genuinely necessitates autonomy, recognising that some autonomous systems can never be certified as safe and that autonomy is one of many tools available to a developer rather than a universal solution. Further testing within fictional scenarios is planned during focus group sessions over the next two years.
This paper and accompanying presentation provide a comprehensive account of the convergence between 'safe to operate' and 'operate safely', which enables the creation of the 'safe to operate itself safely' concept for autonomous systems. Furthermore, they outline the methodology employed to establish the concept and make recommendations for its integration within the duty holder construct.