Abstract

Intelligent autonomous systems (IAS) are set to become a feature of future defence programmes, and their introduction will pose challenges to traditional systems engineering and acquisition practice. Whilst the need for operational and technical assurance will endure, managing programmes that deliver iteratively and evolve continually requires fresh thinking. New increments may well be undergoing acceptance testing against a backdrop of continual change to higher-level concepts of operation; there may be no stable baseline. Furthermore, the unbounded and potentially non-deterministic nature of IAS means that testing alone is unlikely to provide satisfactory assurance, especially for systems that are able to learn from previous mission data. Lastly, at the very core of human-autonomy teaming is the operator's trust in the IAS, a trust that builds through development, integration, training and deployment, blurring the boundary between technical assurance and operational assurance. In response to these challenges, QinetiQ are investing in UK test and evaluation infrastructure and developing, with partners, approaches that will mitigate the risks posed by these new technologies. As outlined in this paper, these include distributed live, virtual and constructive facilities, software-brokered policy enforcement, and the development of a body of trusted software components. The paper shows how these developments address the identified challenges, highlighting remaining gaps and drawing on evidence from ongoing UK MOD research and development; it concludes by outlining the investment and programmatic choices that will need to be made to ensure best-for-enterprise outcomes.
