Abstract
Interest in more capable optionally-crewed naval platforms has increased markedly over the last few years. These complex platforms will need to perform continuously, for extended periods, in dynamically demanding scenarios. The capabilities of on-board digital technology and “edge processing” have also advanced considerably in recent years, yielding more data and making previously unavailable, analytically derived information exploitable. These drivers have led to the development of a number of autonomous technology concepts. Artificial Chief Engineer® (ACE), one example of such technology, is a machinery control system concept that manages the power and propulsion systems and auxiliaries, targeted at uncrewed and lean-crewed vessels. Development of ACE has previously focussed on the ability to operate at high levels of autonomy. This paper describes the development of the ACE technology in the context of variable autonomy and its teaming with human operators.
There are multiple options for allocating the autonomy level of a system operation. These options may depend on the specific decisions to be made, the authority attached to those decisions, and the roles of the technology and human(s) involved, among other factors. Several key questions are raised: (1) How can the best traits of the human and the machine be exploited to achieve the most harmonious, successful operation? (2) How can trust be built in operating models that combine humans and machines, where machines do not only operate in subservient roles? (3) Which areas of human-machine teaming should be prioritised, or treated with additional caution (e.g., due to higher risks), to ensure that advances in the area are beneficial rather than detrimental?
As part of DSTL’s Intelligent Ship Phase 2 programme, operating models with variable autonomy will be explored between ACE and its human operators. Starting with ACE holding full autonomy and the authority to make decisions, the interactions between ACE and its human operator(s) are investigated through discussions with, and feedback from, a representative group of users made available through the programme. Operation of ACE at lower levels of autonomy is then explored, with human operators more involved in the decision-making process. It is expected that these human-machine teaming evaluations will provide guidance on the vital factors to be taken into consideration for successful teaming between human operators and technologies like ACE. The paper concludes by identifying the high-risk and high-priority areas for future work and distilling insights from the direct feedback and lessons learnt on the programme. It is hoped that this work will also contribute to building trust in machines, leading to enhanced decision-making in future missions.