Abstract
Recent developments in the field of autonomous navigation at sea are moving the focus from theoretical work and lab experiments to industrial solutions for the market. This move from the lab to industry raises the question of the ability to deliver a consistent, dependable autonomous navigation capability to shipowners. One aspect is the hardware architecture, whose dependability is traditionally ensured by applying long-standing, mature standards (IEC 61508, classification society rules and guidelines). Another is the confidence that can be placed in the software, which raises three questions:
- how do the algorithms perform in terms of computational time, relevance of the proposed routes, repeatability, and accuracy? This question is addressed with quantitative performance evaluations (see the paper by Unige, SafeNav session).
- have the algorithms been implemented correctly in the embedded software that will be deployed onboard? The answer is essentially Boolean (the tests either pass or fail), and the related industry standard is software testing combined with Continuous Integration (CI); a minimal illustration is sketched after this list.
- how does the system perform as a whole, once the software has been deployed and integrated with the other software and hardware? This is addressed with Hardware-In-the-Loop (HIL) testing and the well-known industrial processes of Factory Acceptance Tests (FAT), Harbour Acceptance Tests (HAT) and Sea Acceptance Tests (SAT).
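To make the "Boolean" nature of the second question concrete, the sketch below shows the kind of unit test a CI pipeline would run automatically on every commit. The give-way check is a deliberately simplified stand-in used only for illustration, not the logic of the actual DSS.

```python
# Illustrative "Boolean-type" unit test as a CI pipeline would run it.
# The classifier below is a simplified stand-in, not the project's DSS.
import unittest


def is_give_way_crossing(relative_bearing_deg: float) -> bool:
    """Simplified rule of thumb: a crossing target between 5 and 112.5
    degrees on the starboard side puts own ship in the give-way role
    (COLREG Rule 15); head-on and overtaking geometries are ignored here."""
    return 5.0 < relative_bearing_deg < 112.5


class TestGiveWayCrossing(unittest.TestCase):
    def test_target_on_starboard_bow_gives_way(self):
        self.assertTrue(is_give_way_crossing(45.0))

    def test_target_on_port_bow_stands_on(self):
        self.assertFalse(is_give_way_crossing(-45.0))


if __name__ == "__main__":
    unittest.main()
```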
We have focused on the second question, the dependability of the software, applied to a Decision Support System (DSS) whose role is to raise alerts and issue COLREG-based recommendations whenever a hazardous situation is identified at sea. We have implemented a Software-In-the-Loop (SIL) environment that can be used with CI tools to validate automatically that the recommendations and alerts issued by the DSS are consistent with the COLREG.
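The following sketch illustrates, under simplifying assumptions, how such a SIL scenario test can be expressed: a toy two-ship simulator feeds a stub DSS, and the test asserts that a give-way alert is raised before the ships reach their closest point of approach. The kinematics and the `dss_update` stub are illustrative only and do not reflect the interfaces of the actual simulator or DSS.

```python
# Minimal, self-contained sketch of a SIL-style scenario test: a toy
# two-ship simulator feeds a stub DSS, and the test asserts that an
# alert is raised during the crossing. Illustrative assumptions only.
import math


def simulate(duration_s: int, dt_s: float = 1.0):
    """Own ship heads north at 10 kn; a target crosses from starboard."""
    own = {"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 10.0 * 0.5144}
    tgt = {"x": 2000.0, "y": 2000.0, "vx": -10.0 * 0.5144, "vy": 0.0}
    t = 0.0
    while t < duration_s:
        yield own.copy(), tgt.copy()
        for ship in (own, tgt):
            ship["x"] += ship["vx"] * dt_s
            ship["y"] += ship["vy"] * dt_s
        t += dt_s


def dss_update(own, tgt, alert_range_m=1852.0):
    """Stub DSS: raise an alert when the target is within one nautical mile."""
    rng = math.hypot(tgt["x"] - own["x"], tgt["y"] - own["y"])
    return ["give_way"] if rng < alert_range_m else []


def test_crossing_scenario_raises_give_way_alert():
    alerts = []
    for own, tgt in simulate(duration_s=900):
        alerts.extend(dss_update(own, tgt))
    assert "give_way" in alerts


if __name__ == "__main__":
    test_crossing_scenario_raises_give_way_alert()
    print("SIL scenario test passed")
```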
A major difference from simulations used to assess the overall performance of algorithms (e.g., with Monte Carlo methods) is the need for exact repeatability: in a software-test context, developers must be able to replay test scenarios exactly in order to investigate why tests pass or fail. In this paper we present the test environment and discuss the constraints in terms of configuration management induced by the use of simulation in a CI context, including for the multi-agent simulator and the parameterization of test scenarios.
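As an indication of what this configuration management can look like in practice, the sketch below fingerprints a declarative scenario description and draws all randomness from an explicitly seeded generator, so that a failing CI run can later be replayed bit-for-bit. The configuration fields shown are illustrative assumptions, not the scenario schema used in the paper.

```python
# Sketch of one way to obtain exact repeatability in a CI context: each
# scenario is a declarative configuration, the configuration is
# fingerprinted so the exact inputs of a failing run can be traced, and
# all randomness comes from an explicitly seeded generator.
import hashlib
import json
import random

# Example scenario parameterization (illustrative fields only).
scenario = {
    "name": "crossing_starboard",
    "own_ship": {"speed_kn": 10.0, "course_deg": 0.0},
    "targets": [{"speed_kn": 10.0, "course_deg": 270.0, "range_nm": 1.5}],
    "seed": 42,
}


def fingerprint(config: dict) -> str:
    """Digest of a canonical serialization, independent of key order."""
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


def make_rng(config: dict) -> random.Random:
    """Dedicated seeded generator so replays are bit-for-bit identical."""
    return random.Random(config["seed"])


if __name__ == "__main__":
    print("scenario digest:", fingerprint(scenario)[:16])
    rng = make_rng(scenario)
    print("first draw:", rng.random())  # same value on every replay
```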