000011155 001__ 11155
000011155 005__ 20241022163554.0
000011155 0247_ $$2doi$$a10.24868/11155
000011155 037__ $$aGENERAL
000011155 245__ $$aContinuous integration for the development of a COLREG-compliant decision support system
000011155 269__ $$a2024-11-05
000011155 336__ $$aConference Proceedings
000011155 520__ $$aRecent developments in autonomous navigation at sea are moving the focus from theoretical work and lab experiments to industrial solutions for the market. The move from the lab to industry raises the question of how to deliver a consistent, dependable autonomous navigation capability to shipowners. One aspect is the hardware architecture, whose dependability is traditionally ensured by applying long-standing, mature norms and guidelines (IEC 61508, class regulations and guidelines). Another is the confidence that can be placed in the software, which raises three questions: - how do the algorithms perform in terms of computational time, relevance of the proposed routes, repeatability and accuracy? This question is addressed with quantitative performance evaluations (see paper by Unige, SafeNav session). - have the algorithms been implemented correctly in the embedded software that will be deployed onboard? The answer is essentially Boolean, and the related industry standard is software testing and Continuous Integration (CI). - how does the system perform as a whole, once the software has been deployed and integrated with other software and hardware? This is addressed with Hardware-In-the-Loop (HIL) testing and the well-known industrial processes of Factory Acceptance Tests (FAT), Harbour Acceptance Tests (HAT) and Sea Acceptance Tests (SAT). We have focused on the second of these questions, software dependability, applied to a Decision Support System (DSS) whose role is to raise alerts and issue COLREG-based recommendations whenever a hazardous situation is identified at sea. We have implemented a Software-In-the-Loop (SIL) environment that can be used with CI tools to automatically validate that the recommendations and alerts issued by the DSS are consistent with the COLREG. A major difference from simulations used to assess the overall performance of algorithms (e.g., with Monte Carlo methods) is the need for exact repeatability: in a software-testing context, developers must be able to replay test scenarios exactly in order to investigate why tests pass or fail. In this paper we present the test environment and discuss the configuration-management constraints induced by using simulation in a CI context, including for the multi-agent simulator and the parameterization of test scenarios.
000011155 7001_ $$aAgeneau, Q$$uSirehna
000011155 7001_ $$aNulac, G$$uSirehna
000011155 773__ $$tConference Proceedings of iSCSS
000011155 773__ $$jiSCSS 2024
000011155 8564_ $$uhttps://library.imarest.org/record/11155/files/.pdf$$9e7a66e1b-ba0a-4968-82e5-56b4b048ee0e$$s2827343