Authors:
Christof Ebert, Michael Weyrich
Abstract
Testing of autonomous systems is mostly brute-force and ad hoc, and thus neither efficient nor transparent. Although requirements invite situational transparency, a framework for judging the quality of requirements and the test cases derived from them is missing. Practical challenges include state explosion, the difficulty of deriving corner cases, no systematic coverage of the safety of the intended functionality as specified, and the lack of accepted KPIs. Maintaining a valid safety case is hardly possible with such adaptive systems and continuous software updates. To achieve trusted autonomous vehicles, test cases must be generated automatically while at the same time providing coverage (e.g., indicating progress with KPIs), efficiency (e.g., limiting the amount of regression testing), and transparency (e.g., showing how specific corner cases are tested in case of accidents). This paper provides a method for automatically generating test cases for AI-based autonomous systems and compares it with existing testing methods. A case study shows how multiple testing methods are combined to facilitate AI-based testing.