
Trustworthy Behavior for Autonomous Vehicles

How digital twins can reduce field testing for autonomous vehicles

 

In light of the upcoming Artificial Intelligence Act (AI Act) of the European Commission, Deloitte has partnered with fortiss, the Bavarian state research institute for software-intensive systems, to assess the robustness of an AI application in a collision-avoidance use case for autonomous driving by identifying risk factors and implementing countermeasures. In the whitepaper, we outline the challenges and results.

The Deloitte framework for Trustworthy AI outlines the criteria that AI algorithms must meet to earn society's trust. Trustworthiness is especially relevant in safety-critical applications such as autonomous driving, and it remains one of the unsolved challenges for AI-based algorithms used in autonomous vehicles (AVs). Deloitte highlights which factors are essential for developing Trustworthy AI algorithms based on supervised or unsupervised learning paradigms.

In this paper, we investigate reinforcement learning, a well-known learning paradigm that has gained considerable traction in recent years. It allows systems to learn complex driving behavior offline in simulation, reducing the computational requirements of the decision-making component (the planner) on the vehicle.
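To make the offline-learning idea concrete, here is a minimal sketch of tabular Q-learning on a toy two-lane collision-avoidance task. The environment, states, actions, and reward values are illustrative assumptions, not the setup from the whitepaper; the point is the division of labor: the expensive trial-and-error search runs in simulation, while the on-vehicle planner only performs a cheap policy lookup.

```python
# Minimal sketch (not the whitepaper's setup): tabular Q-learning on a toy
# two-lane collision-avoidance task. The policy is learned offline in
# simulation; on the vehicle, it reduces to a table lookup.
import random

N_CELLS = 10                  # positions along a straight road segment
OBSTACLE = (6, 0)             # (cell, lane) blocked by a static obstacle
ACTIONS = [0, 1]              # 0 = keep lane, 1 = change lane (both advance)

def step(pos, lane, action):
    """Advance the toy simulation one tick; return (pos, lane, reward, done)."""
    nxt_pos = pos + 1
    nxt_lane = 1 - lane if action == 1 else lane
    reward = -0.1 - (0.5 if action == 1 else 0.0)      # time cost + comfort penalty
    if (nxt_pos, nxt_lane) == OBSTACLE:
        return nxt_pos, nxt_lane, -10.0, True          # collision
    if nxt_pos == N_CELLS - 1:
        return nxt_pos, nxt_lane, reward + 10.0, True  # segment completed safely
    return nxt_pos, nxt_lane, reward, False

Q = {((p, l), a): 0.0 for p in range(N_CELLS) for l in (0, 1) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.2

for _ in range(5000):                                  # offline training episodes
    state, done = (0, 0), False
    while not done:
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: Q[(state, a)]))
        p, l, r, done = step(*state, a)
        nxt = (p, l)
        target = r if done else r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        state = nxt

# On the vehicle, the planner only queries the learned policy:
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s, _ in Q}
```

In practice, deep reinforcement learning replaces the Q-table with a neural network, but the simulation-first workflow is the same.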


Collaboration between Deloitte and fortiss

 

Deloitte cooperated with fortiss to investigate the extent to which the Trustworthy AI framework is viable and complete in the context of reinforcement learning, and to identify potential risk factors that may arise at different stages of the model lifecycle. For this use case, our whitepaper defines and evaluates two types of reward specifications for reinforcement learning that model behavioral requirements within the operational design domain (ODD). In our test drives, we created challenging scenarios for the AV by placing obstacles at different locations and distances from the AV.
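This summary does not spell out the two reward specifications, so the following sketch contrasts two common archetypes as an assumption: a sparse, outcome-based reward and a dense, requirement-based reward whose weights encode the priority of the behavioral requirements. All observation fields (collided, goal_reached, gap_m, progress) and all weights are hypothetical.

```python
# Hedged sketch of two reward-specification styles for the same ODD
# behavior requirements (avoid collisions, keep a safe gap, make progress).
# Field names and weights are illustrative assumptions.

SAFE_GAP_M = 5.0      # assumed minimum gap to the nearest obstacle, in meters

def sparse_reward(obs) -> float:
    """Outcome-based: only terminal events are rewarded or penalized."""
    if obs["collided"]:
        return -100.0
    if obs["goal_reached"]:
        return +100.0
    return 0.0

def shaped_reward(obs) -> float:
    """Requirement-based: each behavioral requirement contributes a weighted
    term, so the requirements' relative priority is encoded in the weights."""
    r = 0.0
    r += -100.0 if obs["collided"] else 0.0              # safety (highest priority)
    r += -1.0 * max(0.0, SAFE_GAP_M - obs["gap_m"])      # keep a safe gap
    r += 0.1 * obs["progress"]                           # make progress
    return r
```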

We used uniform sampling over the scenario parameters, covering not only the distances and sizes of the surrounding obstacles but also the initial position and velocity of the AV, to achieve complete coverage of the scenario parameter space. For a more realistic application, we also performed variation sampling by manually defining the characteristics of the scenarios. Under these conditions, we evaluated the training performance in the real world.
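The two scenario-generation approaches can be sketched as follows: uniform sampling draws every parameter independently from its range, while variation sampling enumerates manually defined characteristic scenarios. All parameter names and ranges below are assumptions for illustration.

```python
# Hedged sketch of the two scenario-generation approaches described above.
# Parameter names and ranges are assumptions, not the whitepaper's values.
import random

def uniform_scenario(rng: random.Random) -> dict:
    """Uniform sampling over the full scenario parameter space (broad ODD
    coverage, including combinations a human would rarely write down)."""
    return {
        "obstacle_distance_m": rng.uniform(5.0, 80.0),
        "obstacle_size_m":     rng.uniform(0.5, 3.0),
        "av_start_pos_m":      rng.uniform(0.0, 20.0),
        "av_start_speed_mps":  rng.uniform(0.0, 20.0),
    }

# Variation sampling: manually defined characteristic scenarios, closer to
# situations one would expect in real traffic (values are illustrative).
VARIATION_SET = [
    {"obstacle_distance_m": 30.0, "obstacle_size_m": 1.8,
     "av_start_pos_m": 0.0, "av_start_speed_mps": 13.9},  # ~50 km/h approach
    {"obstacle_distance_m": 10.0, "obstacle_size_m": 0.8,
     "av_start_pos_m": 0.0, "av_start_speed_mps": 8.3},   # late, small obstacle
]

rng = random.Random(42)
uniform_set = [uniform_scenario(rng) for _ in range(1000)]
```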

A better understanding is still needed of how to translate the prioritization of behavioral requirements into the reward specification in order to determine how trustworthy the system really is. We also investigated two approaches to defining scenario sets, each with different benefits in terms of ODD coverage. A cross-evaluation of these scenario-generation approaches, sketched below, shows that the highest generalization is achieved when a mixed set of scenarios is used during model training. Our results emphasize the importance of incorporating architecture-specific effects on system behavior into the development process of the behavior planner. We believe this finding requires further analysis before it can be incorporated into the Deloitte Trustworthy AI framework.
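The cross-evaluation can be pictured as a matrix: one model per training set, each evaluated on every set, with the off-diagonal entries indicating generalization. The sketch below assumes hypothetical train() and evaluate() callables standing in for the actual training and simulation runs; the mixed set simply combines both generation approaches.

```python
# Hedged sketch of the cross-evaluation over scenario sets. train() and
# evaluate() are hypothetical placeholders for the actual training and
# simulation runs.

def cross_evaluate(scenario_sets: dict, train, evaluate) -> dict:
    """Return {(train_set, eval_set): score} for all set combinations."""
    results = {}
    for train_name, train_scenarios in scenario_sets.items():
        model = train(train_scenarios)
        for eval_name, eval_scenarios in scenario_sets.items():
            results[(train_name, eval_name)] = evaluate(model, eval_scenarios)
    return results

# A mixed training set combines both generation approaches, e.g.:
# scenario_sets = {
#     "uniform":   uniform_set,
#     "variation": VARIATION_SET,
#     "mixed":     uniform_set[:500] + VARIATION_SET,
# }
```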

For more information on trustworthy behavior for autonomous vehicles, download the whitepaper here. 

Contact

 

Dr. David Dang

Manager

+4989290366385

dadang@deloitte.de
