Trustworthy artificial intelligence requirements in the autonomous driving domain

EB2012-IBC-004 • Paper • EuroBrake 2012 • IBC

Driving automation requires solving multiple problems in a virtually infinite context of possible scenarios and interactions, under the highest safety standards. Only recent advances in Artificial Intelligence (AI) suggest that this is possible. However, the intrinsic characteristics of current AI systems (e.g., opacity, unpredictability, bias, complexity) pose risks not only to safety but also to fundamental rights. The High-Level Expert Group on AI (AI HLEG) appointed by the European Commission proposed a comprehensive framework with seven key requirements that AI systems must meet to be considered trustworthy. Given the importance of AI in driving automation, these requirements need to be translated into sector-specific requirements. Based on a review of state-of-the-art research, in this presentation we identify the maturity level of each requirement and outline the main challenges that must be addressed in the future so that AI systems embedded in autonomous vehicles can be developed in a trustworthy way.
