AI models have already entered autonomous and networked driving very successfully. However, they are usually black-box models that, although mostly making correct decisions, here with respect to driving behaviour, do not make the decision-making process sufficiently transparent. This is particularly problematic in autonomous driving, since the reasons for wrong decisions cannot be properly investigated and, in the case of an accident caused by an autonomous vehicle, cannot be reasonably explained to the public. Another weakness of AI models is that they require huge amounts of data whose collection is time- and resource-consuming, since simulated data do not yet generalize sufficiently well. Given the complexity of traffic situations, one cannot assume that even very large data sets cover all relevant situations. Through the behaviour of the agents in the driving scenarios, real data implicitly contain knowledge, for example in the form of traffic rules or behavioural norms, but these sources of knowledge are not yet explicitly integrated into the AI models.
The goal of AI Knowledge is to explicitly integrate knowledge from different sources (e.g., mathematical and physical knowledge, traffic rules, norms of behaviour) into existing AI models for autonomous and networked driving, in order to increase their functional performance, to enable checking whether AI decisions are reasonable, and to validate AI predictions. In addition, data efficiency is to be increased by using synthetic data more effectively, thereby reducing the required amount of real data. A further goal, beyond integrating existing formalized knowledge into the AI models, is to extract as yet unknown types of knowledge from trained AI models.
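One common way to integrate physical knowledge into an AI model is to add it as a constraint or penalty during training. The following sketch is purely illustrative (the function names, the acceleration bound, and the use of NumPy are assumptions, not the project's actual method): it penalizes predicted vehicle trajectories whose implied acceleration exceeds what a road vehicle can physically achieve.

```python
import numpy as np

# Assumed rough upper bound on feasible vehicle acceleration (m/s^2);
# a real system would use a vehicle-specific, formally derived value.
A_MAX = 9.81

def physics_penalty(positions, dt):
    """Auxiliary loss term penalizing physically implausible accelerations.

    positions: array of shape (T, 2), predicted (x, y) sampled at timestep dt.
    Returns the mean squared excess of acceleration magnitude above A_MAX.
    """
    vel = np.diff(positions, axis=0) / dt      # (T-1, 2) velocities
    acc = np.diff(vel, axis=0) / dt            # (T-2, 2) accelerations
    acc_mag = np.linalg.norm(acc, axis=1)      # acceleration magnitude per step
    excess = np.maximum(acc_mag - A_MAX, 0.0)  # only rule violations count
    return float(np.mean(excess ** 2))

# A smooth constant-velocity trajectory incurs no penalty ...
smooth = np.stack([np.linspace(0.0, 30.0, 31), np.zeros(31)], axis=1)
print(physics_penalty(smooth, dt=0.1))  # 0.0

# ... while an abrupt position jump is flagged as physically implausible.
jumpy = smooth.copy()
jumpy[15, 0] += 5.0
print(physics_penalty(jumpy, dt=0.1))  # large positive value
```

In training, such a term would be added to the data-driven loss, so the model is steered toward outputs consistent with the formalized physical knowledge even in situations not covered by the data.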
OFFIS will examine how formalized knowledge can be integrated into the training process of the AI and into the AI components, and will identify and formalize relevant mathematical and physical knowledge. OFFIS will also be involved in checking whether AI predictions are reasonable on a functional level, i.e., in using the sources of knowledge to explain why the AI made the respective decisions and to detect improper generalizations. Moreover, OFFIS will take part in identifying and improving goodness criteria for the evaluation and validation of driving scenarios, for example in terms of criticality, and will generate critical scenarios by simulation.
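Checking whether an AI decision is reasonable against formalized knowledge can be illustrated with a simple rule monitor. The sketch below is an assumption for illustration only (the parameter values and the simplified safe-distance rule are not OFFIS's actual criteria): it flags an AI "keep speed" decision as implausible when the current gap to a lead vehicle violates a formalized safe-following-distance norm.

```python
# Assumed parameters for the illustrative rule; real values would come
# from formalized traffic norms and vehicle dynamics knowledge.
T_REACT = 1.0   # s, assumed driver/system reaction time
A_BRAKE = 6.0   # m/s^2, assumed maximum comfortable braking deceleration

def safe_gap(v_ego, v_lead):
    """Minimum gap (m) so the ego vehicle can avoid a collision if the
    lead vehicle brakes at A_BRAKE, under this simplified rule."""
    stop_ego = v_ego * T_REACT + v_ego**2 / (2 * A_BRAKE)
    stop_lead = v_lead**2 / (2 * A_BRAKE)
    return max(stop_ego - stop_lead, 0.0)

def decision_is_plausible(v_ego, v_lead, gap):
    """Return True if an AI 'keep speed' decision respects the rule,
    False if the decision should be flagged for investigation."""
    return gap >= safe_gap(v_ego, v_lead)

print(decision_is_plausible(20.0, 20.0, 25.0))  # True: same speed, ample gap
print(decision_is_plausible(30.0, 15.0, 20.0))  # False: closing fast, short gap
```

Such a monitor makes a violated rule explicit, so a wrong AI decision can be traced back to the specific piece of knowledge it contradicts; the same criteria can also serve as criticality measures for selecting and generating critical simulation scenarios.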