Surveillance systems analyze and present vast amounts of heterogeneous sensor data. To support operators in monitoring such systems, identifying anomalous behavior or situations that may require further investigation can reduce operators' cognitive load. Bayesian networks can be used to detect anomalies in data, but for operators to understand the output of an anomaly detection application based on Bayesian networks, proper explanations must be provided.
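As a minimal illustration of the kind of Bayesian-network-based anomaly detection discussed here (not the system evaluated in the paper), the sketch below flags observations whose likelihood under a small hand-coded discrete network falls below a threshold. The network structure, variable names, probabilities, and threshold are illustrative assumptions only.

```python
# Minimal sketch: anomaly detection by flagging observations that have low
# likelihood under a hand-coded discrete Bayesian network.
# Structure (assumed): VesselType -> Speed, VesselType -> Route.

P_type = {"cargo": 0.6, "fishing": 0.3, "other": 0.1}   # prior P(VesselType)

P_speed_given_type = {            # P(Speed | VesselType)
    "cargo":   {"slow": 0.2, "medium": 0.7,  "fast": 0.1},
    "fishing": {"slow": 0.7, "medium": 0.25, "fast": 0.05},
    "other":   {"slow": 0.3, "medium": 0.4,  "fast": 0.3},
}

P_route_given_type = {            # P(Route | VesselType)
    "cargo":   {"lane": 0.9, "off_lane": 0.1},
    "fishing": {"lane": 0.4, "off_lane": 0.6},
    "other":   {"lane": 0.5, "off_lane": 0.5},
}

def likelihood(speed: str, route: str) -> float:
    """P(speed, route), obtained by marginalising out the vessel type."""
    return sum(
        P_type[t] * P_speed_given_type[t][speed] * P_route_given_type[t][route]
        for t in P_type
    )

def is_anomalous(speed: str, route: str, threshold: float = 0.05) -> bool:
    """Flag an observation whose likelihood falls below the threshold."""
    return likelihood(speed, route) < threshold

print(is_anomalous("medium", "lane"))    # typical traffic  -> False
print(is_anomalous("fast", "off_lane"))  # unusual combination -> True
```

Explanation methods such as those reviewed in the paper would then be needed to convey to an operator *why* a particular observation received a low likelihood.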
This paper presents the findings of a literature analysis of what constitutes an explanation and which properties an explanation may have, together with a review of different explanation methods for Bayesian networks. Moreover, we present empirical tests conducted with two of these methods in a maritime scenario. Findings from the survey and the experiments show that explanation methods for Bayesian networks can provide operators with more detailed information on which to base their decisions.