Making artificial intelligence explainable
A look inside neural networks
Artificial intelligence (AI) is already firmly embedded in our everyday lives and is conquering more and more territory. Voice assistants, for example, are an everyday item in many people’s smartphones, cars and homes. Progress in the field of AI is based primarily on the use of neural networks. Mimicking the functionality of the human brain, neural networks link mathematically defined units with one another. But in the past it was not known just how a neural network makes decisions. Researchers at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, HHI and Technische Universität Berlin have developed a technology that reveals the criteria AI systems use when making decisions. The innovative Spectral Relevance Analysis (SpRAy) method, based on Layer-wise Relevance Propagation technology, provides a first peek inside the “black box”.
Today it’s almost impossible to find an area in which artificial intelligence is irrelevant, whether in manufacturing, advertising or communications. Many companies use learning and networked AI systems, for example to generate precise demand forecasts and to exactly predict customer behavior. This approach can also be used to adjust regional logistics processes. Healthcare also uses specific AI activities, such as prognosis generation on the basis of structured data. This plays a role for example in image recognition: X-ray images are input into an AI system which then outputs a diagnosis. Proper detection of image content is also crucial to autonomous driving, where traffic signs, trees, pedestrians and cyclists have to be identified with complete accuracy. And this is the crux of the matter: AI systems have to provide absolutely reliable problem-solving strategies in sensitive application areas such as medical diagnostics and in security-critical areas. However, in the past it hasn’t been entirely clear how AI systems make decisions. Furthermore, the predictions depend on the quality of the input data. Researchers at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, HHI and Technische Universität Berlin have now developed a technology, Layer-wise Relevance Propagation (LRP), which renders the AI forecasts explainable and in doing so reveals unreliable problem-solving strategies. A further development of LRP technology, referred to as Spectral Relevance Analysis (SpRAy), identifies and quantifies a broad spectrum of learned decision-making behaviors and thus identifies undesirable decisions even in enormous datasets.
In practice the technology identifies the individual input elements which have been used to make a prediction. Thus for example when an image of a tissue sample is input into an AI system, the influence of each individual pixel is quantified in the classification results. In other words, as well as predicting how “malignant” or “benign” the imaged tissue is, the system also provides information on the basis for this classification. “Not only is the result supposed to be correct, the solution strategy is as well. In the past, AI systems have been treated as black boxes. The systems were trusted to do the right things. With our open-source software, which uses Layer-Wise Relevance Propagation, we’ve succeeded in rendering the solution-finding process of AI systems transparent,” says Dr. Wojciech Samek, head of the “Machine Learning” research group at Fraunhofer HHI. “We’re using LRP to visualize and interpret neural networks and other machine learning models. We use LRP to measure the influence of every input variable in the overall prediction and parse the decisions made by the classifiers,” adds Dr. Klaus-Robert Müller, Professor for Machine Learning at TU Berlin.
Unreliable solution strategies
Trusting the results of neural networks necessarily means understanding how they work. According to the research team’s tests, AI systems don’t always apply the best strategies to reach a solution. For example, one well-known AI system classifies images based on context. It allocated photographs to the category ‘Ship’ when a large amount of water was visible in the picture. It wasn’t solving the actual task of recognizing images of ships, even if in the majority of cases it picked out the right photos. “Many AI algorithms use unreliable strategies and arrive at highly impractical solutions,” says Samek, summarizing the results of the investigations.
Watching neural networks think
The LRP technology decodes the functionality of neural networks and finds out which characteristic features are used, for example to identify a horse as a horse and not as a donkey or a cow. It identifies the information flowing through the system at each node of the network. This makes it possible to investigate even very deep neural networks.
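The core idea described above can be sketched in a few lines of code: relevance starts at the network’s output score and is redistributed backwards through each layer in proportion to how much every lower-layer unit contributed, until each input element carries its share of the prediction. The following is a minimal illustrative sketch using the basic LRP rule on a tiny random two-layer network; the weights, sizes and data are placeholders, not the researchers’ actual implementation.

```python
import numpy as np

# Minimal sketch of Layer-wise Relevance Propagation (LRP) on a tiny
# fully connected ReLU network. All weights and inputs are illustrative
# placeholders, not the Fraunhofer HHI / TU Berlin software.

rng = np.random.default_rng(0)

# Two dense layers: 4 inputs -> 3 hidden units -> 2 output classes.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))
x = rng.uniform(size=4)

# Forward pass, keeping every layer's activations for the backward pass.
a1 = np.maximum(0.0, x @ W1)   # hidden activations (ReLU)
out = a1 @ W2                  # class scores

def lrp_step(a, W, R, eps=1e-9):
    """Redistribute relevance R from a layer's outputs to its inputs
    with the basic LRP rule: R_j = sum_k (a_j * W_jk / z_k) * R_k."""
    z = a @ W                                       # pre-activations above
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized division
    return a * (W @ s)                              # lower-layer relevance

# Start from the predicted class's score and propagate back to the input.
k = int(np.argmax(out))
R_out = np.zeros(2)
R_out[k] = out[k]

R_hidden = lrp_step(a1, W2, R_out)
R_input = lrp_step(x, W1, R_hidden)

# Conservation property: the per-input relevances sum (approximately)
# to the output score being explained.
print(np.allclose(R_input.sum(), out[k], atol=1e-6))
```

Each entry of `R_input` plays the role of one pixel’s quantified influence: a large positive value means that input pushed the network toward its decision, so inspecting these values shows *why* a class was chosen, not just *that* it was.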
The Fraunhofer HHI and TU Berlin research teams are currently formulating new algorithms for the investigation of further questions in order to make AI systems even more reliable and robust. The project partners have published their research results in the journal Nature Communications (see link below).