Fujitsu Laboratories and Hokkaido University in Japan have announced the development of a new technology based on the principle of ‘Explainable AI’.
This technology automatically presents users with the steps needed to achieve a desired outcome, based on AI results about data from, for example, medical check-ups.
‘Explainable AI’ represents an area of increasing interest in the field of Artificial Intelligence and Machine Learning.
While AI technologies can automatically make decisions from data, ‘Explainable AI’ also provides individual reasons for these decisions – this helps avoid the so-called ‘black box’ phenomenon, in which AI reaches conclusions through unclear and potentially problematic means.
While certain existing techniques can suggest hypothetical changes that might reverse an undesirable outcome for an individual case, they do not provide concrete, actionable steps for improvement.
For example, if an AI that makes judgements about the subject’s health status determines that a person is unhealthy, the new technology can be applied to first explain the reason for the outcome from health examination data like height, weight and blood pressure.
Then, the new technology can additionally offer the user targeted suggestions about the best way to become healthy. By identifying interactions among a large number of complicated medical check-up items in past data, it shows specific steps toward improvement that take into account the feasibility and difficulty of implementation.
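The "explain the outcome, then suggest feasible steps" idea described above can be sketched as a toy counterfactual search. The classifier, health metrics, thresholds, and effort scores below are illustrative assumptions for the sketch, not Fujitsu's actual method:

```python
# Hypothetical sketch of counterfactual-style suggestions.
# All thresholds and effort costs are illustrative assumptions.

THRESHOLDS = {"bmi": 25.0, "systolic_bp": 130.0, "blood_sugar": 110.0}

def classify(person):
    """Stand-in 'AI' judgement: healthy only if every metric is within its threshold."""
    return all(person[k] <= t for k, t in THRESHOLDS.items())

def suggest_steps(person, effort):
    """Greedily pick the lowest-effort changes that flip the outcome to 'healthy'.

    `effort` scores each metric by how hard it is to change (lower = easier),
    standing in for the feasibility/difficulty weighting described above.
    """
    steps = []
    modified = dict(person)
    # Address offending metrics in order of increasing effort (most feasible first).
    for metric in sorted(THRESHOLDS, key=lambda m: effort[m]):
        if modified[metric] > THRESHOLDS[metric]:
            target = THRESHOLDS[metric]
            steps.append((metric, modified[metric], target))
            modified[metric] = target
        if classify(modified):
            break  # stop as soon as the outcome flips
    return steps, modified

person = {"bmi": 27.5, "systolic_bp": 128.0, "blood_sugar": 118.0}
effort = {"bmi": 3.0, "systolic_bp": 2.0, "blood_sugar": 1.0}

steps, improved = suggest_steps(person, effort)
for metric, current, target in steps:
    print(f"Reduce {metric} from {current} to at most {target}")
```

In this sketch, the easy-to-change blood sugar is suggested first; only because that alone does not flip the outcome is the harder BMI change added, mirroring the feasibility-aware step selection the article describes.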
Ultimately, this new technology offers the potential to improve the transparency and reliability of decisions made by AI, allowing more people in the future to interact with technologies that utilize AI with a sense of trust and peace of mind.
Further details were presented at AAAI-21, the Thirty-Fifth AAAI Conference on Artificial Intelligence, which opened on Tuesday, February 2.