Researchers at Monash University have released a paper that explores how artificial intelligence (AI) makes decisions and proposes a framework for making AI decision-making processes more transparent.

AI has a considerable impact on people's everyday lives. From autonomous driving to telehealth, AI algorithms provide the core technology for autonomous decision-making platforms that influence human behaviour and decision-making.

With so much of daily life depending on AI, it's essential to understand what information powers AI-based decision systems and, in turn, shapes the technology we have come to rely on.

Researchers at Monash University's Faculty of Information Technology (IT) have published a research paper that explores explainability in AI and how the information behind AI decisions can be made more transparent to humans.

This project is a collaboration between Associate Professor Carsten Rudolph and Fariha Jaigirdar from the Department of Software Systems and Cybersecurity, Associate Professor Gillian Oliver from the Department of Human Centered Computing, Professor of Practice in Digital Health Chris Bain, and Professor David Watts from La Trobe University Law School.

Associate Professor Carsten Rudolph, from the Department of Software Systems and Cybersecurity in the Faculty of IT, explains why it is vital that AI-based decision-support systems can explain the knowledge behind their decisions.

“Explainable AI seeks to provide greater transparency into how algorithms make decisions. The reason this is so important is that the decisions which AI systems make could ultimately lead to an incorrect medical diagnosis or a pedestrian being struck by a driverless car. Even if the overall system might be working perfectly, it is essential to know the very root of the decision, especially when the decision or prediction is crucial,” Associate Professor Rudolph said.

The team of researchers propose a universal framework that can demonstrate how the information fed into an AI system is secure and authentic, thereby providing end-users with sufficient information on AI decision-making systems. 

“We are ultimately trying to achieve better explainability, both from the human and the security-aware aspects of AI systems. In our paper, we suggest adopting a human-centric perspective, which is all about looking at and understanding the data used for decision generation in AI-based systems,” said PhD researcher Fariha Jaigirdar.

The researchers advocate making the relationship between input data, training data and the resulting classifications, as well as the origins of these various inputs, more obvious and transparent to the human user.
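As a rough illustration of what such transparency could look like in practice, the minimal Python sketch below bundles a classification with the contributions of its input features and the declared origins of its input and training data. The model, field names and data sources here are hypothetical assumptions for illustration only; the paper describes its framework at a higher level and does not prescribe this structure.

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical illustration: a prediction bundled with the information a
# user would need to trace it back to its inputs and training data.
# All names and fields are assumptions, not part of the Monash framework.

@dataclass
class ExplainedPrediction:
    label: str                                # the classification the system produced
    feature_contributions: Dict[str, float]   # how much each input feature mattered
    training_data_source: str                 # where the training data came from
    input_data_source: str                    # where this particular input came from

def classify_with_explanation(features: Dict[str, float]) -> ExplainedPrediction:
    # Toy scoring rule standing in for a real model; the point is that the
    # output carries its own explanation and data origins, not the model itself.
    score = 0.6 * features.get("blood_pressure", 0) + 0.4 * features.get("heart_rate", 0)
    return ExplainedPrediction(
        label="high risk" if score > 100 else "low risk",
        feature_contributions={
            "blood_pressure": 0.6 * features.get("blood_pressure", 0),
            "heart_rate": 0.4 * features.get("heart_rate", 0),
        },
        training_data_source="hospital_records_2019.csv",
        input_data_source="patient_monitor_stream",
    )

if __name__ == "__main__":
    result = classify_with_explanation({"blood_pressure": 140, "heart_rate": 85})
    print(result.label, result.feature_contributions, result.training_data_source)
```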

For the user, explainable AI is just one element in the complete provenance track of the data. Systems need to present this context information in an accessible way that empowers users to make well-informed decisions in their interactions with those systems.
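To make the idea of a provenance track concrete, the sketch below (again hypothetical, not taken from the paper) shows one way such a track could be recorded and rendered for a user: each step keeps its data source, a hash of the data for integrity checking, and a timestamp.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch of a provenance track for one AI decision: each step
# records where its data came from and a hash of that data, so an end-user
# or auditor can check that the inputs were not altered along the way.
# The record names and fields are illustrative assumptions only.

@dataclass
class ProvenanceRecord:
    step: str        # e.g. "data collection", "training", "prediction"
    source: str      # origin of the data used at this step
    data_hash: str   # SHA-256 of the data, for integrity checking
    timestamp: str

def record_step(step: str, source: str, data: bytes) -> ProvenanceRecord:
    return ProvenanceRecord(
        step=step,
        source=source,
        data_hash=hashlib.sha256(data).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

def present_to_user(track: List[ProvenanceRecord]) -> str:
    # A plain-text rendering of the provenance track: the kind of accessible
    # summary the researchers argue decision systems should expose to users.
    return json.dumps([asdict(r) for r in track], indent=2)

if __name__ == "__main__":
    track = [
        record_step("data collection", "patient_monitor_stream", b"raw sensor readings"),
        record_step("training", "hospital_records_2019.csv", b"training dataset snapshot"),
        record_step("prediction", "risk_model_v2", b"model output: high risk"),
    ]
    print(present_to_user(track))
```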

Maintaining the transparency and explainability of AI-based systems is especially crucial for the security and legal aspects of the data required in critical scenarios. The research paper and its suggested framework are intended to identify essential points of consideration for the future design of AI decision-making systems.

