University of Waterloo researchers have developed a way to reduce bias in machine learning-based decision-making and knowledge organization.
Led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, the research team has built an innovative model that aims to enhance trust and reliability in explainable artificial intelligence (XAI).
Traditional machine learning models often yield biased results, favouring groups with large populations or being influenced by unknown factors. These biases take extensive effort to detect when instances contain entangled patterns and sub-patterns originating from different classes or primary sources.
Wong and his team's new XAI model, called Pattern Discovery and Disentanglement (PDD), untangles complex patterns in data and relates them to specific underlying causes, unaffected by anomalies and mislabeled instances.
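To make the idea concrete, here is a minimal sketch of one ingredient of this style of analysis: discovering statistically significant associations between feature values and classes using adjusted standardized residuals on a contingency table. The toy dataset, significance threshold, and code structure are assumptions for illustration only; the published PDD algorithm is more sophisticated and is not reproduced here.

```python
# Illustrative sketch only: a toy version of association-pattern discovery.
# This is NOT the published PDD algorithm; it shows the general idea of
# separating statistically significant feature-class associations
# ("patterns") from noise. All names and thresholds here are assumptions.

import math
from collections import Counter

# Toy categorical dataset: (symptom, diagnosed condition)
records = (
    [("fever", "flu")] * 4
    + [("cough", "cold")] * 4
    + [("rash", "measles")] * 3
    + [("fever", "measles")] * 1   # a noisy, possibly mislabeled record
)

n = len(records)
value_counts = Counter(v for v, _ in records)
class_counts = Counter(c for _, c in records)
pair_counts = Counter(records)

significant_patterns = {}  # class -> list of (value, residual)
for (value, cls), observed in pair_counts.items():
    expected = value_counts[value] * class_counts[cls] / n
    # Adjusted standardized residual of the contingency-table cell
    denom = expected * (1 - value_counts[value] / n) * (1 - class_counts[cls] / n)
    residual = (observed - expected) / math.sqrt(denom)
    if abs(residual) > 1.96:  # ~95% significance threshold (assumed)
        significant_patterns.setdefault(cls, []).append((value, round(residual, 2)))

# "Disentangled" view: each class keeps only the patterns genuinely
# associated with it, rather than patterns inherited from larger groups.
for cls, patterns in significant_patterns.items():
    print(cls, "->", patterns)
```

Running this prints one disentangled pattern per class (fever for flu, cough for cold, rash for measles), while the single noisy fever-measles record falls below the significance threshold instead of distorting the result.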
“With PDD, we aim to bridge the gap between AI technology and human understanding to help enable trustworthy decision-making and unlock deeper knowledge from complex data sources,” said Dr. Peiyuan Zhou, the lead researcher on Wong’s team.
The medical field is one area where biased machine learning results can have severe implications. Hospital staff and medical professionals rely on datasets containing thousands of medical records, and on complex computer algorithms, to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies can affect diagnostic outcomes. This inherent bias and pattern entanglement lead to misdiagnoses and inequitable healthcare outcomes for specific patient groups.
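A minimal illustration of that failure mode, using synthetic numbers assumed for this sketch rather than real patient data: a model can report high overall accuracy while misclassifying every member of a rare group.

```python
# Illustrative sketch only (assumed synthetic data, not real records):
# a high-accuracy majority-style model can still miss every member of a
# rare patient group, which is the kind of bias PDD aims to surface.

import random
random.seed(0)

# 95 "common" patients with symptom A, 5 "rare" patients with symptom B
patients = [("symptom_A", "condition_X")] * 95 + [("symptom_B", "condition_Y")] * 5
random.shuffle(patients)

# A naive model that always predicts the majority condition
majority = "condition_X"
predictions = [majority for _ in patients]

overall_acc = sum(p == truth for p, (_, truth) in zip(predictions, patients)) / len(patients)
rare = [(p, truth) for p, (_, truth) in zip(predictions, patients) if truth == "condition_Y"]
rare_acc = sum(p == truth for p, truth in rare) / len(rare)

print(f"Overall accuracy: {overall_acc:.0%}")    # 95% -- looks trustworthy
print(f"Accuracy on rare group: {rare_acc:.0%}") # 0% -- the group is invisible
```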
Go to "New model reduces bias and enhances trust in AI decision-making and knowledge organization" for the full story.