Wednesday, June 28, 2017
A doctoral candidate at Waterloo Engineering is ahead of the pack in his work to unravel how powerful artificial intelligence (AI) programs make their decisions in fields including medical diagnosis and autonomous vehicles.
Devinder Kumar and colleagues at the Vision and Image Processing (VIP) Lab are among only a small group of researchers worldwide working on the 'explainability problem,' considered a crucial nut to crack if deep-learning AI systems are to gain regulatory approval and win the trust of users.