I was intrigued by a couple of videos that I came across today. Both concern the merits and accomplishments of Artificial Intelligence (AI) systems, and together they provide an interesting contrast in the approaches we might take toward the increasing role of AI in our lives.
First, consider a recent TEDx talk by robotics researcher Peter Haas. Haas gives an example of how "black box" AI systems sometimes work in ways that, upon inspection, are clearly wrong or infelicitous: a system that misclassifies a dog as a wolf because it has learned, in effect, that any photo with snow in it means "wolf."
He asks the pertinent question: Should we adopt and trust systems when we have little idea of what they are really doing? We already trust such systems to make legal assessments and to drive cars, so the problem is pressing. Haas urges caution.
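The dog-as-wolf failure Haas describes can be sketched as a toy learner. Everything below, including the data and the two features, is entirely hypothetical; the point is only that when a spurious feature (snow) happens to predict the training labels perfectly, a naive learner will key on it rather than on the animal itself.

```python
# Toy sketch of a classifier latching onto a spurious feature.
# Each example: (has_snow, pointed_ears) -> label. In this (made-up)
# training set, snow co-occurs perfectly with "wolf".
TRAINING_DATA = [
    ((1, 1), "wolf"),
    ((1, 1), "wolf"),
    ((0, 1), "dog"),
    ((0, 0), "dog"),
]

def best_single_feature(data):
    """Return the index of the feature that best predicts the label."""
    def accuracy(i):
        return sum(
            ("wolf" if features[i] else "dog") == label
            for features, label in data
        )
    return max(range(2), key=accuracy)

FEATURE = best_single_feature(TRAINING_DATA)  # picks "has_snow" here

def classify(features):
    return "wolf" if features[FEATURE] else "dog"

# A dog photographed in snow (has_snow=1, pointed_ears=0) is
# misclassified, because the learner never looked past the snow.
print(classify((1, 0)))  # -> wolf
```

The training accuracy is perfect, which is exactly why the flaw stays hidden until an out-of-pattern photo arrives.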
The second item is an article, with an accompanying video, about how an AI system learned to play the 1980s arcade game Q*bert. Basically, the object of the game is to hop the character onto every cube on the stage without getting "eaten". As each stage is cleared, the player moves up to the next level, which is harder but offers more points.
Researchers from the University of Freiburg in Germany let their system learn to play the game more or less from scratch. It came up with an unexpected and novel strategy: upon clearing the first stage, it began jumping around seemingly at random, racking up millions of points. In fact, the system had found and exploited a bug in the original game.
In the video below, the novel strategy begins at around the 20-second mark.
Here, the AI made an interesting discovery that no human player would likely have conceived.
This pair of observations illustrates a persistent problem, namely the dilemma of progress. When considering adoption of a new technology, where the outcome is uncertain, we have roughly two options.
The first is to proceed at once, hoping to gain the benefits promised by the new technology as soon as possible. If things go wrong, we may hope to mitigate our losses.
The second strategy is to proceed cautiously, study the new technology, and adopt it only if and when we are satisfied that its benefits are worth its risks. However, if the technology is beneficial, then this strategy could mean substantial delays before those benefits are realized.
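The trade-off between the two strategies can be made concrete with a toy expected-value calculation. Every number below is hypothetical, chosen only for illustration: an assumed probability that the technology turns out beneficial, assumed annual payoffs, and an assumed delay for the cautious study period.

```python
# Toy expected-value comparison of the two strategies.
# All figures are hypothetical illustrations, not estimates.
P_BENEFICIAL = 0.7       # assumed chance the technology is net-positive
BENEFIT_PER_YEAR = 10.0  # annual payoff if it is beneficial
HARM_PER_YEAR = -15.0    # annual payoff if it is harmful
HORIZON = 10             # years under consideration
STUDY_YEARS = 3          # delay imposed by the cautious strategy

# Strategy 1: adopt at once and bear whichever outcome occurs.
adopt_now = HORIZON * (P_BENEFICIAL * BENEFIT_PER_YEAR
                       + (1 - P_BENEFICIAL) * HARM_PER_YEAR)

# Strategy 2: study first, then adopt only if it proves beneficial,
# forfeiting any payoff during the study period.
adopt_cautiously = (HORIZON - STUDY_YEARS) * P_BENEFICIAL * BENEFIT_PER_YEAR

print(adopt_now, adopt_cautiously)
```

With these particular numbers caution comes out ahead; raise the assumed probability of benefit, or shrink the study period, and immediate adoption wins instead. That sensitivity is precisely why neither strategy dominates in general.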
My point is not that one strategy is better than the other but that both are reasonable, in a general way, in the face of uncertainty. AI is an example of this dilemma: Its potential for benefit or harm is becoming more evident all the time. We now face the decision of which approach to pursue in adopting it.