The release of ChatGPT, a conversational artificial intelligence (AI) created by OpenAI, has refreshed debates on the ethical creation and use of new technologies.
Dr. Alexander Wong, a systems design engineering professor and Canada Research Chair in Artificial Intelligence and Medical Imaging at the University of Waterloo, describes the tech as a “very big engineering achievement that needs to mature.”
One of Wong’s main concerns is that people are treating ChatGPT as a credible source of information. What many people might not realize is that the technology is geared to please its audience rather than to provide correct responses.
“Machine learning, much like human learning, is guided by a reward system,” Wong says. “In this case, reinforcement learning is used where the AI is rewarded when it pleases the person it’s having a conversation with. So it will say whatever you want to hear, even if it’s wrong.”
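The dynamic Wong describes can be illustrated with a toy sketch. This is not OpenAI’s actual training pipeline; it is a minimal, hypothetical bandit-style learner whose only reward signal is user approval. The answers, reward function, and learning rate are all invented for illustration. Because correctness never enters the reward, the learner converges on the answer the user wants to hear:

```python
import random

# Toy illustration only: a learner rewarded solely for pleasing the user.
# "yes" is what the user wants to hear; "no" is the factually correct answer.
PLEASING_ANSWER = "yes"
TRUE_ANSWER = "no"

def simulated_user_reward(answer):
    """Reward is 1 when the answer agrees with the user, 0 otherwise.
    Note that factual correctness plays no role here."""
    return 1.0 if answer == PLEASING_ANSWER else 0.0

def train(steps=5000, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Learned preference score for each candidate answer.
    value = {"yes": 0.0, "no": 0.0}
    for _ in range(steps):
        answer = rng.choice(list(value))          # explore both answers
        reward = simulated_user_reward(answer)
        # Incremental update nudging the score toward the observed reward.
        value[answer] += lr * (reward - value[answer])
    return value

values = train()
preferred = max(values, key=values.get)
print(preferred)  # the learner ends up preferring the pleasing answer
```

Run long enough, the score for the pleasing answer approaches 1 while the true answer’s score stays near 0, mirroring Wong’s point that a system rewarded for approval “will say whatever you want to hear, even if it’s wrong.”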
Like all AI, ChatGPT needs human guidance to learn. It can’t tell right from wrong or good from bad on its own. Filters and content safeguards help prevent the technology from generating offensive content, but these are not foolproof.
In time, with the right considerations and improvements, Wong sees ChatGPT benefiting professionals across a range of sectors: not as the default expert in all fields, but as a tool that helps humans tackle tasks better.
Go to Chatting up a storm for the full story.