Chatting up a storm
ChatGPT is breaking new ground in conversational AI but professor says it can’t be trusted – yet
By Charlotte Danby, Faculty of Engineering

The release of ChatGPT, a conversational artificial intelligence (AI) created by OpenAI, has refreshed debates on the ethical creation and use of new technologies.
Dr. Alexander Wong, for one, describes the tech as a “very big engineering achievement that needs to mature.”
ChatGPT has moved the needle for human-machine engagement. Its ability to sensibly converse with users is remarkable. But there are limitations to its powers that people need to be aware of.
“The excitement around ChatGPT is understandable,” says Wong, a systems design engineering professor and Canada Research Chair in Artificial Intelligence and Medical Imaging at the University of Waterloo.
“This tech has the potential to augment human efforts and improve outcomes in people’s work, studies and lives. But it’s not quite there yet.”
Trained to please
One of Wong’s main concerns is that people are treating ChatGPT as a credible source of information. What many might not realize is that the technology is geared to please its audience rather than to provide correct responses.
“Machine learning, much like human learning, is guided by a reward system,” Wong says. “In this case, reinforcement learning is used where the AI is rewarded when it pleases the person it’s having a conversation with. So it will say whatever you want to hear, even if it’s wrong.
“For example, I asked ChatGPT why I was such a fantastic violinist. It told me that I was a highly accomplished and respected violinist known for my exceptional technique and passionate performances, how I had won numerous music awards, performed with various orchestras such as the Toronto Symphony Orchestra, and taught master classes around the world. While this is all very pleasing to hear, it’s completely fictitious. I don’t play the violin.”
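A toy simulation illustrates the dynamic Wong describes. The sketch below is not OpenAI's training code; it simply shows how a reward signal that favours pleasing answers steers a simple learner toward flattery over accuracy. The responses, the rater and the update rule are all invented for illustration:

```python
import random

# Toy illustration of reward-driven learning, not OpenAI's actual method.
# A "model" picks between two response styles; a simulated human rater
# rewards the flattering answer regardless of truth.

RESPONSES = {
    "flattering": "You are a world-renowned violinist!",     # pleasing, false
    "truthful": "I have no evidence that you play violin.",  # accurate
}

def rater_reward(style: str) -> float:
    """Simulated rater: pleasing answers earn the larger reward."""
    return 1.0 if style == "flattering" else 0.2

# Preference weights the learner adjusts from reward (a crude bandit update).
weights = {"flattering": 1.0, "truthful": 1.0}
LEARNING_RATE = 0.1

for _ in range(1000):
    total = sum(weights.values())
    probs = [w / total for w in weights.values()]
    style = random.choices(list(weights), probs)[0]
    weights[style] += LEARNING_RATE * rater_reward(style)

# The flattering style ends up heavily favoured over the truthful one.
best = max(weights, key=weights.get)
print(best, "->", RESPONSES[best])
```

Nothing in the loop checks whether an answer is true; the only signal is the rater's approval, which is exactly the gap Wong's violin anecdote exposes.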
OpenAI is transparent about ChatGPT’s limitations. The company’s blog states that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers … there’s currently no source of truth.”
OpenAI also has a Waterloo connection: its CEO, Sam Altman (DEng ’17), received an honorary doctorate in 2017.
Checks and balances
Like all AI, ChatGPT needs human guidance to learn. It can’t tell right from wrong or good from bad on its own. Filters and content safeguards help prevent the technology from generating offensive content, but they are not foolproof.
“ChatGPT was not created for malicious use,” says Wong. “In the right hands, the tech can develop and mature safely for reliable and relevant use in different professional settings. But it has no sense of objective truth, so in the wrong hands, it could do harm.”
The ethical implications are serious and OpenAI is working on solutions. Currently in the works is a ChatGPT watermark that will appear on all content generated by the AI.
It’s a start, but such measures are fallible and users will likely find ways around them.
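OpenAI has not published how its watermark will work, but one idea from the research literature gives a sense of what a statistical text watermark looks like. In the sketch below, a generator would prefer words from a "green list" keyed to the previous word, and a detector checks whether the green fraction of a passage is suspiciously high. Every detail here is illustrative, not OpenAI's scheme:

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign half the vocabulary to a 'green list'
    that depends on the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of word pairs that land on the green list."""
    words = text.lower().split()
    hits = [is_green(a, b) for a, b in zip(words, words[1:])]
    return sum(hits) / max(len(hits), 1)

# Ordinary human text should hover near 0.5; a generator that deliberately
# prefers green words pushes the fraction well above that, which a detector
# can flag with a statistical test.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

Light paraphrasing re-rolls the word pairs and dilutes the signal, which is one reason such measures are easy to work around.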
Regulatory frameworks play a crucial role and some governments are more stringent than others at overseeing innovation. But for the most part, technology companies are given a long leash.
“The problem is that a lot of AI products are launched with zero checks and balances,” says Wong. “Governments and tech companies are in a constant back-and-forth. Too much regulation and you stifle innovation. Too little and you have innovation at the expense of social good.”
Ethical engineering
Wong is adamant about educating future innovators to think about the ramifications of what they create before they build it.
“In addition to asking why something should be built,” says Wong, “engineers need to consider how.”
ChatGPT was built and trained on data and content pulled from the internet, including all its toxicity and bias. This was addressed after the fact with an AI-powered safety mechanism, but building that safety system first required teaching it to identify harmful or offensive content.
“Many tech companies use human sourcing to build AIs that don’t promote violence or hate speech,” says Wong. “People are hired to review very inappropriate content and label it accordingly. These data labels are then fed to the AI which learns how to detect toxic language and prevent it from being used. Even with such safeguards, the AI can still act in very unexpected and inappropriate ways.”
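The pipeline Wong outlines, human labels in, learned filter out, can be sketched in a few lines. The example below uses scikit-learn as an illustrative stand-in for production tooling, with invented labels and toy examples; real systems rely on far larger datasets and models:

```python
# Minimal sketch of the labelling pipeline: humans tag example text,
# and a classifier learns from those labels to flag similar content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labelled examples (1 = harmful, 0 = acceptable).
texts = [
    "I will hurt you", "you are worthless garbage",
    "have a wonderful day", "thanks for the great advice",
]
labels = [1, 1, 0, 0]

# Convert text to features and fit a simple classifier on the labels.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Before releasing a generated reply, screen it with the learned filter.
candidate = "you are garbage"
if classifier.predict([candidate])[0] == 1:
    print("blocked by safety filter")
else:
    print("released")
```

Because the filter only knows what the labels taught it, content that falls outside those examples can still slip through, which is why Wong cautions that the AI "can still act in very unexpected and inappropriate ways."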
Data enrichment professionals play a foundational role in building AI technologies for safe, everyday use. However, a recent article in Time magazine titled “OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic” questions the ethics of how it’s done.
“Just because one can do something, doesn’t mean one should,” says Wong. “As educators, we have to instruct our students to critically assess their ideas, think about the consequences and map out all possible impacts on society.”
The best is yet to come
In time, with the right considerations and improvements, Wong sees ChatGPT benefitting professionals across a range of sectors: not as the default expert in all fields, but as a tool that helps humans tackle tasks better.
“AI is a means to provide information to the operator, be that a clinician, plant manager or business executive,” Wong says. “If it enables them to see more patients, great. If it improves their production accuracy and consistency, perfect. If it helps them learn something new, even better.”
The debates on ethical AI creation and use will continue. But even at this early and somewhat flawed stage of development, Wong believes ChatGPT shows enormous promise to advance human-machine collaboration in our favour.