Take a look under the hood of an autonomous car
Building an artificial brain that will one day replace human drivers is an incredibly complex technical challenge, say the researchers leading Waterloo’s autonomous vehicle project
By Brian Caldwell, Faculty of Engineering

The basic idea is simple enough: in fully autonomous vehicles of the future, computer systems will replace human drivers.
But as front-line researchers working to develop reliable artificial brains to do that job are quick to point out, actually pulling it off, especially when safety is paramount, is an incredibly complex technical challenge.
“Making an autonomous train system is relatively straightforward because you know it is always going to be on tracks,” says Sebastian Fischmeister, a University of Waterloo engineering professor who specializes in safety-critical systems. “But a car can drive anywhere and that’s why it is so much tougher.”
For all their faults and weaknesses, the fact is people are pretty good at perceiving the world around them, making predictions and decisions based on sensory input, and taking appropriate action by hitting the brakes or turning the steering wheel.
They see, they hear, they make judgment calls and they do the right thing in the overwhelming majority of situations.
Statistics show, for instance, that roughly 100 million miles are driven on U.S. roads for every traffic death, and that the distance per fatality is even greater in Canada.
Krzysztof Czarnecki, co-leader of an autonomous vehicle project at Waterloo, says those numbers “set the bar very high” given that the brains in self-driving cars will, at minimum, have to be as competent as humans to get the green light from authorities and gain acceptance from users.
Reaching that level, experts agree, will require heavy reliance on a branch of artificial intelligence (AI) known as machine learning to build on-board computer brains nimble enough to handle almost limitless variables on the open road.
Step one in the development process, the focus of massive investment around the world, involves perception of the surrounding environment via cameras, laser devices and other sensors. They act as artificial eyes sending information to a self-driving car’s computer systems.
A painstaking AI technique known as supervised learning - often compared to teaching children to recognize words or objects with flashcards - is used to train the software to interpret and understand data received via that hardware.
In effect, an artificial car brain is shown hundreds of millions of images in which items have been labelled and depicted from different angles, in different weather conditions and so on. By processing those images, advanced algorithms eventually learn to recognize objects on their own.
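To make the flashcard analogy concrete, the sketch below shows what a supervised training loop for an image classifier can look like in Python with the PyTorch library. The folder name, class structure, network and training settings are illustrative assumptions for this article, not details of the Autonomoose software.

```python
# Hedged sketch: supervised training of an image classifier on labelled
# road-scene images. Paths, labels and hyperparameters are invented.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Labelled images arranged one folder per class, e.g. "stop_sign/", "pedestrian/".
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("labelled_road_images/", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # penalize predictions that disagree with the labels
        loss.backward()                        # compute how to adjust the network's weights
        optimizer.step()                       # nudge the weights toward better answers
```

With enough labelled examples seen from different angles and in different weather, the network's internal parameters settle into a state where it recognizes the same objects in images it has never seen before.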
The research team at Waterloo is now going through that laborious training process with Autonomoose, the nickname for a Lincoln sedan that was one of the first highly automated vehicles approved last fall for testing on public roads in Ontario.
Some studies have shown that recognition algorithms trained through machine learning can outperform people when it comes to identifying traffic signs.
Other research has demonstrated, however, that such systems can also be tricked into misidentifying items – by putting particular eyeglasses on faces, for example – for reasons that aren’t well understood.
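The best-known digital version of that kind of trick is the "fast gradient sign" perturbation, sketched below as a hedged illustration in Python/PyTorch. The physical eyeglass attack mentioned above is far more elaborate; this code only shows the core idea of a tiny, targeted change that flips a classifier's answer, and it is not drawn from the studies referred to here.

```python
# Hypothetical illustration of an adversarial perturbation (fast gradient sign).
import torch

def fgsm_perturb(model, image, label, loss_fn, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # A change too small for a person to notice can flip the predicted class.
    return (image + epsilon * image.grad.sign()).detach()
```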
“It has learned behaviour, but the problem is you don’t really know what it has learned or exactly how it has learned it,” says Fischmeister, who is skeptical fully autonomous cars will be on the road in the next few years.
State-of-the-art automation features in consumer vehicles currently include the ability to follow other vehicles at a safe distance, remain centred in lanes and park themselves.
Research vehicles such as the Autonomoose now require a trunkful of computer equipment, but within a few years the artificial brain in production cars is expected to be scaled down enough to fit in a large briefcase.
Even more challenging than perception and recognition, and potentially more problematic, is step two in the development process: using a method known as reinforcement learning to train computer systems in autonomous vehicles to assess what is happening around them and make safe, sound decisions based on that information.
At the heart of that technique are mountains of data collected by driving millions and millions of miles - the more the better - on both real and simulated roads.
That approach is essential because it is considered impossible using traditional, deterministic programming to tell the system in control of a self-driving car what to do in every potential circumstance.
“The world is open-ended,” says Czarnecki, an electrical and computer engineering professor. “Things can happen that nobody has thought about and that have never happened before.”
The alternative to vainly trying to give a car a complete set of rules to follow is allowing it to drive, either in the real world or a virtual environment, and learn from its mistakes, and its successes, through a system of points and penalties.
Good driving is rewarded, bad driving is punished, and the car brain is programmed to seek as many points as possible. With practice, it improves.
General rules such as observing the speed limit and avoiding other vehicles are specified, but much of a vehicle’s behaviour would be determined by that kind of reinforcement - basically, learning through trial, error and experience.
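The points-and-penalties idea can be shown with a toy example. The sketch below uses tabular Q-learning, one of the simplest reinforcement learning algorithms, on an invented two-state "road"; the states, actions and reward values are assumptions made purely for illustration and bear no relation to how the Autonomoose is actually trained.

```python
# Hedged sketch of learning to drive by reward and penalty (tabular Q-learning).
import random
from collections import defaultdict

ACTIONS = ["keep_lane", "brake", "change_lane"]

def reward(state, action):
    # Good driving earns points, bad driving loses them (toy values).
    if state == "obstacle_ahead" and action == "brake":
        return 10
    if state == "obstacle_ahead":
        return -100   # collision
    return 1          # uneventful driving

q = defaultdict(float)                  # q[(state, action)] -> expected points
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(state):
    # Explore occasionally, otherwise pick the highest-scoring action so far.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(["clear_road", "obstacle_ahead"])
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
    return next_state

state = "clear_road"
for _ in range(10_000):
    state = step(state)
```

After enough simulated trips, the table of scores favours braking for obstacles and otherwise staying the course, which is the trial-and-error improvement described above, just on a vastly smaller scale.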
“The first two steps, understanding the world and making decisions, are extremely difficult,” says Czarnecki. “That’s where machine learning comes in. It has allowed us to make tremendous progress.”
One key area of research at Waterloo, for instance, is focused on using AI to build on-board systems capable of operating in all kinds of Canadian weather, an especially challenging problem with endless variability.
A second team on the Autonomoose project is developing a computer model of the ring road on the Waterloo campus. The model will eventually include virtual cyclists, virtual pedestrians and other complications so the car's computer system can be trained on a variety of dicey situations.
Simulation allows researchers to intentionally create danger, including crashes, to speed up the learning process, but gaining enough confidence in Autonomoose to actually drive it in autonomous mode on a public road – a quiet, two-kilometre loop in an industrial park – isn’t expected to happen until this fall.
Researchers would be thrilled to build on that milestone, likely a first in Canada, by the end of the year: letting the car drive itself to campus from a test track several kilometres away, through signalled intersections and roundabouts on multi-lane city streets.
With even more daunting challenges ahead, such as how to train a car to decide between hitting an object that suddenly appears in its path and risking a rear-end collision by slamming on the brakes, Czarnecki is also dubious of the most optimistic time estimates for full commercial automation.
“Realistically, I think that within the next 10 years we will have some significant deployment of these cars on the road,” he says. “What will happen in the shorter term is really difficult to say.”
Despite the tremendous promise of AI, Fischmeister has a fundamental concern: understanding how the computer brains in control of vehicles will respond when confronted with new situations, as they inevitably will be.
As a result, his focus is on developing separate software to monitor those systems and put vehicles into safe mode – stopping or pulling over to the side of the road, for instance – when things seem to be going awry.
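In spirit, such a monitor is a small piece of conventional, rule-based software sitting beside the learned controller. The sketch below is a hypothetical illustration of the idea in Python; the signal names, thresholds and rules are invented for this example, not taken from Fischmeister's work.

```python
# Hedged sketch of a runtime safety monitor watching a learned controller.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float
    steering_deg: float
    obstacle_distance_m: float

def monitor(state: VehicleState, commanded_accel: float) -> str:
    """Return 'normal' or 'safe_mode' based on simple hand-written rules."""
    # Rule 1: never accelerate toward a close obstacle.
    if state.obstacle_distance_m < 10 and commanded_accel > 0:
        return "safe_mode"
    # Rule 2: reject implausibly sharp steering commands at highway speed.
    if state.speed_kmh > 60 and abs(state.steering_deg) > 30:
        return "safe_mode"
    return "normal"

# In safe mode, a simpler fallback controller would slow the car and pull over.
```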
“Computers are stupid,” says Fischmeister, also an electrical and computer engineering professor. “They only do what you tell them to do and nothing extra. It’s the same thing for systems that rely solely on learning. At the moment, they only behave based on what they have learned.
“I like technology and I believe learning-based systems are essential for autonomous vehicles, but I point out problems – and I’m curious how people will solve them.”