How UW Moose was used to create a self-driving platoon

Wednesday, April 10, 2024

Self-driving platoon experiment

We conducted an experiment to learn how human drivers follow a self-driving platoon. The results are published in our IEEE Transactions on Intelligent Transportation Systems journal article "Enhancing Safety in Mixed Traffic: Learning-Based Modeling and Efficient Control of Autonomous and Human-Driven Vehicles".

But how did we create a self-driving platoon using only one self-driving vehicle, the UW Moose?

We leveraged the mixed-reality capabilities of the WISE ADS and WISE Sim as follows:

  1. We launch the WISE Sim simulator in mixed-reality mode on UW Moose to execute the given scenario. The simulator produces a virtual vehicle that drives in front of UW Moose, following a precise trajectory and velocity profile. The simulated vehicle behaves exactly the same way every time the scenario is run, making the experimental setup fully repeatable.
  2. We launch the dynamic object detector on UW Moose to detect the human-driven vehicle following UW Moose.
  3. We engage self-driving on UW Moose, which then follows the virtual vehicle generated by WISE Sim in a consistent way, forming a repeatable self-driving platoon.
  4. The WISE ADS dynamic object tracker mixes the object detections coming from WISE Sim and the dynamic object detector, producing a set of tracks for both real and virtual dynamic objects.
  5. We record all the data of each scenario run, including the dynamic object tracks, allowing for downstream analysis. Human driver behavior is learned using Gaussian Process Regression.
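As a sketch of the final analysis step, car-following behavior can be modeled with Gaussian Process Regression. The feature choice (gap to the lead vehicle vs. follower speed) and the toy data below are illustrative assumptions, not the model from the paper:

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, length_scale=8.0, noise=1e-4):
    """Textbook GP regression: posterior mean and variance at X_test."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = rbf_kernel(X_test, X_test, length_scale).diagonal() - (v**2).sum(0)
    return mean, var

# Toy stand-in for the recorded tracks: gap to the lead vehicle [m]
# vs. follower speed [m/s]; the study fits real recorded trajectories.
X_train = np.array([[5.0], [10.0], [20.0], [30.0], [40.0]])
y_train = np.array([2.0, 5.0, 9.0, 11.0, 12.0])

mean, var = gp_predict(X_train, y_train, np.array([[25.0]]))
print(f"predicted follower speed at 25 m gap: {mean[0]:.2f} m/s")
```

The GP gives both a mean prediction and a variance, which is why it suits modeling the variability of human drivers.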

Example: traffic jam scenario

To test how human drivers follow a self-driving platoon in stop-and-go traffic, we executed the traffic jam scenario illustrated below. This scenario definition is executed by WISE Sim to produce virtual object detections of the principal other vehicle (POV). The scenario proceeds as follows:

  1. When EGO traverses the first trigger T, POV begins driving on POV_path1 until it stops at the end of the path.
  2. POV traverses the second trigger, which causes it to begin driving on POV_path2 after a few seconds' delay.
  3. Similarly, after POV stops at the end of POV_path2, the third trigger causes it to continue on POV_path3.
  4. EGO follows the POV and passes the scenario by traversing the first goal point g1.
  5. Both vehicles continue driving along the path, performing a sharp left turn, driving uphill, and slowing down for a speed bump (railway crossing), before reaching the final goal.
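The trigger logic above can be sketched as a small state machine. The class, method names, and the path/goal identifiers below mirror the scenario description for illustration only; they do not reflect WISE Sim's actual scenario format, and the few-seconds delay after a trigger is omitted:

```python
from dataclasses import dataclass

@dataclass
class TrafficJamScenario:
    # Ordered POV path segments; POV stops at the end of each one.
    paths: tuple = ("POV_path1", "POV_path2", "POV_path3")
    segment: int = -1          # -1: POV has not yet started driving
    done: bool = False

    def on_trigger(self, name: str) -> str:
        """Advance POV to its next path segment when a trigger fires."""
        if self.segment + 1 < len(self.paths):
            self.segment += 1
            return f"POV drives on {self.paths[self.segment]}"
        return "POV has no remaining segments"

    def on_goal(self, name: str) -> str:
        """EGO passes the scenario by traversing the first goal point g1."""
        if name == "g1":
            self.done = True
        return "scenario passed" if self.done else "continue"

scenario = TrafficJamScenario()
print(scenario.on_trigger("T"))    # EGO traverses the first trigger
print(scenario.on_trigger("T2"))   # POV traverses the second trigger
print(scenario.on_trigger("T3"))   # third trigger after POV stops
print(scenario.on_goal("g1"))      # EGO reaches the first goal point
```

Driving the same trigger sequence through such a definition is what makes every scenario run repeatable.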

Overview of the traffic jam scenario

The task of the human drivers was to follow the EGO while driving in their natural style. Since EGO is a self-driving vehicle, it sometimes behaves differently than a human-driven vehicle. Additionally, the human drivers could not see the virtual POV that EGO was following, so EGO's behavior was surprising when it stopped for no reason apparent to the human driver.

Below is drone footage of one such scenario run.

Remote video URL

Internal data view of UW Moose

The video below shows images from front- and back-facing cameras, lidar point clouds, and object tracks of the virtual POV in front and the human-driven vehicle in the back. The virtual vehicle (ID: 766) is not visible in the front camera, whereas the human-driven vehicle (ID: 612) is visible in the back camera. The dynamic object tracker produces object tracks for all real and virtual objects.

Remote video URL
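The tracker's mixing of real and virtual detections (step 4 of the setup) can be sketched as follows. The `Detection` type and `mix_detections` function are illustrative, not the WISE ADS interface; the track IDs echo the ones visible in the video:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int
    x: float       # longitudinal position in the EGO frame [m], + is ahead
    source: str    # "lidar" (real object) or "sim" (virtual object)

def mix_detections(real, simulated):
    """Combine real and simulated detections into one ordered track set,
    so downstream code is agnostic to whether an object is real or virtual."""
    return sorted(real + simulated, key=lambda d: d.x)

# Human-driven vehicle behind EGO (real), virtual POV ahead of EGO (simulated).
tracks = mix_detections(
    real=[Detection(612, -12.5, "lidar")],
    simulated=[Detection(766, 18.0, "sim")],
)
for t in tracks:
    print(t.track_id, t.source)
```

Because both streams end up in one track set, the rest of the ADS treats the virtual POV exactly like a real vehicle.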

Additional resources

  1. For more information about the mixed-reality testing mode, see the SAE journal article "Modes of Automated Driving System Scenario Testing: Experience Report and Recommendations" or watch the paper presentation video.
  2. Details of this experiment are published in an IEEE Transactions on Intelligent Transportation Systems journal article "Enhancing Safety in Mixed Traffic: Learning-Based Modeling and Efficient Control of Autonomous and Human-Driven Vehicles". The behavior of the human drivers is learned using Gaussian Process Regression.
  3. Jie Wang published "An Intuitive Tutorial to Gaussian Process Regression" in Computing in Science & Engineering (Volume 25, Issue 4, July-Aug. 2023).
  4. Project page WISE Automated Driving System and UW Moose.
  5. Project page WISE Sim.