Humans may be one of the biggest roadblocks keeping fully autonomous vehicles off city streets.
If a robot is going to navigate a vehicle safely through downtown Boston, it must be able to predict what nearby drivers, cyclists, and pedestrians are going to do next.
Behavior prediction is a tough problem, however, and current artificial intelligence solutions are either too simplistic (they may assume pedestrians always walk in a straight line), too conservative (to avoid pedestrians, the robot just leaves the car in park), or can only forecast the next moves of one agent (roads typically carry many users at once).
MIT researchers have devised a deceptively simple solution to this complicated challenge. They break a multiagent behavior prediction problem into smaller pieces and tackle each one individually, so a computer can solve this complex task in real time.
Their behavior-prediction framework first guesses the relationship between two road users — which car, cyclist, or pedestrian has the right of way and which agent will yield — and uses that relationship to predict future trajectories for multiple agents.
Measured against real traffic flow in an enormous dataset compiled by autonomous driving company Waymo, the estimated trajectories were more accurate than those from other machine-learning models. The MIT technique even outperformed Waymo’s recently published model. And because the researchers broke the problem into simpler pieces, their technique used less memory.
“This is a very intuitive idea, but no one has fully explored it before, and it works quite well. The simplicity is definitely a plus. We are comparing our model with other state-of-the-art models in the field, including the one from Waymo, the leading company in this area, and our model achieves top performance on this challenging benchmark. This has a lot of potential for the future,” says co-lead author Xin “Cyrus” Huang, a graduate student in the Department of Aeronautics and Astronautics and a research assistant in the lab of Brian Williams, professor of aeronautics and astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Joining Huang and Williams on the paper are three researchers from Tsinghua University in China: co-lead author Qiao Sun, a research assistant; Junru Gu, a graduate student; and senior author Hang Zhao PhD ’19, an assistant professor. The research will be presented at the Conference on Computer Vision and Pattern Recognition.
Multiple small models
The researchers’ machine-learning method, called M2I, takes two inputs: past trajectories of the cars, cyclists, and pedestrians interacting in a traffic setting such as a four-way intersection, and a map with street locations, lane configurations, etc.
Using this information, a relation predictor infers which of the two agents has the right of way, classifying one as a passer and one as a yielder. Then a prediction model, known as a marginal predictor, guesses the trajectory of the passing agent, since this agent behaves independently.
A second prediction model, known as a conditional predictor, then guesses what the yielding agent will do based on the actions of the passing agent. The system predicts a number of different trajectories for the yielder and passer, computes the probability of each one individually, and then selects the six joint results with the highest likelihood of occurring.
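To make that pipeline concrete, here is a minimal Python sketch of how those pieces could fit together. The predictor arguments, their signatures, and the candidate format are hypothetical stand-ins for the learned models described above, not the authors' actual implementation.

```python
def predict_joint_trajectories(agent_a, agent_b, scene_map,
                               relation_predictor, marginal_predictor,
                               conditional_predictor, top_k=6):
    """Sketch of an M2I-style two-agent prediction pipeline.

    agent_a, agent_b: past trajectories of the two interacting agents.
    scene_map: map context (street locations, lane configurations).
    The three predictor arguments are hypothetical callables standing
    in for learned models; the two trajectory predictors each return
    (trajectory, probability) candidate pairs.
    """
    # Step 1: classify the relationship -- who passes, who yields.
    passer, yielder = relation_predictor(agent_a, agent_b, scene_map)

    # Step 2: predict the passer's future on its own, since the
    # passing agent behaves independently.
    passer_candidates = marginal_predictor(passer, scene_map)

    # Step 3: predict the yielder's future conditioned on each
    # candidate passer trajectory, scoring each joint outcome by
    # the product of the two individually computed probabilities.
    joint_candidates = []
    for p_traj, p_prob in passer_candidates:
        for y_traj, y_prob in conditional_predictor(yielder, p_traj, scene_map):
            joint_candidates.append(((p_traj, y_traj), p_prob * y_prob))

    # Step 4: keep the six most likely joint predictions.
    joint_candidates.sort(key=lambda cand: cand[1], reverse=True)
    return joint_candidates[:top_k]
```

The payoff of this factorization is that no single model has to reason over the full joint space of both agents' futures: the marginal and conditional predictors each handle one agent, and a joint score is just a product of two probabilities.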
M2I outputs a prediction of how these agents will move through traffic for the next eight seconds. In one example, the method predicted that a vehicle would slow down so a pedestrian could cross the street, then speed up once the pedestrian had cleared the intersection. In another example, it predicted that a vehicle would wait until several cars had passed before turning from a side street onto a busy main road.
While this initial research focuses on interactions between two agents, M2I could infer relationships among many agents and then guess their trajectories by linking multiple marginal and conditional predictors.
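One plausible way to chain the pieces, reusing the hypothetical predictor stand-ins from the sketch above (here the conditional predictor is assumed to accept a list of influencer trajectories), is to order the agents so that each is predicted only after the agents that influence it:

```python
def predict_many_agents(histories, influence_order, scene_map,
                        marginal_predictor, conditional_predictor):
    """Hypothetical multi-agent extension of the two-agent sketch.

    histories: dict mapping agent id -> past trajectory.
    influence_order: list of (agent_id, influencer_ids) pairs, sorted
    so every agent appears after its influencers. For simplicity this
    version keeps only the single most likely trajectory per agent.
    """
    predicted = {}
    for agent_id, influencer_ids in influence_order:
        if not influencer_ids:
            # No one to yield to: this agent behaves independently.
            candidates = marginal_predictor(histories[agent_id], scene_map)
        else:
            # Condition on the already-predicted influencer trajectories.
            context = [predicted[i] for i in influencer_ids]
            candidates = conditional_predictor(histories[agent_id],
                                               context, scene_map)
        # Keep the highest-probability candidate trajectory.
        best_traj, _ = max(candidates, key=lambda cand: cand[1])
        predicted[agent_id] = best_traj
    return predicted
```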
Real-world driving tests
The researchers trained the models using the Waymo Open Motion Dataset, which contains millions of real traffic scenes involving vehicles, pedestrians, and cyclists recorded by lidar (light detection and ranging) sensors and cameras mounted on the company’s autonomous vehicles. They focused specifically on cases with multiple agents.
To determine accuracy, they compared each method’s six prediction samples, weighted by their confidence levels, to the actual trajectories followed by the cars, cyclists, and pedestrians in a scene. Their method was the most accurate. It also outperformed the baseline models on a metric known as overlap rate: if two predicted trajectories overlap, the prediction implies a collision. M2I had the lowest overlap rate.
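As a rough illustration of these two evaluation ideas, the sketch below computes a confidence-weighted displacement error and a pairwise overlap test. The exact metric definitions in the Waymo benchmark differ in detail, and the one-meter collision radius is an assumed threshold.

```python
import numpy as np

def weighted_displacement_error(pred_samples, confidences, ground_truth):
    """Confidence-weighted average displacement between predicted
    samples and the trajectory actually taken.

    pred_samples: (K, T, 2) array of K candidate (x, y) trajectories.
    confidences: (K,) model confidence for each sample.
    ground_truth: (T, 2) observed trajectory.
    """
    # Mean point-wise distance of each sample from the ground truth.
    errors = np.linalg.norm(pred_samples - ground_truth, axis=-1).mean(axis=-1)
    return float((confidences * errors).sum() / confidences.sum())

def trajectories_overlap(traj_a, traj_b, radius=1.0):
    """True if two predicted (T, 2) trajectories ever come within
    `radius` meters of each other, a proxy for a predicted collision.
    The overlap rate is the fraction of predictions where this holds."""
    dists = np.linalg.norm(traj_a - traj_b, axis=-1)
    return bool((dists < radius).any())
```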
“Rather than just building a more complex model to solve this problem, we took an approach that is more like how a human thinks when they reason about interactions with others. A human does not reason about all hundreds of combinations of future behaviors. We make decisions quite fast,” Huang says.
Another advantage of M2I is that, because it breaks the problem down into smaller pieces, it is easier for a user to understand the model’s decision making. In the long run, that could help users put more trust in autonomous vehicles, says Huang.
But the framework can’t account for cases where two agents are mutually influencing each other, like when two vehicles each nudge forward at a four-way stop because the drivers aren’t sure who should be yielding.
The researchers plan to address this limitation in future work. They also want to use their method to simulate realistic interactions between road users, which could be used to verify planning algorithms for self-driving cars or to create huge amounts of synthetic driving data that would improve model performance.
This research is supported, in part, by the Qualcomm Innovation Fellowship. Toyota Research Institute also provided funds to support this work.