Narrowing the coordinate-frame gap in behavior prediction models: Distillation for efficient and accurate scene-centric motion forecasting

DiJia (Andy) Su¹, Bertrand Douillard², Rami Al-Rfou², Cheolho Park², Benjamin Sapp²
¹Princeton University, ²Waymo LLC

Abstract — Behavior prediction models have proliferated in recent years, especially in the popular real-world robotics application of autonomous driving, where representing the distribution over possible futures of moving agents is essential for safe and comfortable motion planning. In these models, the choice of coordinate frames to represent inputs and outputs has crucial trade-offs which broadly fall into one of two categories. Agent-centric models transform inputs and perform inference in agent-centric coordinates. These models are intrinsically invariant to translation and rotation between scene elements, are best-performing on public leaderboards, but scale quadratically with the number of agents and scene elements. Scene-centric models use a fixed coordinate system to process all agents. This gives them the advantage of sharing representations among all agents, offering efficient amortized inference computation which scales linearly with the number of agents. However, these models have to learn invariance to translation and rotation between scene elements, and typically underperform agent-centric models. In this work, we develop knowledge distillation techniques between probabilistic motion forecasting models, and apply these techniques to close the gap in performance between agent-centric and scene-centric models. This improves scene-centric model performance by 13.2% on the public Argoverse benchmark, 7.8% on the Waymo Open Dataset, and up to 9.4% on a large In-House dataset. These improved scene-centric models rank highly on public leaderboards and are up to 15 times more efficient than their agent-centric teacher counterparts in busy scenes.

I. INTRODUCTION AND RELATED WORK

Predicting the future behavior of multiple vehicle, cyclist, and pedestrian agents in real-world driving scenes is a difficult but essential task for safe and comfortable motion planning for autonomous vehicles. This task is typically referred to as "motion forecasting" or "behavior prediction". It is challenging for a number of reasons. (1) The world state is heterogeneous, consisting of static and dynamic road network elements and dynamic agent state observations. (2) The outcomes depend heavily on multi-agent interactions. (3) The output distribution over possible futures is inherently uncertain and highly multi-modal due to latent agent intents. How to represent the input world state, interactions, and output distributions are all open questions and active areas of research.

In the last few years, there has been a proliferation of behavior prediction systems which address these modeling challenges, fueled by both the compelling promise of the autonomous vehicle industry and public benchmarks to compare methods [1]–[4]. One of the most interesting design choices, and the focus of this paper, is that of the coordinate frames used to represent input and output data.

Fig. 1. Approach overview. On the left, the teacher (agent-centric) model is repeatedly and independently applied to each agent in the scene, with all model inputs and outputs represented in each agent's ego-centric coordinate frame. On the right, the student (scene-centric) model is applied to the whole scene once, without requiring repeated computations per agent. While faster, a scene-centric formulation tends to be less accurate, since it also has to understand and model the per-agent invariance that is otherwise built into the agent-centric approach. To retain the computational efficiency of a scene-centric approach while benefiting from the accuracy of an agent-centric approach, we propose a knowledge distillation approach that uses the predicted trajectories of the agent-centric (teacher) model to train a scene-centric (student) model.
There are two distinct common choices, each with advantages and disadvantages.

Agent-centric models represent inputs and internal state in agent-centric coordinates, and perform inference in this frame³. The coordinates of road elements (e.g., lanes, crosswalks) and other agents' states are described relative to the agent's pose, thus the representation is inherently invariant to the global position and orientation of the agent. This can be considered a form of feature pre-processing that allows models to specialize to an agent's point of view, and in practice results in state-of-the-art performance on public benchmarks [5]–[9]. A key downside, however, becomes apparent when modeling many agents in a scene: each agent is modeled independently, thus computation is typically linear in the number of agents, and quadratic when modeling interactions [10]–[16]—for a scene with n agents and m road elements, the computation scales as O(n(n+m)). This is not an issue for public benchmarks, which require modeling fewer than ten agents at once [1]–[4], but it is a computational bottleneck for busy real-world urban environments consisting of hundreds of agents.

³ Without loss of generality, an agent-centric frame transforms world coordinates so that the origin is set to the ego-agent's center, and rotated so that the agent's heading direction is the unit vector (x, y) = (1, 0).

Scene-centric models, on the other hand, do the bulk of world state encoding in a shared, fixed frame for all agents⁴. Models that operate in this frame are typically "top-down" or "bird's-eye-view" representations which discretize the world into spatial grid cells and apply a convolutional neural net (CNN) backbone to encode the scene [17]–[23]—although non-raster scene-centric approaches also exist [24]–[26]. After such processing, the prediction head of these models decodes trajectories in the agent frame after a global-to-local transformation. A salient advantage of this formulation is that computation is primarily a function of the spatial grid resolution and field of view, rather than the number of agents—a spatial grid of size H × W cells would scale as O(HW + n), where the first term is processed with a CNN and dominates the second term in practical settings—see Fig. 2 for quantification. The downsides of this formulation are (1) loss of information when discretizing world state into a raster format, (2) difficulty in modeling long-range interactions with CNNs, and (3) the model must either learn rotation/translation invariance or learn to perform a global-to-local transformation for each agent when decoding.

⁴ Without loss of generality, this can be an arbitrary conceptual center of the scene elements.

In brief, agent-centric models outperform scene-centric models—borne out in public leaderboards, and likely explainable by the shortcomings described above. However, scene-centric models are compelling due to amortized sub-linear scaling with respect to the number of agents in the scene, which is particularly relevant in dense urban environments.
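To make the scaling argument above concrete, the following back-of-the-envelope sketch compares how the two formulations' encoder costs grow with the number of agents n. The constants (m road elements, grid size, unit cost per element) are illustrative assumptions, not measurements from this paper.

```python
# Back-of-the-envelope comparison of per-scene encoder cost growth under the
# O(n(n+m)) agent-centric vs. O(HW + n) scene-centric scaling discussed above.
# All constants below are illustrative assumptions.

def agent_centric_cost(n, m=200):
    # Each of the n agents re-encodes all m road elements and the other
    # agents in its own frame: O(n * (n + m)).
    return n * (n + m)

def scene_centric_cost(n, grid_h=200, grid_w=200):
    # One shared CNN pass over the H x W grid, plus a cheap per-agent decode:
    # O(HW + n). The grid term dominates in practice.
    return grid_h * grid_w + n

for n in (10, 50, 100, 300):
    ac, sc = agent_centric_cost(n), scene_centric_cost(n)
    print(f"n={n:4d}  agent-centric~{ac:8d}  scene-centric~{sc:8d}  ratio={ac / sc:5.2f}")
```

The agent-centric cost overtakes the (roughly constant) scene-centric cost as scenes get busier, which is the regime where the amortization matters most.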
In this paper, we propose a novel knowledge distillation method to narrow the performance gap between these two modeling approaches.

Knowledge distillation [27] is a popular and effective machine learning technique in domains like computer vision and natural language processing for transferring knowledge from a large model—the "teacher model"—to a smaller one—the "student model". The knowledge transfer mechanism, originally proposed for classification tasks, replaces training data groundtruth ("hard labels") with predictions from the teacher model ("soft labels"). The intuition is that these soft labels provide a more information-rich, smooth target space for the student model to learn from than the original data [28], [29]. Distillation has been extended beyond classification to sequence prediction tasks like Neural Machine Translation [30]. To our knowledge, however, distillation has never been applied to the domain of behavior prediction / motion forecasting. Although behavior prediction can be considered a sequence problem, a key difference is that we wish the predicted future distributions to cover the entire space of outcomes accurately, in contrast to the typical NLP task, which aims to generate a single realistic output. Hence transferring knowledge between a teacher and a student for motion forecasting is an open problem. Furthermore, for motion forecasting where the future is represented as a distribution of trajectories covering intent modes (the approach we adopt here, and the most common, e.g., [20], [31]–[33]), trajectory and mode diversity is crucial. One key challenge then is that distillation could be detrimental to diversity; this was investigated in [34] in the NLP domain.

In this work, we develop and empirically validate a variety of distillation approaches for behavior prediction. We then apply these techniques by setting the teacher to be a high-performance agent-centric model, and transfer knowledge to an efficient student scene-centric model. In doing so we significantly improve the performance of our scene-centric model while maintaining its computational efficiency benefits.

Contributions. The contributions of this paper are as follows:
• We systematically analyze latency and quality of agent-centric and scene-centric model approaches in a common framework. This supports our characterization of the coordinate-frame modeling choices in an empirically rigorous way.
• We are the first to develop and apply knowledge distillation techniques to the popular field of behavior prediction modeling.
• Applying our best distillation approach gives a remarkable boost in performance to our efficient scene-centric models on several large autonomous vehicle future prediction datasets. Compared to the non-distilled student model baseline, distillation improves performance by 13.2% on the Argoverse dataset, 7.8% on the Waymo Open Motion dataset, and up to 9.4% on key metrics of our In-House dataset.

II. BACKGROUND

Definition of the prediction problem. Let $x$ be the observations of all agents in the scene (in the form of past trajectories) and additional contextual information (such as lane semantics and traffic light states), $t$ be the discrete time step, and $s_t$ be the state of an agent at time $t$. The future trajectory $s = [s_1, \ldots, s_T]$ is the sequence of states of the agent up to time $T$.
We assume our model predicts $K$ trajectories, where each trajectory is a sequence of predicted states $s^k = [s^k_1, \ldots, s^k_T]$. For both agent-centric and scene-centric approaches, we consider the class of models whose output is a Gaussian distribution around each predicted trajectory:

$$\phi(s^k_t \mid x) = \mathcal{N}\big(s^k_t \mid \mu^k_t(x), \Sigma^k_t(x)\big) \qquad (1)$$

where $\mu^k_t$ is the mean and $\Sigma^k_t$ is the covariance of the Normal distribution. The mean and the covariance are learned parameters; the mean represents the mode of the distribution, which is the most likely state at time $t$.

We also model a probabilistic distribution over the predicted trajectories, which can be interpreted as the "confidence" in each predicted trajectory:

$$\pi(s^k \mid x) = \frac{e^{f_k(x)}}{\sum_i e^{f_i(x)}},$$

where $f_k(x) : \mathbb{R}^{d(x)} \to \mathbb{R}$ is the output parameterized by a neural network. Combining the two elements above we obtain the Gaussian Mixture Model (GMM) distribution:

$$p(s \mid x) = \sum_{k=1}^{K} \pi(s^k \mid x) \prod_{t=1}^{T} \phi(s_t \mid s^k, x) \qquad (2)$$

This makes the simplifying assumption that time steps are conditionally independent given a history of world state, allowing us to use an efficient feed-forward neural network. A typical number of output trajectories is on the order of $K = 10$. This type of output representation is a fairly popular approach, as in [20], [31], [35], [36].

A. Teacher model: Agent-Centric Model

We use an agent-centric coordinate frame model (ACM) to serve as the teacher. The agent-centric model encodes, processes, and reasons about the world from each individual agent's point of view. This representation requires a transformation of all scene information from the global coordinate frame into the agent's frame. Because of this, with the agent-centric approach, inference time and memory requirements increase with the number of agents.

Our ACM architecture is inspired by some of the best-performing design choices in the literature. It consumes the following four types of input: road graph information, traffic light information, motion history (i.e., agent state history), and agent interactions. For the road graph information, the ACM utilizes polylines to encode the road elements from a 3D high-definition map with an MLP (multi-layer perceptron), similar to [6], [12], [25]. For traffic light information, the ACM utilizes a separate LSTM as the encoder. For the motion history, the ACM uses an LSTM to handle a sequence of past observations, and the last iteration of the hidden state is used as the history embedding, as in [10], [16], [37], [38], to name a few. For agent interactions, we use an LSTM to encode the neighbors' motion history in an agent-centric frame, and aggregate all neighbors' information via max-pooling into a single interaction encoding. This is a simple form of fully-connected neighbor interaction modeling; other works have explicitly used GNNs [14], [25] and/or attention or max-pooling [6], [11], [13], [16]. Finally, these four encodings are concatenated together to create an embedding for each agent in the agent-centric coordinate frame. This final embedding is converted into a GMM (Eqn. 2) using an MLP-based decoder.
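As a minimal sketch of what such a GMM decoder head could look like (Eqns. 1–2), the snippet below maps a per-agent embedding to K modes with per-step means, scales, and a mode distribution. The diagonal-covariance parameterization, the single linear layer standing in for the MLP, and all sizes are illustrative assumptions, not the paper's exact head.

```python
import numpy as np

# Sketch of a GMM trajectory decoder head (Eqns. 1-2): a per-agent embedding
# is mapped to K modes, each with T (mu_x, mu_y) means, per-step scales, and
# a mode logit. Shapes and the diagonal-covariance choice are assumptions.

K, T, D_EMB = 6, 80, 128          # modes, future steps, embedding size (assumed)
rng = np.random.default_rng(0)

# Stand-in for a learned MLP: one linear layer producing all GMM parameters.
W = rng.normal(scale=0.01, size=(D_EMB, K * (T * 4 + 1)))

def decode_gmm(agent_embedding):
    """Return mode probabilities pi (K,), means mu (K,T,2), scales sigma (K,T,2)."""
    raw = agent_embedding @ W
    logits, rest = raw[:K], raw[K:].reshape(K, T, 4)
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                          # softmax over modes (Eqn. 2 weights)
    mu = rest[..., :2]                      # per-step means mu_t^k
    sigma = np.exp(rest[..., 2:])           # positive std-devs for Sigma_t^k
    return pi, mu, sigma

pi, mu, sigma = decode_gmm(rng.normal(size=D_EMB))
print(pi.shape, mu.shape, sigma.shape)      # (6,) (6, 80, 2) (6, 80, 2)
```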
B. Student model: Scene-Centric Model

For the student we use a scene-centric coordinate frame model (SCM). In our SCM architecture, the input data is represented in a global coordinate frame that is shared across all agents. As mentioned above, one of the benefits of this formulation is that the scene can be processed as a whole, resulting in efficient inference whose cost is largely invariant to the number of agents.

The SCM consumes three types of inputs: road information represented as points augmented with semantic attributes, agent information in the form of points sampled from each agent's oriented box, and traffic light information, also represented as points augmented with semantic attributes. The SCM encodes all these input points with a PointPillars encoder [39] followed by a 2D convolutional backbone [40]. A final per-agent embedding is extracted by cropping a patch out of the feature map at the location corresponding to the agent's current position in the scene, as in [20]. Note that even though we end up with a per-agent embedding, all of the upstream processing is done for the full scene at once. The final per-agent embedding is transformed into a GMM (Eqn. 2) using an MLP-based decoder, as for the ACM.

Fig. 2 provides inference speed comparisons between the SCM and ACM. As shown in the figure, the inference speed difference gets progressively larger as the number of agents in the scene increases, showing that the ACM does not scale well. Despite the SCM's fast inference speed, we observe that it generally underperforms the ACM. We see this trend in public leaderboards, where agent-centric models tend to dominate (see Sec. IV), and we also see it when directly comparing the ACM and SCM architectures described in this section. To get the best of both worlds (fast inference speed and good prediction accuracy), we now discuss using knowledge distillation from a slower but accurate teacher (ACM) to improve a faster but less accurate student (SCM).

Fig. 2. Inference speed comparison between ACM (teacher) and SCM (student).

Learning Objective. Let the training data be of the form $\{x_m, \hat{s}_m\}_{m=1}^{M}$, with $\hat{s}_m$ the groundtruth trajectory and $\pi(s^k \mid x)$, $\mu^k_t(x)$, $\Sigma^k_t(x)$ the outputs of a deep neural network parameterized by $\theta$. For both the ACM and SCM, we train to maximize the log-likelihood of recorded driving trajectories, following [20]:

$$\mathcal{L}_{\text{base}}(\theta) = -\sum_{m=1}^{M} \sum_{k=1}^{K} \mathbb{1}(k = \hat{k}_m) \Big( \log \pi(s^k \mid x_m; \theta) + \sum_{t=1}^{T} \log \mathcal{N}(s^k_t \mid \mu^k_t, \Sigma^k_t; x_m, \theta) \Big), \qquad (3)$$

where $\mathbb{1}(\cdot)$ is the indicator function and $\hat{k}_m$ is the index of the predicted trajectory closest to the ground truth $\hat{s}_m$ in terms of L2 distance. The first term in the loss fits the likelihood of each $k$-th predicted trajectory (by making the closest-to-ground-truth predicted trajectory the most probable one), and the second term is a time-sequence extension of standard GMM likelihood fitting [41]. The advantage of training the network according to Eqn. 3 is that it avoids the expectation-maximization procedure and the intractability of directly fitting the GMM likelihood.
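The following is a minimal sketch of Eqn. 3 for a single training example, assuming the diagonal-covariance GMM head sketched earlier. The hard assignment of the ground truth to its closest predicted mode follows the text; the diagonal Gaussian log-density is a simplifying assumption of this sketch.

```python
import numpy as np

# Sketch of the base loss (Eqn. 3) for one example: pick the mode closest to
# the ground truth, then penalize its log-confidence and per-step Gaussian
# log-densities. Diagonal covariances are an assumption of this sketch.

def base_loss(pi, mu, sigma, gt):
    """pi: (K,), mu/sigma: (K,T,2), gt: (T,2). Returns the per-example loss."""
    # Index of the predicted trajectory closest to ground truth (mean L2 over steps).
    k_hat = np.argmin(np.linalg.norm(mu - gt[None], axis=-1).mean(axis=-1))
    # Log-likelihood of the chosen mode's confidence.
    log_pi = np.log(pi[k_hat] + 1e-9)
    # Per-step diagonal Gaussian log-density of the ground truth under mode k_hat.
    z = (gt - mu[k_hat]) / sigma[k_hat]
    log_n = -0.5 * (z ** 2) - np.log(sigma[k_hat]) - 0.5 * np.log(2 * np.pi)
    return -(log_pi + log_n.sum())

K, T = 6, 80
rng = np.random.default_rng(1)
print(base_loss(np.full(K, 1 / K),
                rng.normal(size=(K, T, 2)),
                np.ones((K, T, 2)),
                rng.normal(size=(T, 2))))
```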
III. DISTILLATION METHODS

In this section, we describe the distillation techniques we developed for trajectory-based behavior prediction⁵. While we use these methods to distill from ACMs to SCMs in this work, they can be applied to any trajectory-based behavior prediction model. An overview of our approach is provided in Figure 1.

⁵ Note that an alternative representation for future behavior, probabilistic occupancy grids (or "heatmaps"), could be considered, and possibly simpler distillation approaches could be developed for it. However, heatmap representations are significantly less common in the literature, and more importantly, public benchmarks and their metrics specifically require trajectory-based representations.

To facilitate the presentation, we use the following notation for the teacher model. We denote $\bar{\theta}$ as the teacher network parameters, $\xi^k$ as the $k$-th predicted trajectory output from the teacher, and $\Pi(\xi^k \mid x)$ as the trajectory likelihood distribution from the teacher (analogous to $\pi(s^k \mid x)$ for the student). Lastly, we denote $H(\cdot, \cdot)$ as the cross-entropy function and $D_{\mathrm{KL}}(\cdot \| \cdot)$ as the KL-divergence function. Our distillation approaches are as follows.

A. Trajectory Set Distillation

In this distillation approach, we train our student model to match the full trajectory set output from the teacher. Recall that the full output representation of our models is a GMM; ignoring the covariances and taking the mode of each component gives us our trajectory set. The weights over this trajectory set are given by $\pi$.

The distillation loss has two parts. For the first part, we use the teacher's predicted trajectories (all $K$ of them) as multiple pseudo-groundtruth trajectories for training the student. Here, we want the $k$-th teacher trajectory to be maximally likely under the learned distribution for the student's corresponding $k$-th mode. For the second part, we impose a cross-entropy loss to encourage the student's trajectory mode distribution $\pi$ to match the teacher's mode distribution $\Pi$:

$$\mathcal{L}_{\text{distill}}(\theta) = -\sum_{m=1}^{M} \sum_{k=1}^{K} \Big( -H\big(\pi(s^k \mid x_m; \theta), \Pi(\xi^k \mid x_m; \bar{\theta})\big) + \sum_{t=1}^{T} \log \mathcal{N}(\xi^k_t \mid \mu^k_t, \Sigma^k_t; x_m, \theta) \Big) \qquad (4)$$

The full loss function is formed by adding $\mathcal{L}_{\text{distill}}$ on top of the original loss $\mathcal{L}_{\text{base}}$ (Eqn. 3):

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{distill}}(\theta) + \lambda \, \mathcal{L}_{\text{base}}(\theta). \qquad (5)$$

Note that $\mathcal{L}_{\text{distill}}$ does not contain the term $\mathbb{1}(k = \hat{k}_m)$, in contrast to $\mathcal{L}_{\text{base}}$: for $\mathcal{L}_{\text{distill}}$ we match all $K$ predicted trajectories to the teacher's predicted trajectories, while for $\mathcal{L}_{\text{base}}$ we only optimize over one trajectory (the real observed future groundtruth). One added benefit of this distillation formulation is that training includes additional information in the form of additional soft labels to learn from, that is, an additional $K - 1$ predicted trajectories with the associated distribution over them. One implementation detail to note is that this approach imposes a correspondence between each of the $K$ teacher trajectories and the $K$ student trajectories, which constrains the set of possible solutions for the student by removing equivalent solutions under permutation.

We use the hyper-parameter $\lambda$ to optionally disable the base loss for a certain number of warm-up steps, which pre-trains the model with the distillation loss only. In our experiments we cross-validate whether we (i) set $\lambda = 1$ for all of training (i.e., no pre-training), or (ii) set $\lambda = \mathbb{1}(\text{step} \geq \text{total steps}/4)$, i.e., pre-train for 25% of the total training iterations.
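A minimal sketch of the Trajectory Set distillation loss (Eqns. 4–5) is given below: every teacher trajectory is treated as a pseudo-groundtruth for the student's corresponding mode, plus a cross-entropy term pulling the student's mode weights toward the teacher's. Diagonal covariances and the way it would combine with the base loss from the previous sketch are assumptions of this illustration.

```python
import numpy as np

# Sketch of Trajectory Set distillation (Eqns. 4-5). Teacher trajectories xi_t
# act as pseudo-groundtruth for the student's matching modes; a cross-entropy
# term matches the mode distributions. Diagonal covariances are assumed.

def distill_set_loss(pi_s, mu_s, sigma_s, Pi_t, xi_t):
    """pi_s/Pi_t: (K,) mode weights, mu_s/sigma_s: (K,T,2), xi_t: (K,T,2) teacher trajectories."""
    # Cross entropy between teacher and student mode distributions.
    ce = -np.sum(Pi_t * np.log(pi_s + 1e-9))
    # Log-likelihood of each teacher trajectory under the student's k-th mode.
    z = (xi_t - mu_s) / sigma_s
    log_n = -0.5 * (z ** 2) - np.log(sigma_s) - 0.5 * np.log(2 * np.pi)
    return ce - log_n.sum()

def total_loss(distill, base, step, total_steps, warmup=True):
    # Eqn. 5: lambda optionally disables the base loss during a warm-up phase.
    lam = 1.0 if (not warmup or step >= total_steps / 4) else 0.0
    return distill + lam * base

K, T = 6, 80
rng = np.random.default_rng(3)
pi = np.full(K, 1 / K)
d = distill_set_loss(pi, rng.normal(size=(K, T, 2)), np.ones((K, T, 2)), pi, rng.normal(size=(K, T, 2)))
print(total_loss(d, base=0.0, step=100_000, total_steps=1_000_000))
```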
B. Trajectory Sample Distillation

As an alternative to using multiple trajectories as pseudo-groundtruth, as described above, we sample a single trajectory from the teacher's distribution to be the groundtruth for the student: $\xi^k_{\text{sampled}} \sim \Pi(\xi^k \mid x_m; \bar{\theta})$. We call this sampled teacher trajectory the proxy groundtruth label. We then directly optimize over this proxy groundtruth (instead of the true groundtruth label). Mathematically, this is expressed as follows:

$$\mathcal{L}_{\text{sampled}}(\theta) = -\sum_{m=1}^{M} \sum_{k=1}^{K} \mathbb{1}(k = \hat{k}^{\text{sampled}}_m) \Big( \log \pi(s^k \mid x_m; \theta) + \sum_{t=1}^{T} \log \mathcal{N}(s^k_t \mid \mu^k_t, \Sigma^k_t; x_m, \theta) \Big) \qquad (6)$$

In expectation, over infinite samples, this loss is equivalent to requiring the student's weighted trajectory set to match the teacher's. While this formulation stands out for its simplicity—it has the same form as $\mathcal{L}_{\text{base}}(\theta)$—it does not encourage the full GMM distributions of the teacher and student to match; that is addressed next.

C. Trajectory Distribution Distillation

In this last formulation, the loss directly encourages the student's full GMM output to match the teacher's GMM. As in Trajectory Set Distillation, we force correspondence between the teacher's and student's $k$-th trajectories (for all $k$) to avoid permutation ambiguity in the student's solution space. To match distributions, we use a cross-entropy loss between the discrete mode distributions of the student and teacher ($\pi$ and $\Pi$), and a KL-divergence for each Gaussian distribution ($\mathcal{N}_t$ for the student, $\bar{\mathcal{N}}_t$ for the teacher) in the trajectory sequences:

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{base}}(\theta) + \sum_{m=1}^{M} \sum_{k=1}^{K} \Big( H(\pi, \Pi) + \sum_{t=1}^{T} D_{\mathrm{KL}}(\mathcal{N}_t \,\|\, \bar{\mathcal{N}}_t) \Big) \qquad (7)$$

IV. EXPERIMENTS

We ran our experiments on the WOMD, Argoverse, and In-House datasets. The results are shown in Tables I, II, and III. The best numbers across each entire table are highlighted in bold; the best methods among the SCM variants are marked in blue, and the second best in orange. In Tables I and II, the first section of rows shows the results of the models top-ranked on the corresponding public leaderboards.

Method | Coord. frame | Test set rank | mAP (↑) | minADE (↓) | minFDE (↓) | Miss Rate (↓) | Avg. improvement
Scene-Transformer [24] | scene | 4th | 0.337 | 0.678 | 1.376 | 0.198 | –
Multipath++ | agent | 1st | 0.401 | 0.569 | 1.194 | 0.143 | –
ACM (teacher) | agent | – | 0.329 | 0.676 | 1.488 | 0.178 | –
SCM baseline | scene | – | 0.322 | 0.757 | 1.691 | 0.205 | –
SCM+Distill Set | scene | 3rd | 0.349 | 0.710 | 1.569 | 0.186 | 7.8%
SCM+Distill Sample | scene | – | 0.320 | 0.742 | 1.643 | 0.194 | 4.2%
SCM+Distill Distr. | scene | – | 0.330 | 0.758 | 1.681 | 0.199 | 1.5%

TABLE I. MODEL DISTILLATION PERFORMANCE ON THE WOMD TEST SET.

For both the teacher and student models, we train end-to-end using the Adam optimizer with a learning rate of 5 × 10⁻⁴. We use gradient clipping with a threshold of 10 to prevent gradient explosion. We train all models for 1M training steps, and we submit to the leaderboard the best model based on its performance on the validation set. After cross-validation we set $\lambda = \mathbb{1}(\text{step} \geq \text{total steps}/4)$ for the WOMD and $\lambda = 1$ for the In-House and Argoverse datasets. The teacher implementation uses off-the-shelf components such as polylines and LSTMs, as described in Sec. II-A. The student models use an EfficientDet-d2 backbone, a 200 × 200 PointPillars grid with a 2 meter resolution, and a PointPillars embedding size of 64.

Metrics. We follow the Argoverse benchmark and use the following metrics for evaluation: minimum average displacement error (minADE), minimum final displacement error (minFDE), and miss rate (MR). Besides these, additional metrics are provided for each dataset: mAP (mean Average Precision) for the WOMD, brier-minFDE (which scales the minFDE with the prediction probability) for Argoverse, and wADE (probability-weighted Average Displacement Error) for the In-House dataset. Where a choice of K is required to define the top K trajectories used for evaluating a metric (for example minADE(K = k) on Argoverse), we use k = 6.
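For reference, the sketch below shows how the displacement metrics above can be computed for a single agent over the top-K predicted trajectories. The 2.0 m miss-rate threshold is an illustrative assumption; each benchmark defines its own (possibly horizon- or velocity-dependent) criteria.

```python
import numpy as np

# Sketch of minADE / minFDE / miss rate for one agent, evaluated over the
# top-K predicted trajectories. The 2.0 m miss threshold is an assumption.

def displacement_metrics(pred, gt, miss_threshold=2.0):
    """pred: (K,T,2) predicted trajectories, gt: (T,2). Returns (minADE, minFDE, miss)."""
    dists = np.linalg.norm(pred - gt[None], axis=-1)   # (K, T) per-step errors
    min_ade = dists.mean(axis=-1).min()                # best average error over K modes
    min_fde = dists[:, -1].min()                       # best final-step error over K modes
    miss = float(min_fde > miss_threshold)             # 1.0 if every mode misses
    return min_ade, min_fde, miss

rng = np.random.default_rng(2)
print(displacement_metrics(rng.normal(size=(6, 80, 2)), rng.normal(size=(80, 2))))
```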
Datasets. The Waymo Open Motion Dataset (WOMD) [1] is an open-source behavior prediction dataset from Waymo. It contains 570 hours of data over 1750 km of driving distance, with more than 100,000 scenes that are on average about 20 seconds long. Our results on this dataset are summarized in Table I, and we report the rank in terms of mAP. The Argoverse Motion Forecasting Competition is an open-source trajectory prediction dataset with more than 300,000 curated scenarios [2], each sequence containing one target vehicle for prediction. Our results on this dataset are summarized in Table II, and ranks are reported in terms of minADE. The In-House Dataset is a large-scale real-world dataset of driving scenes in various urban and suburban environments within the US. It is collected by vehicles equipped with an industry-grade sensor and perception stack, and it provides detailed logs of tracked objects. Results on this dataset are a valuable addition to the public benchmark results due to its size and quality—over 13 million training samples with richer HD map and state information. Our distillation results on this dataset are summarized in Table III.

A. Discussion

Some clear trends emerge from our results. Across datasets, distillation improves the student model's performance significantly—from 4.6% to 13.2% average relative improvement across metrics. The Trajectory Set distillation method worked best across datasets. Interestingly, Trajectory Sample distillation worked better only on Argoverse. The major difference with Trajectory Sample distillation is that it trains with a single trajectory groundtruth rather than trying to learn a trajectory set or full distribution like the other methods. The Argoverse dataset also stands out from the other datasets in that it is smaller, has a shorter prediction horizon, and has less diverse driving behavior [1]. Lastly, Distribution Distillation did not work as well as the other distillation methods across all three datasets. We hypothesize that this form of distillation task was too constrained: matching GMM distributions via KL-divergence is more difficult to achieve than simply maximizing the likelihood of pseudo-groundtruth (as in Trajectory Set and Trajectory Sample distillation).

Another trend from our experiments is that agent-centric models outperform scene-centric models, in our own implementations as well as in related works. This was our original motivation for this work, and the results presented here provide further justification for investigating distillation approaches. However, there is still room for improvement in efficient, scene-centric models, since the gap has not been fully closed by our distillation techniques.

Lastly, we want to highlight that our models are competitive on public leaderboards in an absolute sense. On the WOMD, our best distilled model⁶ is ranked 3rd, and to our knowledge is the best-performing scene-centric (and thus efficient) model. On the Argoverse leaderboard, our best distilled model ranks 17th, whereas other known, popular scene-centric models rank 35th and 58th. Illustrations of improvements provided by the distilled models on a variety of urban driving scenarios are shown in Figure 3.

⁶ Our best distillation model is named MPG-Distil(pretrain) on the WOMD public leaderboard.
Method | Coord. frame | Test set rank | brier-minFDE (↓) | minFDE (↓) | MR (↓) | minADE (↓) | Avg. improvement
LaneRCNN [26] | scene | 58th | 2.147 | 1.453 | 0.123 | 0.904 | –
LGN [42] | scene | 35th | 2.059 | 1.364 | 0.163 | 0.868 | –
mmTransformer [43] | agent | 15th | 2.033 | 1.338 | 0.154 | 0.844 | –
TPCN [44] | agent | 6th | 1.929 | 1.244 | 0.133 | 0.815 | –
poly | agent | 1st | 1.793 | 1.214 | 0.132 | 0.790 | –
ACM (teacher) | agent | – | 1.906 | 1.280 | 0.147 | 0.816 | –
SCM baseline | scene | – | 2.206 | 1.588 | 0.225 | 0.931 | –
SCM+Distill Set | scene | – | 2.052 | 1.416 | 0.180 | 0.868 | 11.1%
SCM+Distill Sample | scene | 17th | 2.017 | 1.383 | 0.173 | 0.853 | 13.2%
SCM+Distill Distr. | scene | – | 2.345 | 1.723 | 0.254 | 0.980 | -8.2%

TABLE II. MODEL DISTILLATION PERFORMANCE ON THE ARGOVERSE TEST SET.

Method | Coord. frame | wADE (↓) | minADE (↓) | minFDE (↓) | Miss Rate (↓) | Avg. improvement
ACM (teacher) | agent | 1.200 | 0.524 | 1.145 | 0.335 | –
SCM baseline | scene | 1.270 | 0.558 | 1.532 | 0.357 | –
SCM+Distill Set | scene | 1.190 | 0.545 | 1.526 | 0.324 | 4.6%
SCM+Distill Sample | scene | 1.270 | 0.567 | 1.602 | 0.345 | -0.7%
SCM+Distill Distr. | scene | 1.220 | 0.608 | 1.789 | 0.358 | -5.5%

TABLE III. MODEL DISTILLATION PERFORMANCE ON THE IN-HOUSE TEST SET.

Fig. 3. Illustration of improvements provided by the distilled models on the WOMD. For each example, the sub-figure on the left shows predictions for the SCM baseline while the sub-figure on the right shows predictions for our SCM+Distill Set. The purple markers show the groundtruth while the red markers show the trajectory of the Autonomous Driving Vehicle (ADV). The predicted trajectories are shown in blue (the darker the blue, the higher the confidence). In different scenarios (parking lots, various types of traffic intersections) we see that the non-distilled baseline misses the groundtruth trajectory and predicts a left or right turn while the groundtruth goes straight, or vice versa. In contrast, after applying the distillation techniques proposed in this paper, the model more accurately predicts the groundtruth.

V. CONCLUSIONS

In this paper, we develop novel knowledge distillation techniques to bridge the coordinate-frame gap in behavior prediction models. We use an agent-centric model as a teacher to improve the accuracy of an otherwise more efficient scene-centric model. Our method improves the performance of the scene-centric model by 13.2% on the public Argoverse benchmark, 7.8% on the public Waymo Open Dataset, and up to 9.4% on a large In-House dataset. The resulting improved scene-centric models are also up to 15 times faster than their agent-centric distillation counterparts in busy urban scenes.

REFERENCES

[1] S. Ettinger, S. Cheng, B. Caine, C. Liu, H. Zhao, S. Pradhan, Y. Chai, B. Sapp, C. R. Qi, Y. Zhou, et al., "Large scale interactive motion forecasting for autonomous driving: The Waymo Open Motion Dataset," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9710–9719.
[2] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, et al., "Argoverse: 3D tracking and forecasting with rich maps," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8748–8757.
[3] W. Zhan, L. Sun, D. Wang, H. Shi, A. Clausse, M. Naumann, J. Kummerle, H. Konigshof, C. Stiller, A. de La Fortelle, et al., "Interaction dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps," arXiv preprint arXiv:1910.03088, 2019.
[4] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, "nuScenes: A multimodal dataset for autonomous driving," 2020.
[5] C. Tang and R. R. Salakhutdinov, "Multiple futures prediction," Advances in Neural Information Processing Systems, vol. 32, pp. 15424–15434, 2019.
[6] J. Gao, C. Sun, H. Zhao, Y. Shen, D. Anguelov, C. Li, and C. Schmid, "VectorNet: Encoding HD maps and agent dynamics from vectorized representation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11525–11533.
[7] J. Hong, B. Sapp, and J. Philbin, "Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8454–8462.
[8] J. Mercat, T. Gilles, N. El Zoghby, G. Sandou, D. Beauvois, and G. P. Gil, "Multi-head attention for multi-modal joint vehicle motion forecasting," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 9638–9644.
[9] N. Rhinehart, K. M. Kitani, and P. Vernaza, "R2P2: A reparameterized pushforward policy for diverse, precise generative path forecasting," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 772–788.
[10] J. Mercat, T. Gilles, N. Zoghby, G. Sandou, D. Beauvois, and G. Gil, "Multi-head attention for joint multi-modal vehicle motion forecasting," in IEEE Intl. Conf. on Robotics and Automation, 2020.
[11] H. Zhao, J. Gao, T. Lan, C. Sun, B. Sapp, B. Varadarajan, Y. Shen, Y. Shen, Y. Chai, C. Schmid, et al., "TNT: Target-driven trajectory prediction," arXiv preprint arXiv:2008.08294, 2020.
[12] S. Khandelwal, W. Qi, J. Singh, A. Hartnett, and D. Ramanan, "What-if motion prediction for autonomous driving," ArXiv, 2020.
[13] C. Tang and R. R. Salakhutdinov, "Multiple futures prediction," in NeurIPS, 2019.
[14] S. Casas, C. Gulino, R. Liao, and R. Urtasun, "SpAGNN: Spatially-aware graph neural networks for relational behavior forecasting from sensor data," in IEEE Intl. Conf. on Robotics and Automation. IEEE, 2020.
[15] N. Rhinehart, R. McAllister, K. Kitani, and S. Levine, "PRECOG: Prediction conditioned on goals in visual multi-agent settings," in Intl. Conf. on Computer Vision, 2019.
[16] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone, "Trajectron++: Multi-agent generative trajectory forecasting with heterogeneous data for control," arXiv preprint arXiv:2001.03093, 2020.
[17] N. Lee, W. Choi, P. Vernaza, C. B. Choy, P. H. Torr, and M. Chandraker, "DESIRE: Distant future prediction in dynamic scenes with interacting agents," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 336–345.
[18] M. Bansal, A. Krizhevsky, and A. Ogale, "ChauffeurNet: Learning to drive by imitating the best and synthesizing the worst," arXiv preprint arXiv:1812.03079, 2018.
[19] S. Casas, W. Luo, and R. Urtasun, "IntentNet: Learning to predict intention from raw sensor data," in Conference on Robot Learning. PMLR, 2018, pp. 947–956.
[20] Y. Chai, B. Sapp, M. Bansal, and D. Anguelov, "MultiPath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction," in Conference on Robot Learning, 2019.
[21] T. Phan-Minh, E. C. Grigore, F. A. Boulton, O. Beijbom, and E. M. Wolff, "CoverNet: Multimodal behavior prediction using trajectory sets," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14074–14083.
[22] Y. Yuan and K. M. Kitani, "Diverse trajectory forecasting with determinantal point processes," in International Conference on Learning Representations, 2019.
[23] T. Buhet, E. Wirbel, A. Bursuc, and X. Perrotton, "PLOP: Probabilistic polynomial objects trajectory planning for autonomous driving," 2020.
[24] J. Ngiam, B. Caine, V. Vasudevan, Z. Zhang, H.-T. L. Chiang, J. Ling, R. Roelofs, A. Bewley, C. Liu, A. Venugopal, et al., "Scene transformer: A unified multi-task model for behavior prediction and planning," arXiv preprint arXiv:2106.08417, 2021.
[25] M. Liang, B. Yang, R. Hu, Y. Chen, R. Liao, S. Feng, and R. Urtasun, "Learning lane graph representations for motion forecasting," arXiv preprint arXiv:2007.13732, 2020.
[26] W. Zeng, M. Liang, R. Liao, and R. Urtasun, "LaneRCNN: Distributed representations for graph-centric motion forecasting," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2021.
[27] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," stat, vol. 1050, p. 9, 2015.
[28] M. Phuong and C. Lampert, "Towards understanding knowledge distillation," in International Conference on Machine Learning. PMLR, 2019, pp. 5142–5151.
[29] X. Cheng, Z. Rao, Y. Chen, and Q. Zhang, "Explaining knowledge distillation by quantifying the knowledge," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12925–12935.
[30] Y. Kim and A. M. Rush, "Sequence-level knowledge distillation," in EMNLP, 2016.
[31] C. Tang and R. R. Salakhutdinov, "Multiple futures prediction," in NeurIPS, 2019.
[32] T. Zhao, Y. Xu, M. Monfort, W. Choi, C. Baker, Y. Zhao, Y. Wang, and Y. N. Wu, "Multi-agent tensor fusion for contextual trajectory prediction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 12126–12134.
[33] T. Phan-Minh, E. C. Grigore, F. A. Boulton, O. Beijbom, and E. M. Wolff, "CoverNet: Multimodal behavior prediction using trajectory sets," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14074–14083.
[34] C. Zhou, G. Neubig, and J. Gu, "Understanding knowledge distillation in non-autoregressive machine translation," ArXiv, vol. abs/1911.02727, 2020.
[35] J. Hong, B. Sapp, and J. Philbin, "Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions," in CVPR, 2019.
[36] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone, "Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data," in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII. Springer, 2020, pp. 683–700.
[37] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi, "Social GAN: Socially acceptable trajectories with generative adversarial networks," in CVPR, 2018.
[38] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, "Social LSTM: Human trajectory prediction in crowded spaces," in CVPR, 2016.
[39] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, "PointPillars: Fast encoders for object detection from point clouds," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 12697–12705.
[40] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning. PMLR, 2019, pp. 6105–6114.
[41] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics). Berlin, Heidelberg: Springer-Verlag, 2006.
[42] M. Liang, B. Yang, R. Hu, Y. Chen, R. Liao, S. Feng, and R. Urtasun, "Learning lane graph representations for motion forecasting," in European Conference on Computer Vision. Springer, 2020, pp. 541–556.
[43] Y. Liu, J. Zhang, L. Fang, Q. Jiang, and B. Zhou, "Multimodal motion prediction with stacked transformers," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 7577–7586.
[44] M. Ye, T. Cao, and Q. Chen, "TPCN: Temporal point cloud networks for motion forecasting," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11318–11327.