【Paper】PDFormer

PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for Traffic Flow Prediction

Abstract

As a core technology of Intelligent Transportation Systems, traffic flow prediction has a wide range of applications. The fundamental challenge in traffic flow prediction is to effectively model the complex spatial-temporal dependencies in traffic data.

Spatial-temporal Graph Neural Network (GNN) models have emerged as one of the most promising methods to solve this problem.

However, GNN-based models have three major limitations for traffic prediction:

  1. Most methods model spatial dependencies in a static manner, which limits the ability to learn dynamic urban traffic patterns;
  • "Static manner": static modeling refers to assuming that the spatial dependencies at a particular point in time are fixed and unchanging.
  • These graph-convolution-style methods rely on a static graph structure; "static modeling" means the relationships are treated as invariant, i.e., they do not change or evolve over time. Consequently, such methods cannot capture the dynamic evolution of spatial dependencies and may be unsuitable for scenarios where that evolution matters.
  • This therefore limits the learning of dynamic urban traffic patterns.
  2. Most methods only consider short-range spatial information and are unable to capture long-range spatial dependencies;
  • Most methods only consider short-range spatial relations and cannot capture longer-range spatial dependencies.
  3. These methods ignore the fact that the propagation of traffic conditions between locations has a time delay in traffic systems.
  • These methods overlook the fact that, in a traffic system, the propagation of traffic conditions between different locations involves a time delay.
To this end, we propose a novel Propagation Delay-aware dynamic long-range transFormer, namely PDFormer, for accurate traffic flow prediction. Specifically, we design a spatial self-attention module to capture the dynamic spatial dependencies. Then, two graph masking matrices are introduced to highlight spatial dependencies from short- and long-range views.

  • We propose a novel propagation delay-aware, dynamic long-range Transformer, called PDFormer.
  • A spatial self-attention module is designed to capture dynamic spatial dependencies.
  • Two graph masking matrices are used to emphasize short- and long-range spatial dependencies; my guess is that they weight nearby and distant nodes differently, to be verified later.

Moreover, a traffic delay-aware feature transformation module is proposed to empower PDFormer with the capability of explicitly modeling the time delay of spatial information propagation.

  • The delay-aware feature transformation module equips PDFormer with the ability to explicitly model the propagation delay.

Extensive experimental results on six real-world public traffic datasets show that our method can not only achieve state-of-the-art performance but also exhibit competitive computational efficiency. Moreover, we visualize the learned spatial-temporal attention map to make our model highly interpretable.

  • Achieves state-of-the-art results on six traffic datasets.
  • Shows competitive computational efficiency.
  • The learned spatial-temporal attention maps are visualized, which gives the model good interpretability.

Introduction

In recent years, rapid urbanization has posed great challenges to modern urban traffic management. As an indispensable part of modern smart cities, ... (application scenarios: come back to this later)

For traffic flow prediction, the fundamental challenge is to effectively capture and model the complex and dynamic spatial-temporal dependencies of traffic data.

Many attempts have been made in the literature to develop various deep learning models for this task. As early solutions, convolutional neural networks (CNNs) were applied to grid-based traffic data to capture spatial dependencies, and recurrent neural networks (RNNs) were used to learn temporal dynamics.

Graph neural networks (GNNs) were shown to be better suited to modeling the underlying graph structure of traffic data, and thus GNN-based methods have been widely explored in traffic prediction.

Despite their effectiveness, GNN-based models still have three major limitations for traffic prediction.

First, the spatial dependencies between locations in a traffic system are highly dynamic rather than static: they vary over time because they are affected by travel patterns and unexpected events. For example, as shown in Fig. 1(b), the correlation between nodes A and B becomes stronger during the morning peak and weaker during other periods. However, existing methods model spatial dependencies mainly in a static manner (either predefined or self-learned), which limits the ability to learn dynamic urban traffic patterns. Second, due to the functional division of the city, two distant locations, such as nodes A and C in Fig. 1(c), may reflect similar traffic patterns, implying that the spatial dependencies between locations can be long-range.
[Figure 1]

  • In a traffic system, spatial dependencies between locations are highly dynamic rather than static, and they change over time because they are affected by travel patterns (e.g., morning and evening peaks) and by unexpected events.
  • From Fig. 1(a): nodes A and B are close on the map, yet at certain times their correlation is not necessarily strong; perhaps one sits on a main commuting road and the other on a commercial street?
  • From Fig. 1(c): node A has a strong correlation with the distant node C, e.g., both A and C lie on commuting routes, so their patterns are similar.
  • From Fig. 1(d): D and E are two nodes on the same road, and the traffic condition at one node propagates to its neighboring nodes with a delay. For example, an accident causes congestion, and the congestion spreads along the corresponding nodes over time.

Existing methods are often designed locally and are unable to capture long-range dependencies. For example, GNN-based models suffer from over-smoothing, making it difficult to capture long-range spatial dependencies. Third, a time delay can occur in the spatial information propagation between locations in a traffic system. For example, when a traffic accident occurs at one location, it takes several minutes (a delay) to affect the traffic conditions at neighboring locations, such as nodes D and E in Fig. 1(d). However, such a feature has been ignored in the immediate message passing mechanism of typical GNN-based models.

To address the above issues, in this paper we propose a Propagation Delay-aware dynamic long-range transFormer model, namely PDFormer, for traffic flow prediction. As the core technical contribution, we design a novel spatial self-attention module to capture the dynamic spatial dependencies. This module incorporates local geographic neighborhood and global semantic neighborhood information into the self-attention interaction via different graph masking methods, which can simultaneously capture the short- and long-range spatial dependencies in traffic data.

  • To address the problems above, PDFormer is designed.
  • A spatial self-attention module is designed to capture dynamic spatial dependencies. Through different graph masking methods, this module injects local geographic neighborhood information and global semantic neighborhood information into the self-attention interaction, so that both short- and long-range spatial dependencies in traffic data can be captured at the same time (see the sketch after this list).
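As a reading aid, below is a minimal sketch of what such masked spatial self-attention could look like, assuming one attention view per mask at a single time step. The class name, dimensions, and the way the two views are concatenated are my own illustration, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F


class MaskedSpatialSelfAttention(torch.nn.Module):
    """Toy two-view spatial self-attention: one view restricted by a geographic
    mask (short-range neighbors), one by a semantic mask (long-range,
    similar-pattern nodes). Both masks are assumed to include self-loops so
    every row has at least one allowed entry."""

    def __init__(self, d_model: int, d_k: int = 32):
        super().__init__()
        self.q = torch.nn.Linear(d_model, d_k, bias=False)
        self.k = torch.nn.Linear(d_model, d_k, bias=False)
        self.v = torch.nn.Linear(d_model, d_k, bias=False)
        self.scale = d_k ** 0.5

    def forward(self, x, geo_mask, sem_mask):
        # x: (N, d_model) node features at one time step
        # geo_mask / sem_mask: (N, N) boolean, True = attention allowed
        scores = self.q(x) @ self.k(x).T / self.scale      # dynamic, input-dependent scores
        outs = []
        for mask in (geo_mask, sem_mask):                  # short- and long-range views
            s = scores.masked_fill(~mask, float("-inf"))
            outs.append(F.softmax(s, dim=-1) @ self.v(x))
        return torch.cat(outs, dim=-1)                     # (N, 2 * d_k)
```

Here `geo_mask` would presumably come from the road-network adjacency (nearby neighbors plus self-loops) and `sem_mask` from something like per-node top-K historical-flow similarity; both are boolean N x N matrices.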

Based on this module, we further design a delay-aware feature transformation module to integrate historical traffic patterns into spatial self-attention and explicitly model the time delay of spatial information propagation.

  • Building on the module above, a delay-aware feature transformation module is further designed to integrate historical traffic patterns into the spatial self-attention and to explicitly model the time delay of spatial information propagation (a hedged sketch follows).
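A hedged sketch of how historical patterns might be folded into the attention keys: the function `delay_aware_keys`, the soft pattern lookup, and the additive combination with the keys are illustrative assumptions rather than the paper's exact formulation.

```python
import torch


def delay_aware_keys(recent_flow, pattern_bank, k_proj):
    """Hedged sketch of a delay-aware feature transformation.

    recent_flow:  (N, S) tensor, the last S flow readings per node
    pattern_bank: (P, S) tensor, representative short-term traffic patterns,
                  e.g. k-means centroids of historical flow segments
    k_proj:       torch.nn.Linear(S, d_k) mapping a pattern summary to key space
    """
    # Similarity between each node's recent flow and every stored pattern.
    sim = torch.softmax(recent_flow @ pattern_bank.T, dim=-1)    # (N, P)
    # Soft lookup of the best-matching patterns; the matched patterns summarize
    # how traffic conditions historically evolved after similar situations,
    # which is the signal used to account for propagation delay.
    matched = sim @ pattern_bank                                  # (N, S)
    return k_proj(matched)                                        # (N, d_k)
```

With the attention sketch above, the keys could then be formed as `self.k(x) + delay_aware_keys(recent_flow, pattern_bank, extra_proj)`, so the attention a node receives also reflects how traffic conditions historically evolved after similar recent flows.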

Notation and Definitions

Definition 1 Road Network.

The road network is represented as a graph $G = (V, \varepsilon, A)$, where:

  • $V = \{v_1, \dots, v_N\}$ is the set of $N$ nodes.
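To make the notation concrete, here is a toy construction of such a graph, assuming an unweighted, symmetric adjacency; real datasets may use distance- or connectivity-based weights.

```python
import numpy as np

# Toy instance of Definition 1: a road network with N = 4 sensor nodes.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]   # edge set: road connections between nodes (0-indexed here)

A = np.zeros((N, N))               # A: adjacency matrix of the graph G = (V, E, A)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0        # unweighted and symmetric in this toy example

geo_mask = (A + np.eye(N)) > 0     # short-range (geographic) mask with self-loops
```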