Micro-expression recognition (MER) with Transformers; notes taken after reading the paper.

Abstract:
However, the main drawback of current methods is their inability to fully extract holistic contextual information from ME images.
This paper uses Transformer variants as the main backbone and a dual-branch architecture as the main framework to extract meaningful multi-modal contextual features for ME recognition (MER).
The first branch leverages an optical flow operator to extract motion information between ME sequences, and the corresponding optical flow maps are fed into a Swin Transformer to acquire a motion-spatial representation.
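To make the first branch concrete, here is a minimal sketch assuming OpenCV's Farneback optical flow and timm's Swin-Tiny. The paper does not pin down the exact flow operator, the Swin variant, or how the 2-channel flow map is adapted to a 3-channel backbone input, so those choices are assumptions.

```python
import cv2
import numpy as np
import timm
import torch

def flow_map(onset_gray, apex_gray):
    # Dense optical flow between the onset and apex frames -> (H, W, 2).
    flow = cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Append the flow magnitude as a third channel so the map fits a
    # 3-channel backbone (an assumption, not the paper's recipe).
    mag = np.linalg.norm(flow, axis=2, keepdims=True)
    return np.concatenate([flow, mag], axis=2).astype(np.float32)

# Swin Transformer backbone used as a motion-spatial feature extractor.
swin = timm.create_model('swin_tiny_patch4_window7_224', pretrained=True,
                         num_classes=0)  # num_classes=0 -> pooled features

# Stand-in grayscale frames; in practice these come from an ME clip.
onset = np.random.randint(0, 255, (224, 224), dtype=np.uint8)
apex = np.random.randint(0, 255, (224, 224), dtype=np.uint8)
x = torch.from_numpy(flow_map(onset, apex)).permute(2, 0, 1).unsqueeze(0)
motion_feat = swin(x)  # shape: (1, 768) for swin_tiny
```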
The second branch directly sends the apex frame of an ME clip to Mobile ViT (Vision Transformer), which captures the local-global features of MEs.
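The second branch can be sketched the same way. The notes only say the apex frame goes into Mobile ViT, so the specific variant (timm's 'mobilevit_s') and the 256x256 input size are assumptions.

```python
import timm
import torch

# MobileViT backbone used as a local-global feature extractor.
mobilevit = timm.create_model('mobilevit_s', pretrained=True, num_classes=0)

apex_frame = torch.rand(1, 3, 256, 256)   # apex frame as an RGB tensor
local_global_feat = mobilevit(apex_frame)  # shape: (1, 640) for mobilevit_s
```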
To achieve optimal feature-stream fusion, a CAB (cross-attention block) is designed so that the features extracted by each branch interact for adaptive, learned fusion.
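The notes do not detail the CAB internals, so the following is one plausible reconstruction of a cross-attention fusion block, not the authors' exact module: each stream is projected to a shared dimension, attends to the other stream, and the two attended features are concatenated for classification. The feature dimensions (768 for Swin-Tiny, 640 for MobileViT-S) and the 3-class head are assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    def __init__(self, dim_a, dim_b, dim=256, heads=4, num_classes=3):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim)  # motion-spatial stream
        self.proj_b = nn.Linear(dim_b, dim)  # local-global stream
        # Each stream queries the other, so the fusion weights are learned.
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, feat_a, feat_b):
        a = self.proj_a(feat_a).unsqueeze(1)  # (B, 1, dim)
        b = self.proj_b(feat_b).unsqueeze(1)
        a2, _ = self.attn_ab(a, b, b)  # branch A attends to branch B
        b2, _ = self.attn_ba(b, a, a)  # branch B attends to branch A
        fused = torch.cat([a2.squeeze(1), b2.squeeze(1)], dim=-1)
        return self.head(fused)

# Usage with the two branch features sketched above.
cab = CrossAttentionBlock(dim_a=768, dim_b=640)   # Swin-T / MobileViT-S dims
logits = cab(torch.rand(2, 768), torch.rand(2, 640))  # (2, num_classes)
```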
1. Introduction
These studies empirically designed several handcrafted descriptors, such as Local Binary Pattern from Three Orthogonal Planes (LBP-TOP).