Reading Notes: Dual-Branch Cross-Attention Network for Micro-Expression Recognition with Transformer Variants

A Transformer-based micro-expression recognition paper; recording some notes after reading it.
Abstract:
However, the main drawback of the current methods is their inability to fully extract holistic contextual information from ME images.
That is, the biggest weakness of traditional methods is that they cannot fully extract the holistic contextual information from the images.
This paper uses Transformer variants as the main backbone and a dual-branch architecture as the main framework to extract meaningful multi-modal contextual features for ME recognition (MER).
In other words, the paper takes Transformer variants as the backbone network and a dual-branch structure as the framework, extracting multi-modal contextual features for micro-expression recognition.
The first branch leverages an optical flow operator to facilitate motion information extraction between ME sequences, and the corresponding optical flow maps are fed into the Swin Transformer to acquire a motion-spatial representation.
The first branch uses optical flow to obtain motion information, then feeds the optical flow maps into a Swin Transformer to acquire the motion-spatial representation.
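To make the optical-flow step concrete, here is a minimal sketch of dense flow estimation between an onset frame and an apex frame. This is my own simplified Lucas-Kanade implementation in NumPy for illustration only; the paper's actual optical flow operator is not specified in these notes, and the function name is mine.

```python
import numpy as np

def lucas_kanade_flow(onset, apex, win=5):
    """Naive dense Lucas-Kanade flow between two grayscale frames.

    onset, apex: float arrays of identical shape (H, W).
    Returns a flow map of shape (H, W, 2) holding (u, v) per pixel,
    i.e. the kind of map that would be fed to the first branch.
    """
    Iy, Ix = np.gradient(onset)   # spatial gradients (axis 0 = y, axis 1 = x)
    It = apex - onset             # temporal gradient between the two frames
    H, W = onset.shape
    flow = np.zeros((H, W, 2))
    r = win // 2
    for y in range(r, H - r):
        for x in range(r, W - r):
            ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([ix, iy], axis=1)          # (win*win, 2)
            ATA = A.T @ A
            if np.linalg.det(ATA) > 1e-6:           # skip ill-conditioned windows
                flow[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return flow
```

In the paper's pipeline the resulting (H, W, 2) map would then be treated as an image-like input for the Swin Transformer branch.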
The second branch directly sends the apex frame of one ME clip to Mobile ViT (Mobile Vision Transformer), which can capture the local-global features of MEs.
The second branch feeds the apex frame directly into Mobile ViT to obtain the local and global features of the micro-expression.
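The "global" half of Mobile ViT's local-global modeling comes from unfolding the feature map into patch tokens and letting every patch attend to every other patch. The toy NumPy sketch below shows only that unfold-then-attend idea on a raw apex frame; it is not the actual MobileViT implementation (which interleaves convolutions and folds the tokens back), and all names and the random projections are mine.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_patch_attention(apex, patch=8):
    """Toy version of the global step: split the apex frame into patches,
    treat each patch as a token, and run single-head self-attention so
    every patch can exchange information with every other patch."""
    H, W = apex.shape
    # unfold: (num_patches, patch*patch) token matrix
    tokens = (apex.reshape(H // patch, patch, W // patch, patch)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, patch * patch))
    d = tokens.shape[1]
    rng = np.random.default_rng(0)                 # stand-in learned weights
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))           # (N, N) patch-to-patch weights
    return attn @ V                                # globally mixed patch features
```

The convolutional stem that MobileViT runs before this step is what supplies the complementary local features.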
To achieve the optimal feature stream fusion, a CAB (cross-attention block) is designed to let the features extracted by each branch interact for adaptive learning fusion.
That is, to fuse the two feature streams optimally, a CAB (cross-attention block) is designed that lets the features extracted by the two branches interact and fuse adaptively.
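A minimal sketch of what such a cross-attention exchange could look like, assuming both branches emit token matrices of the same shape: queries come from one stream while keys and values come from the other, in both directions, before a simple residual-plus-concatenation fusion. This is my own NumPy simplification; the paper's actual CAB design may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Single-head cross-attention: tokens in q_feats attend to kv_feats.
    Both inputs are (N, d); projections are omitted for brevity."""
    d = q_feats.shape[1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # (Nq, Nkv) similarities
    return softmax(scores) @ kv_feats            # kv features pooled per query

def cab_fuse(flow_feats, apex_feats):
    """Bidirectional exchange, then fusion: each stream is enriched with
    information attended from the other, and the two are concatenated."""
    flow2apex = cross_attention(flow_feats, apex_feats)  # flow queries apex
    apex2flow = cross_attention(apex_feats, flow_feats)  # apex queries flow
    return np.concatenate([flow_feats + flow2apex,
                           apex_feats + apex2flow], axis=1)
```

The point of crossing the streams rather than self-attending within each is that the motion branch can borrow appearance cues from the apex branch and vice versa, which is what "adaptive learning fusion" refers to in the abstract.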
1. Introduction
These studies empirically designed several manual descriptors, such as Local Binary Pattern from Three Orthogonal Planes (LBP-TOP).