Original [VSOD] UFO: A Unified Transformer Framework for CoS, CoSD, VSOD
Proposes a single unified framework for the three tasks (CoS, CoSD, and VSOD).
2022-08-25 15:24:26
Original [Transformer] LightViT: Towards Light-weight Convolution-free Vision Transformers
A light-weight, convolution-free vision Transformer.
2022-08-25 15:17:29
Original [Transformer] EdgeViTs: Competing Light-weight CNNs on Mobile Devices with Vision Transformers
Vision Transformers designed for edge devices.
2022-08-05 18:16:24
Original [Transformer] AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
AdaptFormer.
2022-06-24 11:59:59
Original [Transformer] TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation
TopFormer: a real-time segmentation and detection model for Arm devices that comfortably outperforms MobileNet! CVPR 2022. TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation. Paper: http://arxiv.org/pdf/2204.05525 Code: https://github.com/hustvl/TopFormer 1 Introduction: To adapt ViT to various dense prediction tasks, recent Vi...
2022-04-25 23:15:03
Original RepLKNet: Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
《Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs》 Paper: https://arxiv.org/abs/2203.06717 MegEngine code and models: https://github.com/megvii-research/RepLKNet PyTorch code and models: https://github.com/DingXiaoH/RepLKNet-pytorch CVPR2022 20...
2022-03-18 20:54:03
Original [Transformer] CMT: Convolutional Neural Networks Meet Vision Transformers
Paper: https://arxiv.org/abs/2107.06263 Code (personal re-implementation): GitHub - FlyEgle/CMT-pytorch: CMT: Convolutional Neural Networks Meet Vision Transformers; the official code has not been open-sourced. Reproduction write-up: "On the Transformer+CNN hybrid architecture: CMT, and reproducing it from scratch". 1 Overview: A CNN+Transformer hybrid model: a CNN models the local features, while a Tran...
2022-03-11 13:28:13
Original [Attention] VAN: Visual Attention Network
February 2022. Arxiv link: https://arxiv.org/abs/2202.09741 Code link: https://github.com/Visual-Attention-Network Reference blog: https://mp.weixin.qq.com/s/cTKOZ3gODdiqpuvPGoqAmQ 1 Introduction: Is it reasonable to carry the self-attention mechanism from natural language processing straight over to computer vision? Convolution's strength is that it fully exploits the 2D structure of the image itself, while the strength of self-attention...
2022-03-04 15:16:52
Original [Transformer] The PVT series: PVT & CPVT & Twins
PVT: 《Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions》 Paper: https://arxiv.org/abs/2102.12122 Code: https://github.com/whai362/PVT Brings the pyramid structure into the Transformer: multiple independent Transformer encoders are simply stacked, and in each stage Patch Embedding progressively...
2022-03-04 12:59:07
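To make the stage-wise Patch Embedding concrete, here is a minimal PyTorch sketch of pyramid-style downsampling between stages; the channel widths, strides, and class name are illustrative assumptions, not the PVT authors' code.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Downsample the token grid between stages with a strided conv."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=stride, stride=stride)
        self.norm = nn.LayerNorm(out_ch)

    def forward(self, x):                     # x: (B, C, H, W)
        x = self.proj(x)                      # (B, C', H/s, W/s)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2) # (B, H*W, C') for the encoder
        return self.norm(tokens), (H, W)

# Four stages shrink the grid by 4x, then 2x per stage, like a feature pyramid.
x = torch.randn(1, 3, 224, 224)
dims, strides, in_ch = [64, 128, 320, 512], [4, 2, 2, 2], 3
for d, s in zip(dims, strides):
    tokens, (H, W) = PatchEmbed(in_ch, d, s)(x)
    # ... Transformer encoder blocks would run on `tokens` here ...
    x = tokens.transpose(1, 2).reshape(1, d, H, W)
    in_ch = d
print(x.shape)  # torch.Size([1, 512, 7, 7])
```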
Original [Transformer] TransVOD: End-to-End Video Object Detection with Spatial-Temporal Transformers
January 2022. https://arxiv.org/abs/2201.05047v3 https://github.com/SJTU-LuHe/TransVOD DETR: 《End-to-End Object Detection with Transformers》; Deformable DETR: 《Deformable Transformers for End-to-End Object Detection》; TransVOD: 《End-to-End Video Object Detect..
2022-02-28 14:57:53
Original [Transformer] Deformable DETR: Deformable Transformers for End-to-End Object Detection
October 2020. Affiliation: SenseTime. Object detection: adds deformable attention and a multi-scale feature-fusion strategy to DETR. Paper: https://arxiv.org/abs/2010.04159 Code: https://github.com/fundamentalvision/Deformable-DETR 1 Introduction: DETR avoids hand-designing many components for object detection, but the limitations of the Transformer attention module when processing image feature maps lead to slow convergence and limited feature spatial resolution. To solve these problems, D...
2022-02-28 11:36:43
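The core idea behind the deformable attention this entry describes: each query attends to a handful of sampled points around a reference location rather than to the whole feature map. A minimal single-head, single-scale PyTorch sketch (module and parameter names are illustrative; the official implementation is multi-head and multi-scale):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttnSketch(nn.Module):
    """Each query attends to n_points sampled locations around its reference
    point instead of all H*W positions. Single head, single feature level."""
    def __init__(self, dim=256, n_points=4):
        super().__init__()
        self.offset_proj = nn.Linear(dim, 2 * n_points)  # (dx, dy) per point
        self.weight_proj = nn.Linear(dim, n_points)      # one weight per point
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        self.n_points = n_points

    def forward(self, query, ref_points, feat):
        # query: (B, Nq, C); ref_points: (B, Nq, 2) in [-1, 1]; feat: (B, C, H, W)
        B, Nq, _ = query.shape
        value = self.value_proj(feat.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        # sampling offsets are predicted from the query (normalized coordinates)
        offsets = self.offset_proj(query).view(B, Nq, self.n_points, 2)
        weights = self.weight_proj(query).softmax(dim=-1)          # (B, Nq, K)
        locs = ref_points.unsqueeze(2) + offsets                   # (B, Nq, K, 2)
        sampled = F.grid_sample(value, locs, align_corners=False)  # (B, C, Nq, K)
        out = (sampled * weights.unsqueeze(1)).sum(dim=-1)         # (B, C, Nq)
        return self.out_proj(out.transpose(1, 2))                  # (B, Nq, C)

attn = DeformableAttnSketch()
q = torch.randn(2, 100, 256)              # 100 object queries
refs = torch.rand(2, 100, 2) * 2 - 1      # reference points in [-1, 1]
out = attn(q, refs, torch.randn(2, 256, 32, 32))
```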
Original [Transformer] DETR: End-to-End Object Detection with Transformers
ECCV 2020. Facebook AI. The first network to do object detection with a Transformer. Paper: https://arxiv.org/abs/2005.12872 Code: https://github.com/facebookresearch/detr Setup reference: 【CODE】a hands-on bilibili walkthrough of Facebook's DETR (Transformer-based) detection algorithm. 1 Introduction: By combining a common CNN with the Transformer architecture, DETR not only achieves parallelization but also removes the need for..
2022-02-25 15:01:02
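What makes DETR's set prediction work is one-to-one bipartite matching between predictions and ground truth, which is what lets it drop hand-designed pieces such as NMS. A minimal sketch of that matching step, assuming a class-probability plus L1 box cost; the actual matcher in the repo also adds a generalized-IoU term, and the weights here are illustrative:

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, gt_labels, gt_boxes, l1_weight=5.0):
    """One-to-one matching between N predictions and M ground-truth boxes.

    pred_logits: (N, num_classes); pred_boxes: (N, 4) and gt_boxes: (M, 4)
    in normalized cxcywh format; gt_labels: (M,) class indices."""
    prob = pred_logits.softmax(dim=-1)
    cost_cls = -prob[:, gt_labels]                     # (N, M): -p(gt class)
    cost_box = torch.cdist(pred_boxes, gt_boxes, p=1)  # (N, M): L1 distance
    cost = (cost_cls + l1_weight * cost_box).cpu().numpy()
    pred_idx, gt_idx = linear_sum_assignment(cost)     # optimal assignment
    return pred_idx, gt_idx

# Toy usage: 5 predictions matched one-to-one to 2 ground-truth objects.
pi, gi = hungarian_match(torch.randn(5, 4), torch.rand(5, 4),
                         torch.tensor([1, 3]), torch.rand(2, 4))
```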
Original [VSP] Spatiotemporal module for video saliency prediction based on self-attention
Spatiotemporal module for video saliency prediction based on self-attention (sciencedirectassets.com) 1 Introduction: Video saliency prediction is the task of estimating human gaze points in dynamic scenes, with applications in many areas: video compression, video object segmentation, and video captioning. How to effectively learn contextual information and model the visual attention the human eye pays to moving objects becomes the problem to solve. Many state-of-the-art networks are built on CNN-RNN structures, but...
2022-01-21 16:56:10
Original [Transformer] DeiT: Training data-efficient image transformers & distillation through attention
DeiT: Training data-efficient image transformers & distillation through attention. https://arxiv.org/pdf/2012.12877.pdf GitHub - facebookresearch/deit: Official DeiT repository. December 2020. The existing Transformer-based classification model ViT must be pre-trained on massive data (JFT-300M, 300 million images) and then fine-tuned on ImageN..
2022-01-18 20:20:56
Original [Video Transformer] Video Swin Transformer
Code: GitHub - SwinTransformer/Video-Swin-Transformer: This is an official implementation for "Video Swin Transformers". Paper: https://arxiv.org/pdf/2106.13230.pdf Code walkthrough: "Swin-Transformer code explained - Video Swin-Transformer", ly59782's CSDN blog. Swin Transformer ht...
2022-01-18 20:13:23
Original [Transformer] Conformer: Local Features Coupling Global Representations for Visual Recognition
https://arxiv.org/abs/2105.03889 https://github.com/pengzhiliang/Conformer ICCV 2021. Conformer's components: a stem module, dual branches, FCUs, and two classifiers. Stem module: 7×7 conv with stride 2, then 3×3 max pooling with stride 2. Dual branches: a CNN branch (ResNet, extracting local infor...
2022-01-18 20:07:42
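The stem specification above is concrete enough to sketch. A minimal PyTorch version, where the channel width (64) and the BatchNorm/ReLU placement are assumptions in the usual ResNet style:

```python
import torch
import torch.nn as nn

# Stem as described above: 7x7 conv (stride 2) then 3x3 max-pool (stride 2),
# giving a 4x-downsampled feature map shared by both branches.
stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

x = torch.randn(1, 3, 224, 224)
print(stem(x).shape)  # torch.Size([1, 64, 56, 56])
```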
Original [Video Transformer] VTN: Video Transformer Network
https://arxiv.org/abs/2102.00719 SlowFast/README.md at master · bomri/SlowFast · GitHub ICCV 2021. Video action recognition. In short: it amounts to replacing the LSTM in a CNN+LSTM structure with a Transformer. Well suited to long videos; at inference the entire video can be fed in at once. The framework is modular: the 2D backbone can be swapped for different networks, and the attention module can be set to different transformer models...
2022-01-18 20:05:49
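A minimal sketch of that modular recipe: run a 2D backbone per frame, then model time with a Transformer encoder in place of the LSTM. The backbone choice, width, and pooling head are illustrative assumptions (the paper's version uses a Longformer-style temporal attention module):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FramesPlusTemporalTransformer(nn.Module):
    """Per-frame 2D CNN features, then attention over the time axis."""
    def __init__(self, num_classes=10, dim=512, depth=2):
        super().__init__()
        cnn = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-1])  # drop fc
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video):                        # video: (B, T, 3, H, W)
        B, T = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1))   # (B*T, 512, 1, 1)
        feats = feats.flatten(1).view(B, T, -1)      # (B, T, 512)
        feats = self.temporal(feats)                 # attention over time
        return self.head(feats.mean(dim=1))          # average-pool over T

logits = FramesPlusTemporalTransformer()(torch.randn(2, 8, 3, 224, 224))
```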
Original [Video Transformer] TimeSformer: Is Space-Time Attention All You Need for Video Understanding?
Paper: https://arxiv.org/pdf/2102.05095.pdf Code: https://github.com/lucidrains/TimeSformer-pytorch Reference blog: https://mp.weixin.qq.com/s/E43AaQEcr2_Nm4FqcXXM7g Accepted: ICML 2021. Author: Facebook AI. Input clips: H*W*3*F, i.e. F RGB frames of size H*W sampled from the original video. Decomposi...
2022-01-18 18:25:05
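The truncated "Decomposi..." refers to decomposing attention over space and time. A shape-only PyTorch sketch of the divided space-time attention pattern, with illustrative sizes and plain MultiheadAttention modules standing in for the real blocks:

```python
import torch
import torch.nn as nn

# Divided space-time attention: attend over time and over space in two
# separate steps rather than over all T*N tokens jointly.
B, T, N, C = 2, 8, 196, 192           # batch, frames, patches/frame, width
tokens = torch.randn(B, T, N, C)

time_attn = nn.MultiheadAttention(C, num_heads=4, batch_first=True)
space_attn = nn.MultiheadAttention(C, num_heads=4, batch_first=True)

# 1) temporal attention: each spatial location attends across the T frames
x = tokens.permute(0, 2, 1, 3).reshape(B * N, T, C)
x, _ = time_attn(x, x, x)
x = x.reshape(B, N, T, C).permute(0, 2, 1, 3)

# 2) spatial attention: each frame's N patches attend to one another
y = x.reshape(B * T, N, C)
y, _ = space_attn(y, y, y)
out = y.reshape(B, T, N, C)

# Joint attention scales with (T*N)^2; divided attention scales with
# T^2 per location plus N^2 per frame.
```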
Original [Transformer] Swin Transformer V2: Scaling Up Capacity and Resolution
Paper: Code: 1 Introduction: On top of Swin Transformer, the following improvements are proposed: 1) a post-normalization technique and scaled cosine attention to improve the stability of large vision models; 2) a log-spaced continuous position bias for transferring models pre-trained at low resolution to higher resolutions; 3) implementation details that sharply cut GPU memory usage, making it feasible to train large vision models. ...
2022-01-18 16:07:02
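Item 1) is easy to make concrete: replace dot-product attention logits with cosine similarity divided by a learnable temperature, so logit magnitudes stay bounded in large models. A minimal sketch (relative-position bias omitted; shapes and the tau value are illustrative):

```python
import torch
import torch.nn.functional as F

def scaled_cosine_attention(q, k, v, tau):
    """Attention logits as cosine similarity divided by a learnable
    temperature tau instead of scaled dot products. Cosine similarity
    is bounded in [-1, 1], so the logits stay controlled.
    q, k, v: (B, heads, N, head_dim); tau: scalar or (heads, 1, 1)."""
    sim = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
    attn = (sim / tau).softmax(dim=-1)    # (B, heads, N, N)
    return attn @ v

q = k = v = torch.randn(2, 4, 49, 32)     # e.g. the tokens of one 7x7 window
out = scaled_cosine_attention(q, k, v, tau=torch.full((4, 1, 1), 0.1))
```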
Original [Transformer] PyramidTNT
TNT: Transformer in Transformer. Paper: https://arxiv.org/pdf/2103.00112.pdf Code: https://github.com/huawei-noah/noah-research/tree/master/TNT PyramidTNT: Improved Transformer-in-Transformer Baselines with Pyramid Architecture. Paper: https://arxiv.org/abs/22...
2022-01-18 15:21:15
Original [ConvNeXt] A ConvNet for the 2020s
《A ConvNet for the 2020s》 Facebook AI Research (FAIR). Paper link: https://arxiv.org/pdf/2201.03545.pdf Code link: GitHub - facebookresearch/ConvNeXt: Code release for ConvNeXt model. 1 Introduction: ViT rapidly became the mainstream research direction; however, when applied to general CV tasks such as object detection and semantic segmentation, the plain ViT faces serious challenges. Hence, hierarchical...
2022-01-17 20:14:55
Original [Video Transformer] ViViT: A Video Vision Transformer
ICCV 2021. Paper: https://arxiv.org/abs/2103.15691 Code: scenic/scenic/projects/vivit at main · google-research/scenic · GitHub Reference blog: "A first look at Video Transformers (part 2): Google open-sources ViViT, a more complete and efficient convolution-free video classification model". 1 Introduction: ViViT uses a pure Transformer structure for video classification; it is an application of ViT to video input..
2022-01-17 14:46:56
Original [Transformer] MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Affiliation: Apple. Paper: https://arxiv.org/abs/2110.02178 Code: GitHub - apple/ml-cvnets: CVNets: A library for training computer vision networks. 1 Introduction: Traditional CNNs are easy to optimize and can integrate different networks for a given task, whereas ViT needs large-scale data and is harder to optimize, with a heavy learning and compute burden, because ViT lacks the inductive biases inherent in images. Combining the strengths of CNNs and ViT to build for mobile vision tasks..
2022-01-17 14:15:35
Original [Video Transformer] UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning
UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning. Paper: https://openreview.net/pdf?id=nBU_u6DLvoK ICLR 2022. 1 Introduction: Learning multi-scale spatio-temporal semantics from high-dimensional videos is difficult, because the global dependencies between video frames are complex. In Figure 1, TimeSformer does learn video information in its shallow layers, but both its spatial and temporal attention are highly redundant there. The spatial attention...
2022-01-14 19:03:38
Original [Video Transformer] X-ViT: Space-time Mixing Attention for Video Transformer
Paper: https://arxiv.org/abs/2106.05968 Code: Home | Adrian Bulat; GitHub - 1adrianb/video-transformers. Blog: "X-ViT: a video Transformer based on space-time mixing attention that sharply reduces computational complexity" (Zhihu). Samsung. Video recognition. 1 Introduction: When using a transformer for video recognition, the extra modeling of the temporal dimension drives up the computational cost significantly. The complexity of the model proposed here, relative to the video..
2022-01-14 15:13:44
Original [Transformer] DAT: Vision Transformer with Deformable Attention
Paper: https://arxiv.org/abs/2201.00520 Code: https://github.com/LeapLabTHU/DAT January 2022. 1 Introduction: Compared with CNN models, Transformer-based models have larger receptive fields and excel at modeling long-range dependencies, achieving excellent performance given large amounts of training data and model parameters. However, the computational cost is high, convergence is slower, and the risk of overfitting grows. To reduce computational complexity, Swin Transformer adopts window-based local attention that restricts attention to each local window...
2022-01-14 14:44:07
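A minimal sketch of the window partitioning behind that local attention (a generic Swin-style helper with illustrative shapes, not DAT's code):

```python
import torch

def window_partition(x, ws):
    """Split a feature map into non-overlapping ws x ws windows so that
    attention runs inside each window only (the Swin-style trick the
    entry above credits with reducing complexity)."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

x = torch.randn(2, 56, 56, 96)        # (B, H, W, C) feature map
windows = window_partition(x, ws=7)   # (2*8*8 windows, 49 tokens, 96 dims)
print(windows.shape)                  # torch.Size([128, 49, 96])
# Attention inside 49-token windows scales with H*W*ws^2 rather than
# (H*W)^2 for global attention over all 3136 tokens.
```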