Original · C++-CnC
CnC is a programming model designed for building parallel applications without expressing parallelism explicitly. In CnC, the programmer only declares the dependences between specified computation steps, without prescribing how those dependences are to be satisfied; CnC is dedicated to coordinating the potentially parallel computation steps and their data. In essence, the programmer declares certain dependences between two or more computation steps. Specifying dependences is simpler than expressing parallelism, because it only makes the application semantics explicit and does not tie the program to a platform or to any particular parallelization technique. Moreover, CnC preserves more potential parallelism, because it does not bind a specific form of parallelism to the algorithm/program.
2024-02-29 12:16:00 · 466 views
Original · [2205] [CVPR 2022 Workshop NTIRE] Residual Local Feature Network for Efficient Super-Resolution
2022-07-03 13:52:43 · 1056 views
Original · [2206] An Improved One millisecond Mobile Backbone
2022-06-19 18:22:03 · 464 views
Original · [2108] [ICCV 2021] SwinIR: Image Restoration Using Swin Transformer
2022-04-13 16:25:52 · 2730 views
Original · [2203] SepViT: Separable Vision Transformer
2022-04-06 21:10:27 · 1411 views
Original · [2103] [ICCV 2021] Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
2022-04-04 18:31:32 · 1126 views
Original · [2111] Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers
2022-03-24 17:07:52 · 2979 views
Original · [2111] [CVPR 2022] Restormer: Efficient Transformer for High-Resolution Image Restoration
2022-03-18 18:12:58 · 1988 views
Original · [2201] VRT: A Video Restoration Transformer
2022-03-12 22:55:02 · 3083 views
Original · [2106] Video Super-Resolution Transformer
2022-03-10 14:57:40 · 2146 views
Original · [2107] [NIPS 2021] Focal Self-attention for Local-Global Interactions in Vision Transformers
2022-03-02 15:59:45 · 2049 views
Original · [2107] [CVPR 2022] CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows
2022-03-02 15:59:07 · 1170 views
Original · [2106] [NIPS 2021] Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer
2022-03-02 15:58:20 · 958 views
Original · [2104] [NIPS 2021] Twins: Revisiting the Design of Spatial Attention in Vision Transformers
2022-03-02 15:57:33 · 1033 views
Original · [2101] [ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
2022-02-21 16:56:21 · 1178 views
Original · [2111] [CVPR 2022] MetaFormer is Actually What You Need for Vision
2022-01-19 10:11:08 · 738 views
Original · [2112] On Efficient Transformer and Image Pre-training for Low-level Vision
2022-01-19 09:34:26 · 1753 views