Blog (21 posts)

Original: C++-CnC

CnC (Concurrent Collections) is a programming model designed for building parallel applications by expressing dependencies rather than parallelism explicitly. In CnC, the programmer only declares the dependencies between specified computation steps, without giving any indication of how those dependencies should be satisfied; CnC is dedicated to coordinating potentially parallel computation steps and the data they share. In essence, the programmer declares certain dependencies between two or more computation steps. Specifying dependencies is simpler than expressing parallelism, because it only makes the application's semantics explicit and does not tie the program to a platform or to any particular parallelization technique. Moreover, CnC exposes more potential parallelism, since it does not bind one specific form of parallelism to the algorithm/program.

2024-02-29 12:16:00 466

Original: Knapsack-style dynamic programming

2023-02-21 13:31:41 161

Original: Breadth-first search and depth-first search

2022-10-15 21:22:20 206

Original: Algorithms with time complexity less than O(n)

2022-10-07 16:33:11 591

Original: [2205] [CVPR 2022 Workshop NTIRE] Residual Local Feature Network for Efficient Super-Resolution

2022-07-03 13:52:43 1056

Original: [2206] An Improved One millisecond Mobile Backbone

2022-06-19 18:22:03 464

Original: [2108] [ICCV 2021] SwinIR: Image Restoration Using Swin Transformer

2022-04-13 16:25:52 2730

Original: [2203] SepViT: Separable Vision Transformer

2022-04-06 21:10:27 1411

Original: [2103] [ICCV 2021] Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

2022-04-04 18:31:32 1126

Original: [2202] Visual Attention Network

2022-03-29 19:39:31 3547

Original: [2111] Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers

2022-03-24 17:07:52 2979

Original: [2111] [CVPR 2022] Restormer: Efficient Transformer for High-Resolution Image Restoration

2022-03-18 18:12:58 1988

Original: [2201] VRT: A Video Restoration Transformer

2022-03-12 22:55:02 3083

Original: [2106] Video Super-Resolution Transformer

2022-03-10 14:57:40 2146

Original: [2107] [NIPS 2021] Focal Self-attention for Local-Global Interactions in Vision Transformers

2022-03-02 15:59:45 2049

Original: [2107] [CVPR 2022] CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows

2022-03-02 15:59:07 1170

Original: [2106] [NIPS 2021] Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer

2022-03-02 15:58:20 958

Original: [2104] [NIPS 2021] Twins: Revisiting the Design of Spatial Attention in Vision Transformers

2022-03-02 15:57:33 1033

Original: [2101] [ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

2022-02-21 16:56:21 1178

Original: [2111] [CVPR 2022] MetaFormer is Actually What You Need for Vision

2022-01-19 10:11:08 738

Original: [2112] On Efficient Transformer and Image Pre-training for Low-level Vision

2022-01-19 09:34:26 1753
