
Deep Learning
Average article quality score: 61
海绵baby强无敌
CV rookie, deep-learning practitioner
Computing model parameter count / FLOPs / throughput
Parameter count: def computation_paras(kernel_size, in_channels, out_channels, out_size, groups=1): return kernel_size ** 2 * in_channels * out_channels / (groups * 2 ** 20)  # in units of M. FLOPs: def computation_flops(kernel_size, in_channels, out_chan… Original · 2022-03-14 12:07:41 · 2572 views · 0 comments
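The excerpt's snippet can be rounded out into a self-contained sketch. The computation_paras body comes from the excerpt itself; the computation_flops body is cut off mid-signature, so the version below is my reconstruction using the standard k² · C_in · C_out · H_out · W_out multiply-accumulate count, not necessarily what the post contains.

```python
def computation_paras(kernel_size, in_channels, out_channels, out_size, groups=1):
    # conv-layer parameter count (bias ignored), in units of M (2 ** 20);
    # out_size is unused here but kept to mirror the excerpt's signature
    return kernel_size ** 2 * in_channels * out_channels / (groups * 2 ** 20)


def computation_flops(kernel_size, in_channels, out_channels, out_size, groups=1):
    # multiply-accumulate count over an out_size x out_size output map,
    # in units of G (2 ** 30); body reconstructed, not from the original post
    return (kernel_size ** 2 * in_channels * out_channels * out_size ** 2
            / (groups * 2 ** 30))
```

For example, a 3x3 conv mapping 64 to 128 channels has computation_paras(3, 64, 128, 56) ≈ 0.07 M parameters.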
Deep Learning Based Registration paper reading (16): 《NccFlow: Unsupervised Learning of Optical Flow With Non-occlusion from Geometry》
This paper went up on arXiv in July 2021; I do not yet know where it was published, though it reads like a submission to an IEEE Transactions journal. Its contribution is to introduce geometric constraints on non-occluded regions, improving the accuracy of optical flow estimation. Earlier unsupervised optical-flow work introduced a brightness-constancy loss, but brightness… Original · 2022-01-15 16:06:51 · 2237 views · 0 comments
Implementing Local Context Normalization in TensorFlow
Reference code: PyTorch implementation for Local Context Normalization: Revisiting Local Normalization. Reference paper: 《Local Context Normalization: Revisiting Local Normalization》. The reference provides PyTorch code for LCN on 2D images; I rewrote it as TensorFlow 1.4 code for 3D… Original · 2021-12-15 18:48:32 · 1801 views · 1 comment
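As a rough illustration of what LCN computes (not the post's TF 1.4 code), here is a NumPy sketch: statistics are taken per channel group over a local spatial window, unlike Group Norm's whole-map statistics. The function name, window size, and group count are my own choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_context_norm(x, window=16, groups=2, eps=1e-5):
    """Local Context Normalization sketch for a (C, H, W) feature map.

    Each position is standardized using the mean/variance of its channel
    group within a local window x window spatial neighborhood.
    """
    c, h, w = x.shape
    gs = c // groups
    out = np.empty((c, h, w), dtype=np.float64)
    for g in range(groups):
        xg = x[g * gs:(g + 1) * gs].astype(np.float64)
        # local first and second moments shared by all channels in the group:
        # average over the group's channels, then box-filter spatially
        mean = uniform_filter(xg.mean(axis=0), size=window, mode='reflect')
        sq = uniform_filter((xg ** 2).mean(axis=0), size=window, mode='reflect')
        var = np.maximum(sq - mean ** 2, 0.0)
        out[g * gs:(g + 1) * gs] = (xg - mean) / np.sqrt(var + eps)
    return out
```

A learned per-channel scale and shift would follow in a full layer, as in the other normalization variants.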
Deep Learning Based Registration paper reading (14): 《Deformable MR-CT Image Registration Using an Unsupervised, Dual-Channel Network for Neurosurgical Guidance》
This paper, from MIA 2021, again tackles the multi-modal problem with a GAN: a CycleGAN network converts the multi-modal registration problem into a mono-modal one. The overall framework remains unsupervised, with NCC as the similarity loss once the modalities are unified. Personally… Original · 2021-11-17 14:52:00 · 950 views · 0 comments
《Machine Learning Yearning》
An overview of 《Machine Learning Yearning》. Original · 2021-11-14 16:07:46 · 1276 views · 0 comments
R-CNN Family
References: From R-CNN to Mask R-CNN; Object Detection for Dummies Part 3: R-CNN Family. Original · 2021-07-31 16:25:46 · 116 views · 0 comments
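One small, well-defined building block shared across the R-CNN family is greedy non-maximum suppression. A NumPy sketch (my own illustration, not code from either reference):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # greedy non-maximum suppression: repeatedly keep the highest-scoring
    # box and drop remaining boxes that overlap it above iou_thresh
    # boxes: (N, 4) as (x1, y1, x2, y2); returns indices of kept boxes
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the current top box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep
```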
Deep Learning Based Registration paper reading (13): 《UPFlow: Upsampling Pyramid for Unsupervised Optical Flow Learning》
This is a CVPR 2021 paper from Megvii on unsupervised optical flow, with Jian Sun as corresponding author. Motivation: the current unsupervised optical-flow SOTA is UFlow, a framework that integrates the modules proposed so far, including the pyramid structure. That pyramid structure has two problems, and this paper proposes a method for each, reaching a new unsupervised SOTA. First, the pyramid structure involves an upsampling operation, but the current… Original · 2021-07-04 18:05:20 · 1041 views · 0 comments
Deep Learning Based Registration paper reading (12): 《AutoFlow: Learning a Better Training Set for Optical Flow》
This is a CVPR 2021 paper on optical flow, 《AutoFlow: Learning a Better Training Set for Optical Flow》. Optical-flow learning splits into supervised and unsupervised approaches; supervised learning normally needs flow-field ground truth, which is hard to obtain in the real world, so a common practice is to train on synthetic data. The most widely used synthetic datasets are Flying Chairs and Flyi… Original · 2021-07-04 16:46:38 · 698 views · 0 comments
Backbone
LeNet-5: 《Gradient-based learning applied to document recognition》
AlexNet: 《ImageNet Classification with Deep Convolutional Neural Networks》, NIPS 2012
VGG: 《VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION》, ICLR 2015
InceptionNet: …
Original · 2021-06-30 16:55:04 · 86 views · 0 comments
A survey of learning-rate tuning
Learning-rate adjustment is generally tied closely to batch size. After reading the resources below, I put together a short summary of learning-rate tuning. References: 如何理解深度学习分布式训练中的large batch size与learning rate的关系? (How to understand the relationship between large batch size and learning rate in distributed deep-learning training?); for the method of finding the optimal initial learning rate mentioned in that post, see 如何找到最优初始学习率? (How to find the optimal initial learning rate?); Priya Goyal et al., 《Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour》; Smith… Original · 2021-06-21 14:40:29 · 134 views · 0 comments
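The linear scaling rule from Goyal et al. (scale the learning rate proportionally with batch size, combined with a gradual warmup) can be sketched as follows; the base values in the usage note are illustrative defaults, not prescriptions from the post:

```python
def linear_scaled_lr(base_lr, base_batch, batch_size):
    # linear scaling rule: if the batch grows by a factor k, multiply the lr by k
    return base_lr * batch_size / base_batch


def warmup_lr(step, warmup_steps, target_lr):
    # gradual warmup: ramp the lr linearly from ~0 up to target_lr,
    # then hold it there once warmup_steps have elapsed
    return target_lr * min(1.0, (step + 1) / warmup_steps)
```

With a base lr of 0.1 at batch size 256, batch size 1024 gives lr 0.4; Goyal et al. pair this with a few epochs of warmup to keep early training stable.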
Dropout
References: Understanding Dropout with the Simplified Math behind it; A Gentle Introduction to Dropout for Regularizing Deep Neural Networks. Original · 2021-06-15 11:17:08 · 124 views · 1 comment
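For concreteness, a minimal inverted-dropout sketch (my own illustration, not taken from the linked articles): units are zeroed with probability p at training time and the survivors rescaled by 1/(1-p), so inference needs no rescaling.

```python
import numpy as np

def inverted_dropout(x, p=0.5, train=True, rng=None):
    # inverted dropout: zero each unit with probability p during training
    # and scale survivors by 1 / (1 - p), so inference is a plain identity
    if not train or p == 0.0:
        return x
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

Because of the rescaling, the expected activation is unchanged between training and inference.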
Batch Normalization
Normalizations commonly used in deep learning include Batch Normalization, Layer Normalization, and Instance Normalization. This post works through Batch Normalization following the paper 《Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift》. Background: the paper points out that because every… Original · 2021-06-08 18:58:09 · 120 views · 0 comments
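A minimal sketch of the training-time BN transform described in the paper, y = γ · (x - μ_B) / √(σ²_B + ε) + β over the batch axis (a simplified illustration that omits the running statistics used at inference):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # training-time BN for a (batch, features) array: standardize each
    # feature over the mini-batch, then apply the learned scale and shift
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Layer Norm and Instance Norm differ only in which axes the mean and variance are taken over.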
Notes on initialization in deep learning
I recently ran into a registration task in which initialization drastically affects the training result. This post surveys what blogs, papers, and books say about initialization and records it; please point out any omissions in the comments. Reference: Initializing neural networks… Original · 2021-05-30 17:44:51 · 297 views · 0 comments
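Two standard schemes such a survey typically covers, sketched in NumPy (the variance targets are the published ones; the function names and fan-shape convention are my own):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng):
    # Glorot/Xavier uniform: Var(W) = 2 / (fan_in + fan_out),
    # which keeps activation variance roughly stable for tanh-like units
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))


def he_normal(fan_in, fan_out, rng):
    # He/Kaiming normal, suited to ReLU: Var(W) = 2 / fan_in
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))
```

The choice between them is usually driven by the activation function; either can dominate a bad default, which matches the sensitivity observed in the registration task above.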