The meaning of end-to-end vs. hand-crafted (deep learning)

In deep learning, end-to-end methods come up constantly, and the term often appears in paper titles. Its meaning is defined in contrast to hand-crafted.

End-to-end: a method that exposes only an input end and an output end. Nothing in between needs to be designed by hand as an explicit algorithm; the whole pipeline is a single neural network. Raw data goes in, and the result comes out.
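As a concrete illustration, here is a minimal sketch of an end-to-end model, assuming PyTorch is available; the layer sizes and the name `end_to_end_model` are arbitrary choices for this example, not a reference design.

```python
# A minimal end-to-end sketch: raw pixels in, class scores out.
# Every intermediate representation is learned, not designed by hand.
import torch
import torch.nn as nn

end_to_end_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learned filters, not hand-designed ones
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # learned features pooled to one vector
    nn.Flatten(),
    nn.Linear(16, 10),                           # maps directly to output scores
)

x = torch.randn(1, 3, 32, 32)   # raw image data (batch, channels, height, width)
logits = end_to_end_model(x)    # the result, with no hand-crafted steps in between
print(logits.shape)             # torch.Size([1, 10])
```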

Hand-crafted: the counterpart of end-to-end. A hand-crafted method is designed by a person, step by step, and each step has a rationale that can be stated explicitly.
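For contrast, here is a sketch of a tiny hand-crafted pipeline, assuming only NumPy; the Sobel kernels, the `is_textured` decision rule, and the threshold of 50 are all illustrative choices, not a reference implementation.

```python
# A hand-crafted sketch: each step is designed by a person
# and has an explicit, stateable justification.
import numpy as np

def sobel_edges(gray):
    # Step 1: Sobel filters, chosen by hand because image gradients
    # are known to respond strongly at object boundaries.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    # Step 2: gradient magnitude, because edge strength is the quantity of interest.
    return np.hypot(gx, gy)

def is_textured(gray, threshold=50.0):
    # Step 3: a hand-picked threshold on mean edge strength
    # (the value 50 is an assumption for illustration).
    return sobel_edges(gray).mean() > threshold

img = np.random.rand(32, 32) * 255   # stand-in for a grayscale image
print(is_textured(img))
```

Every intermediate quantity here (gradients, magnitudes, the threshold) was chosen by a person for a stated reason; in the end-to-end sketch above, the corresponding intermediates are convolution weights learned from data.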

In fact, end-to-end can be regarded as an inherent property of neural networks, little more than a way of describing them, since neural network methods are by nature end-to-end. Both terms appear together in practice; the repository README below, for example, contrasts conventional deconvolution methods built on hand-crafted image priors with deep-learning methods trained end-to-end.

# Blind Image Deconvolution Using Variational Deep Image Prior

Official implementation of [Blind Image Deconvolution Using Variational Deep Image Prior](https://arxiv.org/abs/2202.00179)

Dong Huo, Abbas Masoumzadeh, Rafsanjany Kushol, and Yee-Hong Yang

## Overview

Conventional deconvolution methods utilize hand-crafted image priors to constrain the optimization. While deep-learning-based methods have simplified the optimization by end-to-end training, they fail to generalize well to blurs unseen in the training dataset. Thus, training image-specific models is important for higher generalization. Deep image prior (DIP) provides an approach to optimize the weights of a randomly initialized network with a single degraded image by maximum a posteriori (MAP), which shows that the architecture of a network can serve as the hand-crafted image prior. However, unlike conventional hand-crafted image priors, which are obtained statistically, a proper network architecture is hard to find, because the relationship between images and their corresponding network architectures is unclear. As a result, the network architecture cannot provide enough constraint for the latent sharp image. This paper proposes a new variational deep image prior (VDIP) for blind image deconvolution, which exploits additive hand-crafted image priors on latent sharp images and approximates a distribution for each pixel to avoid suboptimal solutions. Our mathematical analysis shows that the proposed method can better constrain the optimization. The experimental results further demonstrate that the generated images have better quality than those of the original DIP on benchmark datasets.

## Prerequisites

- Python 3.8
- PyTorch 1.9.0
- Requirements: opencv-python
- Platforms: Ubuntu 20.04, RTX A6000, cuda-11.1

## Datasets

VDIP is evaluated on the synthetic and real blurred datasets of [Lai et al.](http://vllab.ucmerced.edu/wlai24/cvpr16_deblur_study/).

## Citation

If you use this code for your research, please cite our paper.

```
@article{huo2023blind,
  title={Blind Image Deconvolution Using Variational Deep Image Prior},
  author={Huo, Dong and Masoumzadeh, Abbas and Kushol, Rafsanjany and Yang, Yee-Hong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}
```
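To make the DIP idea in the overview above concrete, here is a heavily simplified sketch, assuming PyTorch; it is not the VDIP implementation from this repository, and it omits the blur kernel and the variational machinery, showing only the core move of fitting a randomly initialized network to a single degraded image.

```python
# Conceptual deep-image-prior sketch: the randomly initialized
# architecture itself acts as the image prior; we fit it to ONE
# degraded image, with no training dataset involved.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

z = torch.randn(1, 32, 64, 64)        # fixed random input code
degraded = torch.rand(1, 3, 64, 64)   # stand-in for the single degraded image
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):               # early stopping keeps DIP from fitting the noise
    optimizer.zero_grad()
    restored = net(z)
    # MAP-style data term only; blind deconvolution would convolve
    # `restored` with an estimated kernel, and VDIP adds priors on top.
    loss = ((restored - degraded) ** 2).mean()
    loss.backward()
    optimizer.step()

print(loss.item())
```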