[Deep-Learning MVS Paper Series] MVSNet: Depth Inference for Unstructured Multi-view Stereo

MVSNet is a depth-inference network for unstructured multi-view stereo. It bridges 2D feature extraction and 3D cost-volume regularization, recovering 3D structure from multiple views. The method builds a 3D cost volume via differentiable homography warping and regularizes it with a 3D convolutional network to obtain an initial depth map.

MVSNet: Depth Inference for Unstructured Multi-view Stereo

ECCV 2018

Core Idea

  1. extract deep visual image features
  2. build the 3D cost volume upon the reference camera frustum via differentiable homography warping
  3. apply 3D convolutions to regularize the volume and regress the initial depth map
  4. refine the depth map with the reference image

input: one reference image + several source images

output: depth for the reference image

key insight: the differentiable homography warping operation, which encodes camera geometry into the network to build 3D cost volumes from 2D image features and enables end-to-end training
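The warping in step 2 maps each source feature map onto fronto-parallel planes of the reference frustum at sampled depths d using a plane-induced homography. A minimal numpy sketch of that homography is below; the function name `homography` and the world-to-camera pose convention are my assumptions for illustration, not code from the paper:

```python
import numpy as np

def homography(K_ref, R_ref, t_ref, K_src, R_src, t_src, depth):
    """Plane-induced homography mapping reference-view pixels to the
    source view for the fronto-parallel plane z = depth in the reference
    camera frame (pose convention: x_cam = R @ x_world + t)."""
    n = np.array([0.0, 0.0, 1.0])     # normal of the sweeping plane (ref frame)
    R_rel = R_src @ R_ref.T           # relative rotation, ref -> src
    t_rel = t_src - R_rel @ t_ref     # relative translation, ref -> src
    # H = K_src (R_rel + t_rel n^T / d) K_ref^{-1}
    return K_src @ (R_rel + np.outer(t_rel, n) / depth) @ np.linalg.inv(K_ref)
```

Because every term is a differentiable matrix product in the depth hypothesis d and the features, gradients flow from the cost volume back into the 2D feature network, which is what makes end-to-end training possible.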

contribution:

  • encode the camera parameters as a differentiable homography to build the 3D cost volume upon the camera frustum
  • bridge the 2D feature extraction and 3D cost regularization networks
  • decouple MVS reconstruction into the smaller per-view problem of depth map estimation
  • a variance-based metric that maps an arbitrary number of feature volumes into one cost volume, so any number of views can be used
  • the 3D cost volume is built upon the camera frustum instead of regular Euclidean space
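The variance-based metric fuses the N warped feature volumes elementwise as C = (1/N) Σᵢ (Vᵢ − V̄)², which is symmetric in the views and therefore view-count agnostic. A small numpy sketch (the name `variance_cost` and the [D, H, W, F] layout are my assumptions):

```python
import numpy as np

def variance_cost(volumes):
    """Fuse N warped feature volumes (each shaped [D, H, W, F]) into one
    cost volume of the same shape via the elementwise population variance
    C = mean_i (V_i - mean(V))^2. Accepts any number of views N >= 2."""
    V = np.stack(volumes, axis=0)              # [N, D, H, W, F]
    mean = V.mean(axis=0, keepdims=True)       # per-voxel mean feature
    return ((V - mean) ** 2).mean(axis=0)      # [D, H, W, F]
```

Perfectly consistent features across views give zero cost, so low variance at a depth hypothesis signals photo-consistency there.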

Related Work

  • point cloud reconstruction: propagation strategy, gradually densify the reconstruction

    • hard to parallelize, time-consuming
  • volumetric reconstruction: divide 3D space into a regular grid and decide whether each voxel lies on the surface

    • discretization error, high memory consumption
  • depth map reconstruction

Traditional methods:
