Title: Diffusion-based 3D Human Mesh Recovery from Multi-view Images with Segmentation Masks
Abstract:
This paper addresses the challenging problem of 3D human mesh recovery from multi-view images under occlusion. We propose a diffusion-based framework that exploits multi-view geometry and segmentation masks for robust and accurate mesh reconstruction. Our approach uses a pre-trained diffusion model as the feature-extraction backbone and couples it with a multi-view fusion module that combines information across viewpoints. In addition, we introduce a segmentation-guided attention mechanism that focuses on person regions of the input images, improving robustness to occlusions. Extensive experiments on benchmark datasets show that our method outperforms state-of-the-art approaches in both accuracy and efficiency.
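To make the described components more concrete, the following is a minimal PyTorch sketch of how a segmentation mask could gate cross-attention over per-view image features before a simple multi-view fusion step. All class names, tensor shapes, and the mean-pooling fusion are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class SegmentationGuidedAttention(nn.Module):
    """Cross-attention in which a person segmentation mask suppresses
    background/occluded feature cells in the attention keys."""

    def __init__(self, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, query: torch.Tensor, feats: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
        # query: (B, Q, C) mesh/pose query tokens
        # feats: (B, HW, C) per-view image features flattened over the grid
        # mask:  (B, HW)   binary person mask aligned with the feature grid
        # key_padding_mask=True means "ignore", so invert the person mask.
        out, _ = self.attn(query, feats, feats, key_padding_mask=~mask.bool())
        return out


class MultiViewFusion(nn.Module):
    """Baseline fusion: average per-view tokens, then project."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, per_view_tokens: torch.Tensor) -> torch.Tensor:
        # per_view_tokens: (B, V, Q, C) -> fused tokens (B, Q, C)
        return self.proj(per_view_tokens.mean(dim=1))


if __name__ == "__main__":
    B, V, Q, HW, C = 2, 4, 24, 256, 128  # batch, views, queries, grid cells, channels
    queries = torch.randn(B, Q, C)
    feats = torch.randn(B, V, HW, C)
    masks = (torch.rand(B, V, HW) > 0.3).float()

    attn = SegmentationGuidedAttention(C)
    fusion = MultiViewFusion(C)

    per_view = torch.stack(
        [attn(queries, feats[:, v], masks[:, v]) for v in range(V)], dim=1
    )
    fused = fusion(per_view)  # (B, Q, C), would feed the diffusion-based mesh head
    print(fused.shape)
```

In a full system of the kind the abstract describes, the fused tokens would condition the diffusion-based mesh reconstruction stage; here the mean over views is only a placeholder for a learned fusion module.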
Keywords:
- 3D Human Mesh Recovery
- Multi-view Images
- Segmentation Masks
- Diffusion Models
- Occlusion Handling
TOC:
1. Introduction
- 1.1 Problem Statement and Motivation
- 1.1.1 Challenges in Occluded Human Mesh Recovery
- 1.1.2 Advantages of Multi-view Images and Segmentation Masks
- 1.1.3 Potential of Diffusion Models for 3D Vision
- 1.2 Related Work
- 1.2.1 Monocular Human Mesh Recovery
- 1.2.1.1 Optimization-based Methods
- 1.2.1.2 Learning-based Methods
- 1.2.2 Multi-view Human Mesh Recovery
- 1.2.2.1 Volumetric Reconstruction
- 1.2.2.2 Multi-view Fusion
- 1.2.3 Diffusion Models for 3D Vision
- 1.2.3.1 Point Cloud Generation
- 1.2.3.2 Shape and Pose Estimation
- 1.3 Contributions
- 1.3.1 Novel Diffusion-based Framework
- 1.3.2 Multi-view Fusion Module
- 1.3.3 Segmentation-guided Attention Mechanism
- 1.4 Paper Organization
2. Proposed Method
- 2.1 Overview of the Framework
- 2.1.1 Input and Output
- 2.1.2 Pipeline Description
- 2.2 Multi-view Feature Extraction
- 2.2.1 Camera Calibration and Pose Estimation
- 2.2.2 Feature Extraction from Each View
- 2.2.3 Multi-view Feature Fusion
- 2.3 Segmentation-guided Attention Module
- 2.3.1 Segmentation Mask Processing
- 2.3.2 Attention Mechanism Design
- 2.3.3 Integration with Diffusion Model
- 2.4 Diffusion-based Mesh Reconstruction
- 2.4.1 Pre-trained Diffusion Model
- 2.4.2 Mesh Parameter Regression
- 2.4.3 Loss Function
- 2.5 Training Strategy
- 2.5.1 Data Augmentation
- 2.5.2 Optimization Algorithm
- 2.5.3 Hyperparameter Tuning
3. Experiments
- 3.1 Datasets and Evaluation Metrics
- 3.1.1 Human3.6M Dataset
- 3.1.2 CMU Panoptic Dataset
- 3.1.3 Evaluation Metrics (CD, P2S, M2M)
- 3.2 Implementation Details
- 3.2.1 Hardware and Software
- 3.2.2 Training Settings
- 3.3 Quantitative Results
- 3.3.1 Comparison with State-of-the-art Methods
- 3.3.1.1 Monocular Methods
- 3.3.1.2 Multi-view Methods
- 3.3.2 Ablation Studies
- 3.3.2.1 Number of Views
- 3.3.2.2 Segmentation Mask Quality
- 3.3.2.3 Diffusion Model Architecture
- 3.4 Qualitative Results
- 3.4.1 Visual Comparison
- 3.4.2 Failure Cases
4. Discussion
- 4.1 Analysis of Results
- 4.1.1 Impact of Multi-view Fusion
- 4.1.2 Effectiveness of Segmentation-guided Attention
- 4.1.3 Generalization Ability
- 4.2 Limitations and Future Work
- 4.2.1 Handling Extreme Occlusions
- 4.2.2 Real-time Performance
- 4.2.3 Extension to Other 3D Tasks
5. Conclusion
- 5.1 Summary of Findings
- 5.2 Contributions and Impact
- 5.3 Future Directions
References