The ScanNet dataset:
It contains 1513 scanned scenes in total (each scene has a different number of points in its point cloud, so if you want to train end to end you will probably need to sample every scene down to the same number of points), covering 21 object categories. 1201 scenes are used for training and 312 for testing. There are four benchmark tasks: 3D semantic segmentation, 3D instance segmentation, 2D semantic segmentation and 2D instance segmentation.
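If you do need a fixed number of points per scene, a common approach is simple random sampling. Below is a minimal sketch assuming NumPy; the function name and the target size of 4096 are just illustrative assumptions, not part of the official toolkit.

import numpy as np

def sample_to_fixed_size(points, num_points=4096):
    # points: (N, 3) array of xyz coordinates for one scene
    # If the scene has fewer points than num_points, sample with replacement.
    n = points.shape[0]
    idx = np.random.choice(n, num_points, replace=(n < num_points))
    return points[idx]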
A netdisk link is shared at the end of this post.
If you download from the official site, you have to fill in a TOS agreement and email it over; in return you receive a Python download script.
It looks roughly like the one below (I also keep the script on GitHub).
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Downloads ScanNet public data release
# Run with ./download-scannet.py (or python download-scannet.py on Windows)
import argparse
import os
import urllib.request  # Python 3; on Python 2 use `import urllib` instead
import tempfile
BASE_URL = 'http://kaldir.vc.in.tum.de/scannet/'
TOS_URL = BASE_URL + 'ScanNet_TOS.pdf'
FILETYPES = ['.sens', '.txt',
'_vh_clean.ply', '_vh_clean_2.ply',
'_vh_clean.segs.json', '_vh_clean_2.0.010000.segs.json',
'.aggregation.json', '_vh_clean.aggregation.json',
'_vh_clean_2.labels.ply',
'_2d-instance.zip', '_2d-instance-filt.zip',
'_2d-label.zip', '_2d-label-filt.zip']
FILETYPES_TEST = ['.sens', '.txt', '_vh_clean.ply', '_vh_clean_2.ply']
PREPROCESSED_FRAMES_FILE = ['scannet_frames_25k.zip', '5.6GB']
TEST_FRAMES_FILE = ['scannet_frames_test.zip', '610MB']
LABEL_MAP_FILES = ['scannetv2-labels.combined.tsv', 'scannet-labels.combined.tsv']
RELEASES = ['v2/scans', 'v1/scans']
RELEASES_TASKS = ['v2/tasks', 'v1/tasks']
RELEASES_NAMES = ['v2', 'v1']
RELEASE = RELEASES[0]
RELEASE_TASKS = RELEASES_TASKS[0]
RELEASE_NAME = RELEASES_NAMES[0]
LABEL_MAP_FILE = LABEL_MAP_FILES[0]
RELEASE_SIZE = '1.2TB'
V1_IDX = 1
### The rest of the script is omitted here, you get the idea.
The whole dataset is about 1.2 TB, which is far too large, so only download the parts you need (a small Python wrapper for the same commands follows the list below):
python3 download-scannetv2.py -o scannet/ --type _vh_clean_2.ply
python3 download-scannetv2.py -o scannet/ --type _vh_clean_2.labels.ply
python3 download-scannetv2.py -o scannet/ --type _vh_clean_2.0.010000.segs.json
python3 download-scannetv2.py -o scannet/ --type .aggregation.json
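If you want all four file types in one go, you can also loop over them from Python. The sketch below simply wraps the same script invocation shown above; the script name and flags come from those commands, everything else is an assumption.

import subprocess

FILE_TYPES = ['_vh_clean_2.ply', '_vh_clean_2.labels.ply',
              '_vh_clean_2.0.010000.segs.json', '.aggregation.json']

for file_type in FILE_TYPES:
    # equivalent to: python3 download-scannetv2.py -o scannet/ --type <file_type>
    subprocess.run(['python3', 'download-scannetv2.py',
                    '-o', 'scannet/', '--type', file_type], check=True)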
After the download finishes you will have files like these; the netdisk archive contains the same data.
Netdisk link: [address]
Extraction code: roq0

Load and display one of the scenes:
>>> import open3d as o3d
>>> pcd = o3d.io.read_point_cloud('scene0000_00_vh_clean_2.ply')
>>> o3d.visualization.draw_geometries([pcd])
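To actually process the scene rather than just view it, the coordinates and colors can be pulled out as NumPy arrays (open3d exposes them via np.asarray):

import numpy as np
points = np.asarray(pcd.points)   # (N, 3) xyz coordinates
colors = np.asarray(pcd.colors)   # (N, 3) RGB values in [0, 1]
print(points.shape, colors.shape)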

Now take a look at its semantic labels:
>>> pcd = o3d.io.read_point_cloud('scene0000_00_vh_clean_2.labels.ply')
>>> o3d.visualization.draw_geometries([pcd])
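Note that read_point_cloud only keeps coordinates and colors, so the per-vertex label ids stored in the labels PLY are dropped. To read them you can parse the PLY directly; the sketch below uses the plyfile package and assumes the vertex property is named 'label' (which is what ScanNet uses, but verify on your own files):

from plyfile import PlyData
import numpy as np

ply = PlyData.read('scene0000_00_vh_clean_2.labels.ply')
labels = np.asarray(ply['vertex']['label'])  # one label id per vertex
print(labels.shape, np.unique(labels))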

The ScanNet v2 dataset contains 1513 3D scenes covering 21 categories and supports training and evaluation on four tasks: 3D semantic segmentation, 3D instance segmentation, 2D semantic segmentation and 2D instance segmentation. The training set has 1201 scenes and the test set 312. The full dataset is about 1.2 TB; you can apply for it via the official TOS agreement and then use the Python script (also kept on GitHub) to fetch only the parts you need. The netdisk link and extraction code are given above.
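As a final pointer, for the 3D instance segmentation task the *_vh_clean_2.0.010000.segs.json (per-vertex segment indices) and *.aggregation.json (which segments form which object) files are typically combined into per-point instance ids. A minimal sketch, assuming the usual ScanNet JSON fields 'segIndices', 'segGroups', 'segments' and 'objectId':

import json
import numpy as np

def load_instance_ids(segs_path, agg_path):
    with open(segs_path) as f:
        seg_indices = np.array(json.load(f)['segIndices'])  # segment id per vertex
    instance_ids = np.zeros(len(seg_indices), dtype=np.int32)  # 0 = unannotated
    with open(agg_path) as f:
        for group in json.load(f)['segGroups']:
            # each segGroup is one object instance built from several segments
            mask = np.isin(seg_indices, group['segments'])
            instance_ids[mask] = group['objectId'] + 1
    return instance_ids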