nnU-Net v2 Official Tutorial (Key Excerpts)

  • SimpleITKIO: .nii.gz, .nrrd, .mha
  • Tiff3DIO: .tif, .tiff. 3D TIFF images! Since TIFF has no standardized way of storing spacing information,
    nnU-Net expects each TIFF file to be accompanied by an identically named .json file that contains three numbers
    (no units, no commas, just separated by whitespace), one for each dimension.
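Following the description above, such a sidecar file can be produced with a few lines of Python (a sketch; the helper name, filename, and spacing values are ours, only the file layout comes from the text above):

```python
from pathlib import Path

def write_spacing_sidecar(tif_path: str, spacing) -> Path:
    # Write the whitespace-separated spacing file described above,
    # named identically to the TIFF it accompanies
    # (case_0000.tif -> case_0000.json).
    sidecar = Path(tif_path).with_suffix(".json")
    sidecar.write_text(" ".join(str(s) for s in spacing))
    return sidecar

write_spacing_sidecar("train_0000.tif", (2.5, 0.5, 0.5))  # file contains "2.5 0.5 0.5"
```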

The file extension lists are not exhaustive and depend on what the backend supports. For example, nibabel and SimpleITK
support more than the three given here. The file endings given here are just the ones we tested!

IMPORTANT: nnU-Net can only be used with file formats that use lossless (or no) compression! Because the file
format is defined for an entire dataset (and not separately for images and segmentations; this could be a TODO for
the future), we must ensure that there are no compression artifacts that destroy the segmentation maps. So no .jpg and
the like!

Dataset folder structure

Datasets must be located in the nnUNet_raw folder (which you either define when installing nnU-Net or export/set every
time you intend to run nnU-Net commands!).
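Setting the folder for the current process can be sketched as follows (a minimal sketch; the paths are placeholders, and nnUNet_preprocessed/nnUNet_results are the companion variables nnU-Net v2 uses alongside nnUNet_raw):

```python
import os

# Point nnU-Net at its working folders for the current process only
# (adapt the placeholder paths to your setup).
os.environ["nnUNet_raw"] = "/data/nnUNet_raw"
os.environ["nnUNet_preprocessed"] = "/data/nnUNet_preprocessed"
os.environ["nnUNet_results"] = "/data/nnUNet_results"

print(os.environ["nnUNet_raw"])  # -> /data/nnUNet_raw
```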
Each segmentation dataset is stored as a separate ‘Dataset’. Datasets are associated with a dataset ID (a three-digit
integer) and a dataset name, which you can choose freely: for example, Dataset005_Prostate has ‘Prostate’ as its
dataset name and 5 as its dataset ID. Datasets are stored in the nnUNet_raw folder like this:

nnUNet_raw/
├── Dataset001_BrainTumour
├── Dataset002_Heart
├── Dataset003_Liver
├── Dataset004_Hippocampus
├── Dataset005_Prostate
├── ...

Within each dataset folder, the following structure is expected:

Dataset001_BrainTumour/
├── dataset.json
├── imagesTr
├── imagesTs  # optional
└── labelsTr

When adding your custom dataset, take a look at the dataset_conversion folder and
pick an id that is not already taken. IDs 001-010 are for the Medical Segmentation Decathlon.

  • imagesTr contains the images belonging to the training cases. nnU-Net will perform pipeline configuration, training with
    cross-validation, as well as finding postprocessing and the best ensemble using this data.
  • imagesTs (optional) contains the images that belong to the test cases. nnU-Net does not use them! This folder is
    just a convenient place to store these images; it is a remnant of the Medical Segmentation Decathlon folder structure.
  • labelsTr contains the images with the ground truth segmentation maps for the training cases.
  • dataset.json contains metadata of the dataset.

The scheme introduced above results in the following folder structure. Given
is an example for the first dataset of the MSD: BrainTumour. This dataset has four input channels: FLAIR (0000),
T1w (0001), T1gd (0002) and T2w (0003). Note that the imagesTs folder is optional and does not have to be present.

nnUNet_raw/Dataset001_BrainTumour/
├── dataset.json
├── imagesTr
│   ├── BRATS_001_0000.nii.gz
│   ├── BRATS_001_0001.nii.gz
│   ├── BRATS_001_0002.nii.gz
│   ├── BRATS_001_0003.nii.gz
│   ├── BRATS_002_0000.nii.gz
│   ├── BRATS_002_0001.nii.gz
│   ├── BRATS_002_0002.nii.gz
│   ├── BRATS_002_0003.nii.gz
│   ├── ...
├── imagesTs
│   ├── BRATS_485_0000.nii.gz
│   ├── BRATS_485_0001.nii.gz
│   ├── BRATS_485_0002.nii.gz
│   ├── BRATS_485_0003.nii.gz
│   ├── BRATS_486_0000.nii.gz
│   ├── BRATS_486_0001.nii.gz
│   ├── BRATS_486_0002.nii.gz
│   ├── BRATS_486_0003.nii.gz
│   ├── ...
└── labelsTr
    ├── BRATS_001.nii.gz
    ├── BRATS_002.nii.gz
    ├── ...
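The naming scheme visible in the tree above (case identifier, underscore, four-digit channel index, file ending) can be sketched as a small helper (the helper name is ours; the pattern is the one shown in the trees):

```python
def training_image_name(case_id: str, channel: int, file_ending: str = ".nii.gz") -> str:
    # imagesTr/imagesTs files are named CASEID_XXXX<ending>, where XXXX is the
    # zero-padded channel index; labelsTr files are just CASEID<ending>.
    return f"{case_id}_{channel:04d}{file_ending}"

print(training_image_name("BRATS_001", 2))  # -> BRATS_001_0002.nii.gz
```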

Here is another example of the second dataset of the MSD, which has only one input channel:

nnUNet_raw/Dataset002_Heart/
├── dataset.json
├── imagesTr
│   ├── la_003_0000.nii.gz
│   ├── la_004_0000.nii.gz
│   ├── ...
├── imagesTs
│   ├── la_001_0000.nii.gz
│   ├── la_002_0000.nii.gz
│   ├── ...
└── labelsTr
    ├── la_003.nii.gz
    ├── la_004.nii.gz
    ├── ...

Remember: For each training case, all images must have the same geometry to ensure that their pixel arrays are aligned. Also
make sure that all your data is co-registered!
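A simple consistency check along these lines can be sketched as follows (the geometry fields are passed in as plain dicts here, e.g. as obtained from SimpleITK's GetSize/GetSpacing/GetOrigin/GetDirection; the function name and tolerance are ours):

```python
def same_geometry(a: dict, b: dict, tol: float = 1e-5) -> bool:
    # All images of one training case must agree in size, spacing,
    # origin and direction for their pixel arrays to be aligned.
    if a["size"] != b["size"]:
        return False
    for key in ("spacing", "origin", "direction"):
        if any(abs(x - y) > tol for x, y in zip(a[key], b[key])):
            return False
    return True
```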

See also dataset format inference!

dataset.json

The dataset.json contains metadata that nnU-Net needs for training. We have greatly reduced the number of required
fields since version 1!

Here is what the dataset.json should look like at the example of the Dataset005_Prostate from the MSD:

{
  "channel_names": {  # formerly modalities
    "0": "T2",
    "1": "ADC"
  },
  "labels": {  # THIS IS DIFFERENT NOW!
    "background": 0,
    "PZ": 1,
    "TZ": 2
  },
  "numTraining": 32,
  "file_ending": ".nii.gz",
  "overwrite_image_reader_writer": "SimpleITKIO"  # optional! If not provided nnU-Net will automatically determine the ReaderWriter
}

The channel_names determine the normalization used by nnU-Net. If a channel is marked as ‘CT’, a global
normalization based on the intensities of the foreground pixels will be used. For anything else, per-channel
z-scoring is used. Refer to the methods section of the nnU-Net paper for more details.
nnU-Net v2 network architecture (overview)

nnU-Net v2 keeps a modular design and covers most segmentation problems with three base configurations:

  • 2D U-Net: processes axial slices one at a time
  • 3D U-Net: processes full 3D patches of the volume
  • 3D cascade: a low-resolution 3D stage followed by a full-resolution 3D refinement stage

Network structure: the network definitions live in the dynamic-network-architectures package; to modify the
architecture, clone that repository and edit the definitions there. A typical plan uses several downsampling stages
with two 3×3 convolutions per stage (3×3×3 for the 3D networks), and skip connections that concatenate encoder
features into the decoder. The basic building block looks like this:

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv -> InstanceNorm -> LeakyReLU, the basic nnU-Net building block."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_channels),
            nn.LeakyReLU(0.01),
        )

    def forward(self, x):
        return self.conv(x)
```

Training: nnU-Net picks the largest feasible patch size for the available GPU memory, trains each configuration
with 5-fold cross-validation, and then empirically selects the best single configuration or ensemble of
configurations based on cross-validation performance. Postprocessing (for example, suppressing all but the largest
connected components) is applied only if it improves the cross-validation results.

Data pipeline: images are resampled to a common target spacing (for example 1.5×0.8×0.8 mm³), normalized per
channel with z-scoring (I_norm = (I − μ) / σ), and augmented on the fly with random rotations (e.g. in
[−15°, 15°]) and scalings (e.g. factors in [0.7, 1.4]).