Miniconda3 and ITK: A Powerful Combination for Python Image Processing

This article shows how to combine Miniconda3 and the ITK library for image processing in Python. Miniconda3 provides the Python environment, and ITK is a powerful open-source image processing library. A worked example covers installing both tools and reading, converting, and saving an image, and highlights their potential for more complex tasks such as image registration and segmentation.

The Python ecosystem offers many excellent tools and libraries, and Miniconda3 and ITK (the Insight Segmentation and Registration Toolkit) are two of the most widely recommended. Miniconda3 is a lightweight distribution of Anaconda: it ships the Python interpreter and the conda package manager, through which the many packages needed for scientific computing and data analysis can be installed. ITK is a powerful open-source image processing library that provides a rich set of segmentation and registration algorithms and is widely used in medical imaging, computer vision, and machine learning.

By combining Miniconda3 and ITK, we can carry out a wide range of complex image processing tasks in Python. Below is a simple example showing how to use Miniconda3 and ITK to read, process, and save an image.

First, we need to install Miniconda3 and ITK. Visit https://docs.conda.io/en/latest/miniconda.html to download and install the latest version of Miniconda3. Once it is installed, open a terminal or command prompt window and create a new virtual environment with the following command:

conda create -n itk_env python=3.8

Then activate the environment:

conda activate itk_env

Next, install the ITK library:

conda install -c conda-forge itk
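
After the install finishes, a quick sanity check is to print ITK's version from Python (the exact version number will depend on what conda-forge currently ships):

python -c "import itk; print(itk.Version.GetITKVersion())"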

With the installation complete, we can start writing code. Suppose we have an image file named "image.jpg" that we want to convert to grayscale and save.
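
Below is a minimal sketch of that workflow. It assumes "image.jpg" is an RGB image in the current working directory, that the output name "image_gray.png" is acceptable (both file names are illustrative), and that your ITK build exposes the snake_case functional wrapper for RGBToLuminanceImageFilter:

import itk

# Read the input image (assumed to be an RGB JPEG in the working directory)
image = itk.imread("image.jpg")

# Convert the RGB image to a single-channel grayscale (luminance) image
gray = itk.rgb_to_luminance_image_filter(image)

# Write the result; the output format is inferred from the file extension
itk.imwrite(gray, "image_gray.png")

The same read/filter/write pattern extends directly to ITK's more advanced pipelines: segmentation and registration filters are invoked in the same way, which is what makes this environment a convenient starting point for the more complex tasks mentioned above.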
