How _init_paths.py silently changes the module search path

import sys
import os
from os import path as osp

def add_path(path):
    if path not in sys.path:
        sys.path.insert(0, path)
        try:
            os.environ["PYTHONPATH"] = path + ":" + os.environ["PYTHONPATH"]
        except KeyError:
            os.environ["PYTHONPATH"] = path

this_dir = osp.dirname(__file__)  # directory containing this file

# Add lib to PYTHONPATH
lib_path = osp.join(this_dir, '..', 'lib')  # i.e. <this_dir>/../lib
add_path(lib_path)

Importing this module on the first line of the main script prepends <the main script's directory>/../lib to the module search path:

import _init_paths
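
For example, a main script living next to _init_paths.py might look like the following minimal sketch (the script name is hypothetical; the dataset module follows the lib/dataset/imbalance_cifar.py layout discussed below):

# main/train.py -- hypothetical main script
import _init_paths                                      # runs the code above, prepending ../lib to sys.path

from dataset.imbalance_cifar import IMBALANCEDCIFAR10   # resolvable because lib/ is now on sys.path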

How sys.path and the eval function work

What is sys.path

sys.path is a list of the directory paths Python searches when looking for modules. When you import a module, Python scans the directories in sys.path in order until it finds the module.

You can inspect this list by importing sys and printing sys.path:

import sys
print(sys.path)

Typically, sys.path contains the following kinds of directories (see the sketch after this list):

  1. The directory containing the script being run.
  2. The directories listed in the PYTHONPATH environment variable.
  3. The installation directories of the Python standard library and of third-party packages.
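
A quick way to see where each kind of entry comes from (a minimal sketch; the exact output depends on your installation):

import os
import sys

print(sys.path[0])                                           # 1. the directory of the running script
print(os.environ.get("PYTHONPATH", "").split(os.pathsep))    # 2. entries injected via PYTHONPATH
print([p for p in sys.path if "site-packages" in p])         # 3. third-party package directories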

The eval function

eval is a built-in function that evaluates a string as a Python expression and returns the result. In this particular case, eval is used to resolve a class name dynamically and then call the class.
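
A minimal illustration of both uses of eval:

# evaluating an ordinary expression
print(eval("1 + 2"))        # -> 3

# resolving a class from its name
class Foo:
    pass

cls = eval("Foo")           # looks the name "Foo" up in the current namespace
obj = cls()                 # the returned class can then be instantiated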

Instantiating a class dynamically

When the line train_set = eval(cfg.DATASET.DATASET)("train", cfg) executes, the following happens (a condensed sketch follows the list):

  1. Read the configuration item: DATASET.DATASET is read from the cfg object, yielding the string "IMBALANCEDCIFAR10".

  2. Apply eval: eval(cfg.DATASET.DATASET) is therefore equivalent to eval("IMBALANCEDCIFAR10"), which resolves the string "IMBALANCEDCIFAR10" to a class object.

  3. Look up the class: Python looks for the name IMBALANCEDCIFAR10 in the current namespace. Because main/_init_paths.py has already added the lib directory to sys.path, the module that defines the class can be imported from lib.
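
Putting the three steps together, the line is equivalent to the following sketch (assuming cfg.DATASET.DATASET holds "IMBALANCEDCIFAR10" and the class has already been imported into the current namespace):

dataset_cls = eval(cfg.DATASET.DATASET)   # -> the class object IMBALANCEDCIFAR10
train_set = dataset_cls("train", cfg)     # same as IMBALANCEDCIFAR10("train", cfg)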

_init_paths.py

import sys
import os
from os import path as osp

def add_path(path):
    if path not in sys.path:
        sys.path.insert(0, path)
        try:
            os.environ["PYTHONPATH"] = path + ":" + os.environ["PYTHONPATH"]
        except KeyError:
            os.environ["PYTHONPATH"] = path

this_dir = osp.dirname(__file__)

# Add lib to PYTHONPATH
lib_path = osp.join(this_dir, '..', 'lib')
add_path(lib_path)

This code adds the lib folder to sys.path. As a result, when eval("IMBALANCEDCIFAR10") executes, Python can find the IMBALANCEDCIFAR10 class defined in the lib.dataset.imbalance_cifar module. (Strictly speaking, eval only looks the name up in the current namespace, so the main script still has to import the class; adding lib to sys.path is what makes that import succeed.)

The process in detail

  1. Add the path: _init_paths.py adds the lib folder to sys.path.
  2. Resolve the name: when train_set = eval(cfg.DATASET.DATASET)("train", cfg) executes, eval resolves "IMBALANCEDCIFAR10" to a class name.
  3. Find the class: Python searches the directories on sys.path for the module defining IMBALANCEDCIFAR10; because lib is already on sys.path, it finds lib/dataset/imbalance_cifar.py and imports the IMBALANCEDCIFAR10 class from it.
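
For reference, the same lookup can also be written without eval by importing the module explicitly and using getattr; this is not what the code above does, just a common alternative that relies on the same sys.path setup:

import importlib

module = importlib.import_module("dataset.imbalance_cifar")  # found via the lib/ entry on sys.path
dataset_cls = getattr(module, "IMBALANCEDCIFAR10")            # the same class eval() would return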