2020-06-24 Study Log

Today's study: short-answer questions

 

11. What is the difference between a property and a public field?

        A public field is raw storage: callers read and write the variable directly. A property is a member backed by get/set accessor methods, so it can validate input, compute its value on demand, change its implementation without breaking callers, and be declared in interfaces.
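
        A minimal sketch of the difference, using a hypothetical Person class: the Age property runs validation code on every assignment, while the Name field cannot intercept anything.

```csharp
using System;

public class Person
{
    public string Name;            // public field: direct storage, nothing runs on access

    private int _age;
    public int Age                 // property: get/set accessors run on every access
    {
        get { return _age; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value));
            _age = value;
        }
    }
}
```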

 

12. Describe the differences between a property and an indexer.

        1. A property's name can be chosen freely; an indexer must be declared with the `this` keyword.

        2. A property can be an instance member or a static member; an indexer must be an instance member.

        3. An indexer has a parameter list of index arguments, while a property takes no parameters (see the sketch below).
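
        A minimal sketch of these points, using a hypothetical WeekDays wrapper: the indexer is declared as `this` with an int parameter and belongs to an instance, while Count is a freely named static property.

```csharp
using System;

public class WeekDays
{
    private readonly string[] _days =
        { "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" };

    // Indexer: declared with `this`, takes an index parameter, instance-only.
    public string this[int i]
    {
        get { return _days[i]; }
        set { _days[i] = value; }
    }

    // Property: freely named, and allowed to be static.
    public static int Count { get { return 7; } }
}

public static class IndexerDemo
{
    public static void Main()
    {
        var week = new WeekDays();
        Console.WriteLine(week[0]);        // "Mon" — indexer access
        Console.WriteLine(WeekDays.Count); // 7    — static property
    }
}
```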

 

13. What are boxing and unboxing?

        Boxing converts a value type to a reference type: the value is copied into a new object on the managed heap. Unboxing converts that reference back to a value type: an explicit cast copies the value out of the object.
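
        A short sketch of both conversions; the commented-out line notes the run-time failure mode when unboxing to the wrong type.

```csharp
using System;

public static class BoxingDemo
{
    public static void Main()
    {
        int i = 42;
        object boxed = i;       // boxing: copies the int into a heap object
        int j = (int)boxed;     // unboxing: explicit cast copies the value back out
        Console.WriteLine(j);   // 42

        // Unboxing must target the exact value type that was boxed:
        // long wrong = (long)boxed;   // throws InvalidCastException at run time
    }
}
```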

 

14. What are the similarities and differences between a class and a struct?

        Both can declare fields, methods, and properties, and both can implement interfaces. A class is a reference type: its instances are allocated on the managed heap, and variables hold references to them.

        A struct is a value type: its data typically lives on the stack, or inline inside its containing object, and a variable holds the value itself.
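
        A minimal sketch contrasting the two, with hypothetical PointS/PointC types: assigning a struct copies the whole value, while assigning a class copies only the reference, so both variables alias one heap object.

```csharp
using System;

struct PointS { public int X; }   // value type
class  PointC { public int X; }   // reference type

public static class ClassStructDemo
{
    public static void Main()
    {
        var s1 = new PointS { X = 1 };
        var s2 = s1;                 // copies the entire value
        s2.X = 99;
        Console.WriteLine(s1.X);     // 1  — s1 keeps its own copy

        var c1 = new PointC { X = 1 };
        var c2 = c1;                 // copies only the reference
        c2.X = 99;
        Console.WriteLine(c1.X);     // 99 — c1 and c2 alias one object
    }
}
```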

 

15. What are the differences between value types and reference types?

        1. Assigning one value-type variable to another copies the contained value. Assigning a reference-type variable copies only the reference to the object, not the object itself (as the sketch under question 14 shows).

        2. No new type can be derived from a value type: all value types implicitly derive from System.ValueType. Like reference types, however, structs can implement interfaces.

        3. A value type cannot hold null; the nullable-types feature (T?, i.e. Nullable<T>) is what allows null to be assigned to a value type.

        4. Every value type has an implicit default constructor that initializes the type to its default value, with all fields zeroed (points 2 through 4 are illustrated in the sketch below).
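
        A sketch of points 2 through 4 with a hypothetical Size struct: it implements an interface even though nothing can derive from it, null is only reachable through Size?, and default(Size) zero-initializes every field.

```csharp
using System;

// A struct cannot serve as a base type, but it can implement interfaces.
struct Size : IEquatable<Size>
{
    public int W, H;
    public bool Equals(Size other) { return W == other.W && H == other.H; }
}

public static class ValueTypeDemo
{
    public static void Main()
    {
        // Size s = null;            // compile error: value types reject null
        Size? maybe = null;          // Nullable<Size> is what permits null
        Console.WriteLine(maybe.HasValue);       // False

        Size d = default(Size);      // implicit default: W == 0, H == 0
        Console.WriteLine(d.W);      // 0

        Console.WriteLine(d.Equals(new Size())); // True — interface method
    }
}
```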

 
