2020-06-24 Study Log

Today's study: short-answer questions

11. What is the difference between a property and a public field?

        A public field exposes storage directly, so callers can read and write it without restriction. A property is a pair of get/set accessor methods that wrap (usually private) storage: it can validate assignments, be read-only or write-only, appear in an interface, be virtual, and change its implementation later without breaking callers.
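The difference is easiest to see in code. A minimal sketch (the `Account` type and member names are illustrative, not from the notes):

```csharp
using System;

public class Account
{
    // Public field: raw storage, callers can write anything to it.
    public decimal BalanceField;

    // Property: get/set accessors wrapping a private backing field,
    // so invalid writes can be rejected and the representation can
    // change later without breaking callers.
    private decimal _balance;
    public decimal Balance
    {
        get { return _balance; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value));
            _balance = value;
        }
    }
}
```

Assigning `-1` to `BalanceField` silently succeeds, while assigning `-1` to `Balance` throws; the property could also later become computed or logged with no change to call sites.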

 

12. Describe the differences between a property and an indexer.

        1. A property has a freely chosen name; an indexer must be declared with the this keyword.
        2. A property can be an instance or a static member; an indexer must be an instance member.
        3. An indexer has an index parameter list; a property does not.
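All three differences show up in a short sketch (the `WeekSchedule` type is illustrative, not from the notes):

```csharp
public class WeekSchedule
{
    private readonly string[] _tasks = new string[7];

    // Property: freely named, and could equally be declared static.
    public int Length => _tasks.Length;

    // Indexer: must be declared with `this`, is always an instance
    // member, and takes a parameter list in square brackets.
    public string this[int day]
    {
        get => _tasks[day];
        set => _tasks[day] = value;
    }
}
```

Callers index the object directly, `schedule[0] = "review"`, mirroring array syntax, whereas `schedule.Length` reads the property by name.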

 

13. What are boxing and unboxing?

        Boxing converts a value type to a reference type: the value is copied into a newly allocated object on the managed heap. Unboxing converts that reference back to a value type: the value is copied out of the heap object. Both conversions copy data, so frequent boxing in hot paths has a real cost.
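A minimal sketch of both conversions, showing that the box holds an independent copy of the value:

```csharp
using System;

class BoxingDemo
{
    static void Main()
    {
        int i = 123;        // value type, stored inline
        object o = i;       // boxing: the value is copied into a heap object
        int j = (int)o;     // unboxing: the value is copied back out

        i = 456;            // the box holds an independent copy,
        Console.WriteLine(o);   // so this still prints 123
        Console.WriteLine(j);   // 123
    }
}
```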

 

14. What are the similarities and differences between a class and a struct?

        Both can be instantiated, declare fields, methods, properties, and constructors, and implement interfaces. A class is a reference type: its instances are allocated on the managed heap and accessed through references. A struct is a value type: its instances are stored inline, on the stack or embedded in their containing object. A class supports inheritance; a struct cannot inherit from (or be inherited by) another struct or class.
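A sketch of the declaration-level differences (the shape types are illustrative, not from the notes):

```csharp
using System;

interface IShape { double Area(); }

// A struct can implement an interface and declare members,
// but it cannot inherit from another struct or class.
struct Square : IShape
{
    public double Side;
    public double Area() => Side * Side;
}

// A class supports inheritance in both directions.
class Shape { }
class Circle : Shape, IShape
{
    public double Radius;
    public double Area() => Math.PI * Radius * Radius;
}
```

A `new Square()` lives inline wherever its variable is declared; a `new Circle()` allocates on the heap, and the variable holds only a reference to it.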

 

15. What are the differences between value types and reference types?

        1. Assigning one value-type variable to another copies the contained value. Assigning a reference-type variable copies only the reference to the object, not the object itself.
        2. No new type can derive from a value type; all value types implicitly derive from System.ValueType. Like reference types, however, structs can implement interfaces.
        3. A value type cannot hold null; the nullable-types feature (e.g. int?) does, however, allow null to be assigned to a wrapped value type.
        4. Every value type has an implicit parameterless constructor that initializes the type's default value.
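Points 1, 3, and 4 can be demonstrated in one short program (point 2 is a compile-time restriction; type names are illustrative):

```csharp
using System;

class ValueVsReferenceDemo
{
    struct Point { public int X; }   // a value type

    static void Main()
    {
        // 1. Assignment copies the value for value types...
        var a = new Point { X = 1 };
        var b = a;
        b.X = 99;
        Console.WriteLine(a.X);          // 1: a is unaffected

        // ...but only the reference for reference types.
        var arr1 = new[] { 1 };
        var arr2 = arr1;
        arr2[0] = 99;
        Console.WriteLine(arr1[0]);      // 99: one shared array

        // 3. A plain value type cannot be null; a nullable one can.
        int? maybe = null;
        Console.WriteLine(maybe.HasValue);   // False

        // 4. The implicit default constructor zero-initializes.
        Console.WriteLine(default(Point).X); // 0
    }
}
```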

 
