MKAnnotation image offset with custom pin image

This post explains how to adjust centerOffset (and the layer anchor point) so that a custom MKAnnotation view's image is positioned precisely on the map and stays anchored to its coordinate at every zoom level, avoiding the drift that otherwise appears when zooming.



Your MKAnnotationView is always drawn at the same scale; the map's zoom level doesn't matter. That's why centerOffset isn't tied to the zoom level.

annView.centerOffset is what you need. If your pin is not in the right place (for example, its bottom center shifts a little when you change the zoom level), it's because you didn't set the right centerOffset.

By the way, if you want the coordinate to sit at the bottom center of the image, the x component of your centerOffset should be 0.0f, since the annotation view centers the image by default. So try:

annView.centerOffset = CGPointMake(0, -imageHeight / 2);
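
For context, here is a minimal sketch of how this typically fits into the MKMapViewDelegate callback mapView:viewForAnnotation: (assuming MapKit is imported and the class conforms to MKMapViewDelegate). The reuse identifier and image name below are placeholders, not from the original answer:

    - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation {
        // Keep the default view (blue dot) for the user's location.
        if ([annotation isKindOfClass:[MKUserLocation class]]) {
            return nil;
        }

        static NSString *reuseId = @"CustomPin";
        MKAnnotationView *annView = [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
        if (annView == nil) {
            annView = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:reuseId];
        } else {
            annView.annotation = annotation;
        }

        UIImage *pinImage = [UIImage imageNamed:@"custom_pin"]; // placeholder asset name
        annView.image = pinImage;

        // Shift the view up by half the image height so the bottom center of
        // the image, rather than its center, sits on the annotation's coordinate.
        annView.centerOffset = CGPointMake(0, -pinImage.size.height / 2);

        return annView;
    }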

Or alternatively:

Setting the centerOffset often moves the image around when the user rotates or zooms the map, because the anchor point is still set to the center of the image instead of its bottom center.

You can set the anchor point of the annotation view's layer to the bottom center of your custom image as follows:

yourAnnotationView.layer.anchorPoint = CGPointMake(0.5f, 1.0f);

This way, when the user zooms or rotates the map, your custom annotation will never drift from its coordinate on the map.
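
As a sketch under the same assumptions as the example above (the view is the annView created in mapView:viewForAnnotation:), the anchor point would be set right after the view is configured:

    // Pin the layer's anchor to the bottom center of the custom image so the
    // tip of the pin stays on its map coordinate while the map zooms or rotates.
    annView.layer.anchorPoint = CGPointMake(0.5f, 1.0f);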


Source: http://stackoverflow.com/questions/8165262/mkannotation-image-offset-with-custom-pin-image

Reposted from: https://www.cnblogs.com/zsw-1993/p/4879181.html
