On the loading problem with linkage

This article describes a way to speed up Flash loading: by adjusting the linkage settings and using an intermediate MovieClip, library elements are loaded on demand, which greatly reduces the amount of data in the initial load.
The linkage feature in Flash brings great convenience to our programming: we can even leave the main scene completely empty, keep every element in the library, and organize and call them with ActionScript. However, under the default settings Flash must download all linked elements before it executes the actions on frame 1, which defeats the loading screen.
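To make the setup concrete, here is a minimal ActionScript 2 sketch of attaching a linked library symbol at runtime (the linkage identifier "hero_mc" is made up for illustration, not something from the original tutorial); by default, every such symbol is exported in frame 1 of the SWF, so all of its data has to arrive before the first frame can run:

// Frame-1 action on the main timeline: attach a library symbol by its
// linkage identifier ("hero_mc" is hypothetical) instead of placing it
// on the stage by hand.
var hero:MovieClip = this.attachMovie("hero_mc", "hero", this.getNextHighestDepth());
hero._x = 100;
hero._y = 200;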
How do we solve this problem? fictiony offered a solution that handles it nicely:
First, for every linked symbol in the library, uncheck the "Export in first frame" option (see figure),
[img]/Files/BeyondPic/2006-9/19/pic1.gif[/img]
 
This keeps the linked elements from being exported before frame 1.
Second, now that Flash has been told not to export these linked elements before frame 1, something still has to tell Flash when to export them. The concrete approach: between the movie's loading screen and its main content, place a MovieClip whose first frame is empty and carries the action stop(); then drag every symbol that needs to be exported onto its second frame. From the way a MovieClip works, it will stay stopped on the first, empty frame, so nothing on the second frame ever appears on stage. In essence, this MovieClip does nothing except tell Flash to load all the linked elements between the loading screen and the main content.
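A minimal sketch of the scripts involved, assuming the intermediate clip is an ordinary MovieClip placed on the timeline between the loading frames and the main content (the linkage identifier "panel_mc" below is made up for illustration):

// Frame 1 of the intermediate MovieClip: an empty frame that simply halts
// the clip. Frame 2 holds all the linked symbols, so their data is
// downloaded along with this part of the timeline but never shown on stage.
stop();

// Later, in the main content, those symbols have already been downloaded
// and can be attached as usual:
this.attachMovie("panel_mc", "panel", this.getNextHighestDepth());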
After these two steps you will find that the amount of data loaded for frame 1 drops dramatically and the loading screen becomes smooth and genuinely useful. Give it a try :)
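The article assumes a standard frame-1 preloader sitting in front of everything else; for completeness, here is a minimal ActionScript 2 sketch of such a preloader (the "loadBar" clip and the "main" frame label are assumptions, not part of the original):

// Frame-1 preloader: stay on this frame until the whole movie (including
// the intermediate MovieClip and its linked symbols) has downloaded.
this.onEnterFrame = function() {
    var loaded:Number = this.getBytesLoaded();
    var total:Number = this.getBytesTotal();
    if (total > 0) {
        this.loadBar._xscale = 100 * loaded / total; // "loadBar" is a hypothetical progress-bar clip
    }
    if (total > 0 && loaded >= total) {
        delete this.onEnterFrame;
        this.gotoAndPlay("main"); // jump past the loading screen; the "main" label is assumed
    }
};
stop();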
Reposted from: http://www.5uflash.com/flashjiaocheng/Flash-loadingjiaocheng/2046.html