Manage images

This article describes how to manage images with the SDK: listing the available images, getting an image by ID, finding an image by name, and uploading a new image. Each operation is shown with a short code example using the glance and nova clients.


When working with images in the SDK, you will call both glance and nova methods.

List images

To list the available images, call the glanceclient.v2.images.Controller.list method. This method returns a generator, for example: &lt;generator object list at 0x105e9c2d0&gt;.

import glanceclient.v2.client as glclient
glance = glclient.Client(...)
images = glance.images.list()
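Because list() returns a generator, results are produced lazily and the generator is single-pass. A minimal sketch of consuming it (the fake_image_list function and its records below are mocked stand-ins, since the real call needs a live Glance endpoint):

```python
# Mocked stand-in for glance.images.list(); a real call yields image
# dicts with many more fields (status, visibility, checksum, ...).
def fake_image_list():
    yield {"id": "c002c82e-2cfa-4952-8461-2095b69c18a6", "name": "cirros"}
    yield {"id": "9d0cf5ec-4904-4e37-8c05-a6f07f88d35a", "name": "ubuntu"}

# Generators are single-pass: materialize into a list if you need to
# iterate the results more than once.
images = list(fake_image_list())
names = [img["name"] for img in images]
print(names)  # -> ['cirros', 'ubuntu']
```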

Get image by ID

To retrieve an image object from its ID, call the glanceclient.v2.images.Controller.get method:


import glanceclient.v2.client as glclient
image_id = 'c002c82e-2cfa-4952-8461-2095b69c18a6'
glance = glclient.Client(...)
image = glance.images.get(image_id)

Get image by name

The Image service Python bindings do not support the retrieval of an image object by name. However, the Compute Python bindings enable you to get an image object by name. To get an image object by name, call the novaclient.v1_1.images.ImageManager.find method:



import novaclient.v1_1.client as nvclient
name = "cirros"
nova = nvclient.Client(...)
image = nova.images.find(name=name)
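If you prefer to stay entirely within the Image service bindings, one workaround is to filter the output of list() yourself. The find_image_by_name helper below is hypothetical (it is not part of glanceclient) and the records are mocked; it mirrors find()'s semantics of failing on zero or on multiple matches:

```python
def find_image_by_name(images, name):
    """Return the single image whose 'name' matches; raise otherwise.

    Mirrors nova's find(): an error for no match and for ambiguous
    (multiple) matches. `images` is any iterable of image dicts,
    such as the generator returned by glance.images.list().
    """
    matches = [img for img in images if img.get("name") == name]
    if not matches:
        raise LookupError("no image named %r" % name)
    if len(matches) > 1:
        raise LookupError("multiple images named %r" % name)
    return matches[0]

# Mocked records standing in for glance.images.list():
records = [
    {"id": "c002c82e-2cfa-4952-8461-2095b69c18a6", "name": "cirros"},
    {"id": "9d0cf5ec-4904-4e37-8c05-a6f07f88d35a", "name": "ubuntu"},
]
image = find_image_by_name(records, "cirros")
print(image["id"])
```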

Upload an image

To upload an image, call the glanceclient.v2.images.ImageManager.create method:


import glanceclient.v2.client as glclient
imagefile = "/tmp/myimage.img"  # path of the image file to upload
glance = glclient.Client(...)
with open(imagefile, 'rb') as fimage:  # binary mode: image data is raw bytes
    glance.images.create(name="myimage", is_public=False, disk_format="qcow2",
                         container_format="bare", data=fimage)
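Opening the image file in binary mode matters: image data is raw bytes, and text mode may fail to decode it. A self-contained sketch of the read side (the file created here is a 4-byte stand-in, not a real qcow2 image):

```python
import os
import tempfile

# Create a stand-in "image": just the 4-byte qcow2 magic number.
fd, imagefile = tempfile.mkstemp(suffix=".img")
os.write(fd, b"QFI\xfb")
os.close(fd)

# 'rb' hands glance raw bytes; text mode would try to decode them
# and can raise UnicodeDecodeError on arbitrary image data.
with open(imagefile, "rb") as fimage:
    header = fimage.read(4)
    # In a real upload you would pass the open handle itself:
    # glance.images.create(name="myimage", ..., data=fimage)

print(header == b"QFI\xfb")  # -> True
os.remove(imagefile)
```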


For details, see:

http://docs.openstack.org/user-guide/sdk_manage_images.html
