The Problem with Cloud-Computing Standardization


http://www.infoq.com/articles/problem-with-cloud-computing-standardization

Cloud computing has become an increasingly popular approach in recent years, with seemingly nothing but ongoing growth in its future.

However, some industry observers say that the rapid growth has caused, and is also threatened by, the failure of comprehensive cloud-computing standards to gain traction, despite the many groups working on them.


They say the lack of standards could make cloud computing trickier to use. 

It could also restrict implementation by limiting interoperability among cloud platforms and causing inconsistency in areas such as security.

For example, the lack of standardization could keep a customer trying to switch from a private to a public cloud from doing so as seamlessly as switching browsers or e-mail systems.

In addition, it would keep users from knowing the basic capabilities they could expect from any cloud service.

“Interoperability between offerings and the portability of services from one provider to another is very important to the customer to maximize the expected [return on investment] from cloud computing,” explained IBM vice president for software standards Angel Luis Diaz.

Moreover, interoperability would keep users from being locked into a single cloud provider.


A lack of security standards - addressing issues such as data privacy and encryption - is also hurting wider cloud-computing adoption, said Nirlay Kundu, senior manager at Wipro Consulting Services.


With potentially sensitive information stored off-site and available only over the Internet, security is a critical concern, explained Vishy Narayan, principal architect with the System Integration Practice at Infosys Technologies, a vendor of consulting, technology, engineering, and outsourcing services.


According to Lynda Stadtmueller, program director for cloud computing at market research firm Frost & Sullivan’s Stratecast practice, an effective lack of standardization makes it difficult for buyers to compare and evaluate cloud offerings.


Of course, cloud computing is relatively young, so the lack of standardization - which usually occurs with more mature technologies - is not altogether surprising. And some experts say the market’s immaturity makes it too difficult for any one organization to mandate standards.


There may be challenges to cloud-computing standardization along the way, and overcoming them could determine just how bright cloud computing’s future will be, said Winston Bumpus, director of standards architecture at VMware, a virtualization and cloud infrastructure vendor.

He is also president of the Distributed Management Task Force (DMTF), an industry-based standards consortium.

Cloud Standardization

At its most basic, cloud computing is simply the delivery of applications; security and other services; storage and other infrastructures; and platforms such as those for software development to users over the Internet or a private cloud.

Cloud computing appeals to many organizations because it minimizes the amount of hardware and software that users must own, maintain, and upgrade. In essence, users pay only for the computing capability they need.

Standardization issues

Many of today’s in-progress standards, summarized in Table 1, are being based in part on the US National Institute of Standards and Technology’s Special Publication 800-145, a document called “The NIST Definition of Cloud Computing (Draft).”

True interoperability requires translation of specific application and service functionality from one cloud to another, and this won’t happen without standardization, said Michael Crandell, CEO and founder of cloud-computing vendor RightScale. 

For example, there currently is no standardized way to seamlessly translate security requirements and policies across cloud offerings.


A key standardization issue involves virtualization, which plays a critical role in most cloud-computing approaches, said Willem van Biljon, cofounder and vice president of products for Nimbula, a cloud-infrastructure vendor.


Virtualization’s flexibility lets cloud providers optimize workloads among their hardware resources. This also enables users to, for example, connect to storage without having to know about server names and addresses, which would be the case in a traditional network.


In virtualization, hypervisors manage a host server’s processing and other resources so that it can run multiple virtual machines (VMs), using different operating systems and other platforms. Each cloud platform has its own type of hypervisor, noted Crandell.


Cloud systems utilizing different hypervisors won’t interoperate, in part because they don’t use the same data formats, noted Nimbula’s Van Biljon. 

Cloud platforms also won’t interoperate because their VMs don’t interact in a standard way with different network and storage architectures, APIs, network connections, databases, and other elements, Crandell explained.


VM translation is an important issue to enable the preservation of security policy, network policy, and identity across clouds, said Van Biljon. Without standardization, moving a workload from one cloud platform to another requires creating a new VM on the second platform and then reinstalling the application, which can take considerable time and effort.
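The manual translation burden described above can be sketched in a few lines. The two descriptor formats and every field name below are hypothetical, invented purely for illustration; a real migration also requires rebuilding the VM and reinstalling the application, which no field mapping can automate.

```python
# Hypothetical VM descriptors for two cloud providers. All field
# names here are invented for illustration; real providers each
# expose their own, mutually incompatible schemas.

def translate_descriptor(src: dict) -> dict:
    """Map a descriptor from a hypothetical 'cloud A' format to a
    hypothetical 'cloud B' format. Every field must be mapped by
    hand, and anything without a counterpart is silently lost --
    which is exactly the gap a standard would close."""
    return {
        "instance_name": src["vm_name"],
        "cpu_count": src["vcpus"],
        "ram_gb": src["memory_mb"] / 1024,
        # Cloud B has no equivalent of cloud A's security-policy
        # field, so that policy is dropped in translation.
    }

cloud_a_vm = {"vm_name": "web-01", "vcpus": 2,
              "memory_mb": 4096, "security_policy": "strict"}
cloud_b_vm = translate_descriptor(cloud_a_vm)
print(cloud_b_vm)
```

Note how the security policy vanishes: without a standard vocabulary, each hand-written mapping decides for itself what survives the move.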

DMTF

The DMTF’s Open Virtualization Format, which debuted last year, is a first step toward hypervisor, and thus cloud-computing, interoperability.


OVF provides a way to move virtual machines from one hosted platform to another, noted Dave Link, founder and CEO of ScienceLogic, an IT-infrastructure-management product vendor.

OVF standardizes use of a container that stores metadata and virtual machines and enables the migration of VMs between clouds. 

It also defines certain aspects of the VM and the application that runs on it, including size, CPU and networking requirements, memory, and storage.
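As a rough sketch of how such a descriptor might be consumed, the following parses a heavily simplified, OVF-style XML envelope and extracts the declared hardware requirements. The element names and attributes here are simplified stand-ins, not the actual DMTF schema, which uses XML namespaces and a much richer structure.

```python
import xml.etree.ElementTree as ET

# A simplified, OVF-style descriptor. This keeps only the core idea:
# a portable container that declares a VM's hardware requirements in
# a platform-neutral way. Element names are illustrative, not the
# real OVF schema.
ovf_like = """
<Envelope>
  <VirtualSystem id="web-server">
    <Item kind="cpu" quantity="2"/>
    <Item kind="memory" quantity="4096" units="MB"/>
    <Item kind="disk" quantity="20" units="GB"/>
  </VirtualSystem>
</Envelope>
"""

root = ET.fromstring(ovf_like)
system = root.find("VirtualSystem")
requirements = {item.get("kind"): item.get("quantity")
                for item in system.findall("Item")}
print(system.get("id"), requirements)
```

Because the descriptor travels with the VM, any platform that can read it knows what resources to provision before the machine boots.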


However, users must manually handle details necessary for interoperability, such as application-component interoperability.

The DMTF is still working on enabling a VM to run on multiple platforms and on defining how an application operates in the cloud, to perform functions such as load balancing and session handling.


ANSI recognizes OVF as a standard, and it is also under consideration by the ISO, noted the DMTF’s Bumpus.

The DMTF’s Open Cloud Standards Incubator subgroup is working to improve cloud interoperability via approaches such as open cloud-resource management standards, cloud-service-portability specifications, and security mechanisms.

IEEE

IEEE Working Groups P2301 and P2302 are developing comprehensive standards that will address migration, management, and interoperability among cloud-computing platforms.


P2301, Draft Guide for Cloud Portability and Interoperability Profiles, will serve as a metastandard, providing profiles of existing and in-progress cloud-computing standards from multiple organizations in critical areas such as cloud-based applications, portability, management, interoperability interfaces, file formats, and operation conventions, said Steve Diamond, chair of the IEEE Cloud Computing Initiative and managing director of the Picosoft technology-business consultancy.


The purpose is to avoid having multiple standards address the same issues while having no standards addressing others, explained David Bernstein, chair of the P2301 and P2302 Working Groups and managing director of the Cloud Strategy Partners consultancy.


P2302, Draft Standard for Intercloud Interoperability and Federation, defines the topology, protocols, functionality, and governance required for cloud-to-cloud interoperability and data exchange.


Diamond said the two working groups haven’t yet established their roadmaps and probably won’t finish all of their work until 2012.

Open Grid Forum

The OGF - a community of grid-computing users, developers, and vendors - is developing the Open Cloud Computing Interface. OCCI specifies multiple protocols and APIs for various cloud-computing management tasks, including deployment, automatic scaling, and network monitoring.


The APIs will incorporate aspects of other cloud-computing-related APIs, such as those used in GoGrid and Amazon’s Elastic Compute Cloud. The development of APIs will enable interfacing and interaction among different infrastructure-as-a-service (IaaS) platforms.
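In OCCI's HTTP rendering, resource types are identified through Category headers and attributes through X-OCCI-Attribute headers. The sketch below only assembles such a request as text (no network call is made); the scheme URL and attribute names follow the general OCCI style but should be read as illustrative rather than a verbatim copy of the specification.

```python
# Assemble an illustrative OCCI-style HTTP request for creating a
# compute resource. This only builds the request text; header values
# are indicative of the OCCI rendering, not guaranteed to be exact.

def occi_create_compute(host: str, cores: int, memory_gb: float) -> str:
    category = ('Category: compute; '
                'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                'class="kind"')
    attrs = (f'X-OCCI-Attribute: occi.compute.cores={cores}, '
             f'occi.compute.memory={memory_gb}')
    return "\r\n".join([
        "POST /compute/ HTTP/1.1",
        f"Host: {host}",
        category,
        attrs,
        "", "",
    ])

request = occi_create_compute("cloud.example.com", 2, 4.0)
print(request)
```

The appeal for IaaS interoperability is that the same request shape could provision a machine on any provider that implements the interface, regardless of the hypervisor underneath.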

Organization for the Advancement of Structured Information Standards

OASIS has two technical committees working on cloud-centric issues.

The IDCloud Technical Committee is trying to resolve security issues regarding identity management in cloud computing, to make sure people using cloud resources are who they say they are. The group is also developing guidelines for vulnerability mitigation, considered important because of cloud computing’s open, distributed architecture.

The Symptoms Automation Framework Technical Committee is working on ways to make sure cloud-computing providers understand consumer requirements - such as capacity and quality of service (QoS) - when designing and providing services.

Storage Networking Industry Association

The SNIA’s Cloud Data Management Interface standardizes cloud storage in three key areas.

CDMI’s client-to-cloud storage standard addresses the way a user interfaces with cloud-based storage resources. The cloud-data-management standard deals with issues such as QoS and encryption. The cloud-to-cloud-interactions standard focuses on the way stored data can be moved among clouds.
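CDMI itself is a RESTful, JSON-based interface. As a loose sketch of the data-management side, a request body asking for specific storage properties when creating a container might look like the following; the "cdmi_*" metadata keys mirror the flavor of CDMI's data-system metadata but are not guaranteed to match the specification verbatim.

```python
import json

# An illustrative CDMI-style container-creation body. The metadata
# keys below are modeled on CDMI data-system metadata but should be
# treated as an approximation, not a verbatim copy of the spec.
container = {
    "metadata": {
        "cdmi_data_redundancy": "2",            # desired number of copies
        "cdmi_encryption": "AES-256",           # requested at-rest encryption
        "cdmi_geographic_placement": ["US", "EU"],  # allowed regions
    },
}
body = json.dumps(container, indent=2)
print(body)
```

Expressing QoS and encryption requirements as portable metadata, rather than provider-specific settings, is what lets stored data move between clouds without renegotiating those guarantees by hand.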

CDMI is now an SNIA architecture standard. SNIA also plans to submit it for ANSI and ISO standardization.

Standardization Challenges

The sheer number of standardization efforts, led by both vendors and standards bodies, is muddying the waters.

They make it difficult for vendors and users to determine what is and isn’t going to emerge.

Moreover, there are multiple standardization efforts in some cloud-computing areas and none in others.


According to IBM’s Diaz, even though the standardization efforts are important, they aren’t doing enough. 

He said they need more customer input, to make sure users’ needs are met. 

A group of companies and other organizations recently formed the Cloud Standards Customer Council to supply such input.


An important issue is getting all vendors on the same page in terms of what a standard will cover, noted Dave Linthicum, founder and chief technology officer of cloud-computing consultancy Blue Mountain.

Most vendors, he explained, have their own agendas, making the standards-adoption process difficult, frustrating, and so time-consuming that some organizations give up.


Another challenge is the mar­ket’s youth, said James Staten, vice president and principal analyst at Forrester Research. Because cloud computing is not yet established, the technological landscape could change substantially, making a new standard obsolete.


The technology and marketplace will need to mature and stabilize a bit before standardization requirements become apparent, added Steve Crawford, vice president of marketing and business development at cloud-services provider Jamcracker.


Many industry watchers say that cloud-computing standards development will occur but that timing is a key factor.


The standardization process will crystallize during the next five years with the emergence of a few core specifications forming a baseline that all cloud vendors will need to support, predicted IBM’s Diaz. Soon after, he added, domain- and industry-specific standards will appear. Stratecast’s Stadtmueller estimated that standards addressing security and interoperability will emerge within five years, once providers, governments, and compliance organizations agree on what constitutes a secure cloud environment.


On the other hand, said James Thomason, chief architect at IaaS software vendor Gale Technologies, cloud-computing standards will take “an excruciatingly long time to emerge.” 

He predicted that vendors will try to standardize their own implementations at first to gain a competitive advantage, initially creating ineffective specifications. 

Thus, he said, standardization will occur only when the market can no longer tolerate a lack of interoperability.

Jamcracker’s Crawford agreed, stating that standardization will be driven by consensus among large enterprise customers who insist on interoperability or by vendors recognizing that standards development is needed to drive further adoption.

There will be hiccups along the way, he said, but they will accentuate the need for standards, particularly to address interoperability.

About the Author

Sixto Ortiz Jr. is a freelance technology writer based in Amarillo, Texas. Contact him at sortiz1965@gmail.com.
