OpenStack Nova Architecture

One of the common refrains I hear from people getting started with OpenStack is the lack of good introductory architectural overviews of the project. I was confronted by the same problem when I first started with the project - it was easy to get the low level code and API documentation but it was very difficult to find a “lay of the land”-type overview. Now that Cactus (OpenStack’s third version) has been released, I thought I’d take advantage of the lull in development to write up a quick architectural overview from my point of view. Since OpenStack is a fairly broad topic, I’ll break my thoughts into several posts. Today’s post will deal with OpenStack Nova’s (compute cloud) high level architecture.

Before we dive into the conceptual and logical architecture, let’s take a second to explain the OpenStack project:

OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system.

You can think of it as software to power your own Infrastructure as a Service (IaaS) offering like Amazon Web Services. It currently encompasses three main projects:

  • Swift which provides object/blob storage. This is roughly analogous to Rackspace Cloud Files (from which it is derived) or Amazon S3.
  • Glance which provides discovery, storage and retrieval of virtual machine images for OpenStack Nova.
  • Nova which provides virtual servers upon demand. This is similar to Rackspace’s Cloud Servers or Amazon EC2.

While these three projects provide the core of the cloud infrastructure, OpenStack is open and evolving — there will be more projects (there are already related projects for web interfaces and a queue service). With that brief introduction, let’s delve into a conceptual architecture and then examine how OpenStack Nova could map to it.

Cloud Provider Conceptual Architecture

Imagine that we are going to build our own IaaS cloud and offer it to customers. To achieve this, we would need to provide several high level features:

  1. Allow application owners to register for our cloud services, view their usage and see their bill (basic customer relations management functionality)
  2. Allow Developers/DevOps folks to create and store custom images for their applications (basic build-time functionality)
  3. Allow DevOps/Developers to launch, monitor and terminate instances (basic run-time functionality)
  4. Allow the Cloud Operator to configure and operate the cloud infrastructure

While there are certainly many, many other features that we would need to offer (especially if we were to follow a more complete industry framework like eTOM), these four get to the very heart of providing IaaS. Now assuming that you agree with these four top level features, you might put together a conceptual architecture that looks something like this:

IaaS Conceptual Architecture

In this model, I’ve imagined four sets of users (developers, devops, owners and operators) that need to interact with the cloud and then separated out the functionality needed for each. From there, I’ve followed a pretty common tiered approach to the architecture (presentation, logic and resources) with two orthogonal areas (integration and management). Let’s explore each a little further:

  • As with presentation layers in more typical application architectures, components here interact with users to accept and present information. In this layer, you will find web portals to provide graphical interfaces for non-developers and API endpoints for developers. For more advanced architectures, you might find load balancing, console proxies, security and naming services present here also.

  • The logic tier would provide the intelligence and control functionality for our cloud. This tier would house orchestration (workflow for complex tasks), scheduling (determining the mapping of jobs to resources), policy (quotas and such), image registry (metadata about instance images) and logging (events and metering).

  • There will need to be integration functions within the architecture. It is assumed that most service providers will already have customer identity and billing systems in place. Any cloud architecture would need to integrate with these systems.

  • As with any complex environment, we will need a management tier to operate the environment. This should include an API to access the cloud administration features as well as some forms of monitoring. It is likely that the monitoring functionality will take the form of integration into an existing tool. While I’ve highlighted monitoring and an admin API for our fictional provider, in a more complete architecture you would see a vast array of operational support functions like provisioning and configuration management.

  • Finally, since this is a compute cloud, we will need actual compute, network and storage resources to provide to our customers. This tier provides these services, whether they be servers, network switches, network attached storage or other resources.

With this model in place, let’s shift gears and look at OpenStack Nova’s logical architecture.

OpenStack Nova Logical Architecture

Now that we’ve looked at a proposed conceptual architecture, let’s see how OpenStack Nova is logically architected. Since Cactus is the newest release, I will concentrate there (which means if you are viewing this after around July 2011, this will be out of date). There are several logical components to the OpenStack Nova architecture, but the majority of these components are custom-written Python daemons of two varieties:

  • WSGI applications to receive and mediate API calls (nova-api, glance-api, etc.); a minimal sketch of this variety appears just below this list
  • Worker daemons to carry out orchestration tasks (nova-compute, nova-network, nova-schedule, etc.)
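
To make the first variety concrete, below is a minimal, self-contained sketch of the WSGI-application pattern using only the Python standard library. This is not nova-api itself; the port number and the JSON payload are illustrative assumptions.

```python
# A minimal WSGI sketch (not nova-api): accept an HTTP request and return JSON.
import json
from wsgiref.simple_server import make_server


def application(environ, start_response):
    """Answer every request with a tiny JSON document."""
    body = json.dumps({"versions": [{"id": "v1.1", "status": "CURRENT"}]}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]


if __name__ == "__main__":
    # 8774 is the port nova-api conventionally listens on; any free port works here.
    make_server("0.0.0.0", 8774, application).serve_forever()
```

The real API daemons layer routing, authentication and other middleware on top of this same callable interface, but the basic request/response contract is the one shown here.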

However, two essential pieces of the logical architecture are neither custom written nor Python based: the messaging queue and the database. These two components facilitate the asynchronous orchestration of complex tasks through message passing and information sharing. Putting this all together, we get a picture like this:

OpenStack Nova Logical Architecture

This diagram is complicated, but not overly informative, as it can be summed up in three sentences:

  • End users (DevOps, Developers and even other OpenStack components) talk to nova-api to interface with OpenStack Nova
  • OpenStack Nova daemons exchange info through the queue (actions) and database (information) to carry out API requests
  • OpenStack Glance is basically a completely separate infrastructure, with which OpenStack Nova interfaces through the Glance API

Now that we see the overview of the processes and their interactions, let’s take a closer look at each component.

  • The nova-api daemon is the heart of OpenStack Nova. You may see it illustrated in many pictures of OpenStack Nova as the API and the “Cloud Controller”. While this is partly true, the cloud controller is really just a class (specifically the CloudController in trunk/nova/api/ec2/cloud.py) within the nova-api daemon. It provides an endpoint for all API queries (either OpenStack API or EC2 API), initiates most of the orchestration activities (such as running an instance) and also enforces some policy (mostly quota checks).

  • The nova-schedule process is conceptually the simplest piece of code in OpenStack Nova: it takes a virtual machine instance request from the queue and determines where it should run (specifically, which compute server host it should run on). In practice, however, I am sure this will grow to be the most complex, as it needs to factor in the current state of the entire cloud infrastructure and apply complicated algorithms to ensure efficient usage. To that end, nova-schedule implements a pluggable architecture that lets you choose (or write) your own algorithm for scheduling; a toy illustration of this pluggable idea appears after this list. Currently, there are several to choose from (simple, chance, etc.) and it is an area of hot development for future releases of OpenStack Nova.

  • The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances. The process by which it does so is fairly complex (see this blog post by Laurence Luce for the gritty details) but the basics are simple: accept actions from the queue and then perform a series of system commands (like launching a KVM instance) to carry them out while updating state in the database.

  • As you can gather by the name, nova-volume manages the creation, attaching and detaching of persistent volumes to compute instances (similar functionality to Amazon’s Elastic Block Storage). It can use volumes from a variety of providers such as iSCSI or AoE.

  • The nova-network worker daemon is very similar to nova-compute and nova-volume. It accepts networking tasks from the queue and then performs tasks to manipulate the network (such as setting up bridging interfaces or changing iptables rules).

  • The queue provides a central hub for passing messages between daemons. This is currently implemented with RabbitMQ, but it could theoretically be any AMQP message queue supported by the Python amqplib (see the consumer sketch after this list).

  • The SQL database stores most of the build-time and run-time state for a cloud infrastructure. This includes the instance types that are available for use, instances in use, networks available and projects. Theoretically, OpenStack Nova can support any database supported by SQLAlchemy, but the only databases currently in wide use are sqlite3 (only appropriate for test and development work), MySQL and PostgreSQL. A small SQLAlchemy sketch appears after this list.

  • OpenStack Glance is a separate project from OpenStack Nova, but as shown above, complementary. While it is an optional part of the overall compute architecture, I can’t imagine that most OpenStack Nova installations will not be using it (or a complementary product). There are three pieces to Glance: glance-api, glance-registry and the image store. As you can probably guess, glance-api accepts API calls, much like nova-api, and the actual image blobs are placed in the image store. The glance-registry stores and retrieves metadata about images. The image store can be a number of different object stores, including OpenStack Swift.

  • Finally, another optional project that we will need for our fictional service provider is a user dashboard. I have picked the OpenStack Dashboard here, but there are also several other web front ends available for OpenStack Nova. The OpenStack Dashboard provides a web interface into OpenStack Nova to give application developers and devops staff similar functionality to the API. It is currently implemented as a Django web application.
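
As promised in the nova-schedule bullet, here is a toy illustration of the pluggable-scheduler idea. The class names, the host dictionaries and the way the driver is chosen are assumptions made up for the example; this is not Nova’s scheduler code.

```python
# A toy pluggable scheduler (illustrative only): every driver answers the same
# question -- which host should run this request? -- behind the same method.
import random


class ChanceScheduler:
    """Pick any host at random, in the spirit of the 'chance' driver."""
    def schedule(self, hosts, request):
        return random.choice(hosts)


class SimpleScheduler:
    """Pick the least loaded host, in the spirit of the 'simple' driver."""
    def schedule(self, hosts, request):
        return min(hosts, key=lambda h: h["running_instances"])


def load_scheduler(name):
    # In a real deployment this choice would come from configuration.
    drivers = {"chance": ChanceScheduler, "simple": SimpleScheduler}
    return drivers[name]()


hosts = [
    {"name": "compute-01", "running_instances": 12},
    {"name": "compute-02", "running_instances": 3},
]
scheduler = load_scheduler("simple")
print(scheduler.schedule(hosts, {"instance_type": "m1.small"}))
# -> {'name': 'compute-02', 'running_instances': 3}
```

Because every driver hides its algorithm behind the same method, swapping one scheduling policy for another does not disturb the rest of the system.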
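
The queue bullet mentioned RabbitMQ and amqplib, so the consumer sketch below shows the raw worker pattern of pulling actions off an AMQP queue with the py-amqplib library of that era. The exchange, queue and message format are assumptions invented for the example; Nova’s own daemons talk to the queue through an RPC layer rather than consuming raw messages like this.

```python
# A bare-bones AMQP consumer sketch (illustrative names, not Nova's RPC layer).
import json
from amqplib import client_0_8 as amqp

conn = amqp.Connection(host="localhost:5672", userid="guest",
                       password="guest", virtual_host="/")
chan = conn.channel()

# Declare a queue and bind it to a direct exchange (names are assumptions).
chan.queue_declare(queue="compute", durable=True, exclusive=False,
                   auto_delete=False)
chan.exchange_declare(exchange="nova", type="direct", durable=True,
                      auto_delete=False)
chan.queue_bind(queue="compute", exchange="nova", routing_key="compute")


def handle_message(msg):
    """Pretend to be a worker: read an action off the queue and 'carry it out'."""
    task = json.loads(msg.body)
    print("would run method %(method)s with args %(args)s" % task)
    # A real worker would now shell out to the hypervisor and update the
    # instance's row in the database.


chan.basic_consume(queue="compute", no_ack=True, callback=handle_message)
while True:
    chan.wait()  # block until a message arrives, then invoke the callback
```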
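
Finally, for the database bullet, here is a small SQLAlchemy sketch of how a daemon might persist instance state. The table and column names are invented for illustration and do not match Nova’s actual schema; sqlite3 keeps the example self-contained, and a real deployment would point the engine URL at MySQL or PostgreSQL instead.

```python
# Persisting instance state with SQLAlchemy (illustrative schema, not Nova's).
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

engine = create_engine("sqlite:///nova_sketch.db")  # swap for a MySQL/PostgreSQL URL
metadata = MetaData()

instances = Table(
    "instances", metadata,
    Column("id", Integer, primary_key=True),
    Column("host", String(255)),          # compute host the instance landed on
    Column("instance_type", String(64)),  # e.g. m1.small
    Column("state", String(32)),          # e.g. building, running, terminated
)
metadata.create_all(engine)

# Record a newly scheduled instance, then mark it running.
with engine.begin() as conn:
    conn.execute(instances.insert().values(
        host="compute-01", instance_type="m1.small", state="building"))
    conn.execute(instances.update()
                 .where(instances.c.host == "compute-01")
                 .values(state="running"))
```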

This logical architecture represents just one way to architect OpenStack Nova. With its pluggable architecture, we could easily swap out OpenStack Glance for another image service or use another dashboard. In the coming releases of OpenStack, expect to see more modularization of the code, especially in the network and volume areas.

Nova Conceptual Mapping

Now that we’ve seen a conceptual architecture for a fictional cloud provider and examined the logical architecture of OpenStack Nova, it is fairly easy to map the OpenStack components to the conceptual areas to see what we are lacking:

OpenStack Nova conceptual coverage

As you can see from the illustration, I’ve overlaid logical components of OpenStack Nova, Glance and Dashboard to denote functional coverage. For each of the overlays, I’ve added the name of the logical component within the project that provides the functionality. While all of these judgements are highly subjective, you can see that we have a majority coverage of the functional areas with a few notable exceptions:

  • The largest gap in our functional coverage is logging and billing. At the moment, OpenStack Nova doesn’t have a billing component that can mediate logging events, rate the logs and create/present bills. That being said, most service providers will already have one (or many) of these so the focus is really on the logging and integration with billing. This could be remedied in a variety of ways: augmentations of the code (which should happen in the next release “Diablo”), integration with commercial products or services (perhaps Zuora) or custom log parsing.

  • Identity is also a point which will likely need to be augmented. Unless we are running a stock LDAP for our identity system, we will need to integrate our solution with OpenStack Nova. Having said that, this is true of almost all cloud solutions.

  • The customer portal will also be an integration point. While OpenStack Nova provides a user dashboard (to see running instances, launch new instances, etc.), it doesn’t provide an interface to allow application owners to sign up for service, track their bills and lodge trouble tickets. Again, this is probably something that is already in place at our imaginary service provider.
  • Ideally, the Admin API would replicate all functionality that we’d be able to do via the command line interface (which in this case is mostly exposed through the nova-manage command). This will get better in the Diablo release with the Admin API work.

  • Cloud monitoring and operations will be an important area of focus for our service provider. A key to any good operations approach is good tooling. While OpenStack Nova provides nova-instancemonitor, which tracks compute node utilization, we’re really going to need a number of third party tools for monitoring.

  • Policy is an extremely important area but very provider specific. Everything from quotas (which are supported) to quality of service (QoS) to privacy controls can fall under this. I’ve given OpenStack Nova partial coverage here, but that might vary depending on the intricacies of the provider’s needs. For the record, OpenStack Nova Cactus provides quotas for instances (number and cores used), volumes (size and number), floating IP addresses and metadata.

  • Scheduling within OpenStack Nova is fairly rudimentary for larger installations today. The pluggable scheduler supports chance (random host assignment), simple (least loaded) and zone (random nodes within an availability zone). As with most areas on this list, this will be greatly augmented in Diablo. In development are distributed schedulers and schedulers that understand heterogeneous hosts (for support of GPUs and differing CPU architectures).

As you can see, OpenStack Nova provides a fair basis for our mythical service provider, as long as we are willing to do some integration here and there. In my next post, I’ll dive deeper into OpenStack Nova with a discussion on deployment architecture choices.
