Abstract
Among the current core projects of OpenStack, Nova is the core of the cores. As the OpenStack website describes it, Nova is a cloud computing fabric controller, the main part of an IaaS system. The Nova project comprises more than 20 binaries. Among them, nova-scheduler is responsible, among other duties, for deciding which compute node host should launch an image instance (a "server" in OpenStack terms). This article describes how this component does its job together with the other components, and how it makes its decision when faced with more than one compute node host for an instantiation request.
Overview
As stated on the OpenStack website, OpenStack's mission is to produce the ubiquitous open source cloud computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. In such cloud environments there is usually more than one compute node on which image instances can be launched. How to manage and measure these compute nodes is a prominent problem, and so is how to pick one of them, based on the collected data, in response to a user's request.
As the figure above shows, nova-scheduler interacts with the other components through the queue and the central database. For scheduling, the queue is the essential communications hub.
All compute nodes (also known as hosts in OpenStack terms) periodically publish their status, available resources and hardware capabilities to nova-scheduler through the queue. nova-scheduler collects this data and uses it to make decisions when a request comes in.
The picture above shows the general idea of how the scheduler does its main job. The whole process is divided into two phases. The filtering phase generates a list of suitable hosts by applying filters. The weighting phase sorts those hosts according to their weighted cost scores, which are computed by applying a set of cost functions. The sorted list of hosts is the set of candidates to fulfill the user's request; how many of them are actually used depends on the number of instances asked for in the request.
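The two phases can be summarized in a minimal, self-contained sketch (all names below are illustrative, not the actual nova-scheduler API):

# Minimal sketch of the two-phase scheduling idea. All names here are
# illustrative; the real nova-scheduler API differs in its details.
def schedule(hosts, request, filters, weighted_cost_fns):
    # Phase 1: filtering - keep only hosts that pass every filter.
    candidates = [host for host in hosts
                  if all(f(host, request) for f in filters)]

    # Phase 2: weighting - score each candidate with the weighted sum
    # of all cost functions; the lowest total cost comes first.
    def total_cost(host):
        return sum(weight * fn(host, request)
                   for weight, fn in weighted_cost_fns)

    return sorted(candidates, key=total_cost)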
Following this overview, the rest of this article describes:
1. What an instantiation request looks like and how it reaches nova-scheduler;
2. What the main components of nova-scheduler are;
3. How those components collaborate to finish the scheduling work.
Scheduler Invocation
To depict the work of nova-scheduler, we first have to say a little about nova-api. Like the other nova binaries, nova-api is a WSGI server. Python Routes is used to map RESTful URLs onto the internal controllers' methods.

The figure above shows the main methods involved when nova-api responds to an image instantiation request. First the HTTP request is mapped to the Controller's create() method. This method processes the request body and invokes compute_api's create() method, which in turn calls _create_instance(). Finally, _schedule_run_instance() calls rpc_method() to put a message onto the queue for nova-scheduler.
The rpc_method is called as follows:
rpc_method(context, FLAGS.scheduler_topic,
           {"method": "run_instance",
            "args": {"topic": FLAGS.compute_topic,
                     "request_spec": request_spec,
                     "admin_password": admin_password,
                     "injected_files": injected_files,
                     "requested_networks": requested_networks,
                     "is_first_time": True,
                     "filter_properties": filter_properties}})
In this RPC message, the injected_files field holds files that will be injected into the VM disk image, while requested_networks carries network information, such as which network(s) the instance(s) will use.
The other two parts of the message, request_spec and filter_properties, need further explanation. The following command is used to generate the sample values shown in the next sections:
nova boot --image a3fb743d-42df-49ba-b9c4-8042ebbd344e --flavor 1 myserver --hint test=testvalue --availability_zone=myzone::testhost
filter_properties
The filter_properties part of the RPC message carries data that helps nova-scheduler narrow its choice. Normally it contains the scheduler_hints information from the user request. The previous nova boot command produces content like this:
filter_properties: {
    'scheduler_hints': {
        'force_hosts': [u'testhost'],
        u'test': u'testvalue'
    }
}
If our availability_zone argument matches the pattern zone:xx:host (as myzone::testhost does, with an empty middle part), a force_hosts field appears in scheduler_hints. force_hosts targets the request directly at the given host(s), bypassing the scheduler's filters. scheduler_hints can also contain ignore_hosts, which lists hosts to be skipped during scheduling. Both force_hosts and ignore_hosts are applied before the filters run; see the section Inside of FilterScheduler below.
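Their combined effect can be sketched as follows (a hypothetical helper mirroring the behavior just described, not the actual nova code):

def apply_host_hints(hosts, scheduler_hints):
    # Hypothetical helper: apply ignore_hosts / force_hosts from
    # scheduler_hints to a list of host names before regular filtering.
    ignore = set(scheduler_hints.get('ignore_hosts', []))
    hosts = [h for h in hosts if h not in ignore]  # skip ignored hosts
    force = scheduler_hints.get('force_hosts')
    if force:
        # Only hosts named in force_hosts survive; in the real
        # scheduler they then bypass the regular filters entirely.
        hosts = [h for h in hosts if h in set(force)]
    return hosts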
request_spec
The request_spec part of the RPC message is an encapsulation, or normalization, of the HTTP request.
Below is the sample content of request_spec in the RPC message produced by the previous nova boot command.
{
    'num_instances': 1,
    'block_device_mapping': [],
    'image': {
        'status': 'active',
        'name': 'cirros_blank',
        'deleted': False,
        'container_format': 'ami',
        'created_at': '2012-04-05 14:26:24',
        'disk_format': 'ami',
        'updated_at': '2012-04-05 14:26:25',
        'properties': {
            'kernel_id': '46bf134e-2e6e-472a-a159-f4cd51f36d84',
            'ramdisk_id': '106dc550-783e-4de7-951d-f4f3d5427698'
        },
        'min_ram': '0',
        'checksum': '2f81976cae15c16ef0010c51e3a6c163',
        'min_disk': '0',
        'is_public': True,
        'deleted_at': None,
        'id': 'a3fb743d-42df-49ba-b9c4-8042ebbd344e',
        'size': 25165824
    },
    'instance_type': {
        'root_gb': 0L,
        'name': u'm1.tiny',
        'deleted': False,
        'created_at': None,
        'ephemeral_gb': 0L,
        'updated_at': None,
        'memory_mb': 512L,
        'vcpus': 1L,
        'flavorid': u'1',
        'swap': 0L,
        'rxtx_factor': 1.0,
        'extra_specs': {},
        'deleted_at': None,
        'vcpu_weight': None,
        'id': 2L
    },
    'instance_properties': {  # used to consume virtual resources
        'vm_state': 'building',
        'ephemeral_gb': 0L,
        'access_ip_v6': None,
        'access_ip_v4': None,
        'kernel_id': '46bf134e-2e6e-472a-a159-f4cd51f36d84',
        'key_name': None,
        'ramdisk_id': '106dc550-783e-4de7-951d-f4f3d5427698',
        'instance_type_id': 2L,
        'user_data': '',
        'vm_mode': None,
        'display_name': u'myserver',
        'config_drive_id': '',
        'reservation_id': 'r-bdbnl7aa',
        'key_data': None,
        'root_gb': 0L,
        'user_id': u'81ced34d11954800906096555539c885',
        'uuid': u'4ccc7c93-cbde-4233-a7cb-5db81f82489b',
        'root_device_name': None,
        'availability_zone': u'myzone',  # defaults to FLAGS.default_schedule_zone
        'launch_time': '2012-04-11T15:08:55Z',
        'metadata': {},
        'display_description': u'myserver',
        'memory_mb': 512L,
        'launch_index': 0,
        'vcpus': 1L,
        'locked': False,
        'image_ref': u'a3fb743d-42df-49ba-b9c4-8042ebbd344e',
        'architecture': None,
        'power_state': 0,
        'auto_disk_config': None,
        'progress': 0,
        'os_type': None,
        'project_id': u'9d049e4b60b64716978ab415e6fbd5c0',
        'config_drive': ''
    },
    'security_group': ['default']
}
Some values in instance_properties are copied from instance_type, and both copies play a role in the scheduler's work. The resource-related fields, such as memory_mb, vcpus, root_gb and ephemeral_gb, are the most important to nova-scheduler; most of them are used in filters and cost functions.
Nova-scheduler class diagram

Before diving into how nova-scheduler deals with the instantiation request message, we had better have a look at the data structures it uses.
As the figure above shows, many classes and modules work together in nova-scheduler:
1. SchedulerManager sits between the queue and the other nova-scheduler components. It receives requests from the queue and delegates the work to its driver. The driver is defined by the configuration option FLAGS.scheduler_driver, with "nova.scheduler.multi.MultiScheduler" as the default value.
2. Scheduler, the parent class of all the other schedulers, has compute_api and host_manager attributes. The value of compute_api is nova.compute.api.API. The value of host_manager is defined by FLAGS.scheduler_host_manager, with "nova.scheduler.host_manager.HostManager" as the default value.
3. FilterScheduler is responsible for selecting hosts and provisioning resources. It chooses hosts by applying filters and calculating weighted costs; the host that passes the filters with the least cost wins.
4. ChanceScheduler chooses a host randomly from the running hosts.
5. SimpleScheduler chooses a host based on the number of running cores; the host with the fewest running cores wins.
6. API is the compute API, used to call the API service of OpenStack compute.
7. HostManager collects and saves host data.
8. Module least_cost contains the cost functions and the WeightedHost class.
9. WeightedHost is a value object with weight and host_state as its two fields (see the sketch after this list).
10. HostState records a host's capabilities and the virtual consumption of its resources during the processing of one request (see the sketch after this list).
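For orientation, the two value objects can be pictured roughly like this (a sketch based on the descriptions above; the real classes carry considerably more state):

class HostState(object):
    """Rough shape of HostState: capabilities plus the resources that
    are virtually consumed while one request is being scheduled."""
    def __init__(self, host, free_ram_mb=0, vcpus_total=0, vcpus_used=0):
        self.host = host
        self.free_ram_mb = free_ram_mb
        self.vcpus_total = vcpus_total
        self.vcpus_used = vcpus_used

class WeightedHost(object):
    """Value object pairing a weight (cost) with a HostState."""
    def __init__(self, weight, host_state):
        self.weight = weight
        self.host_state = host_state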
Inside of nova-scheduler

Inside of FilterScheduler
By default, compute-related scheduling requests come to this class. Its schedule_run_instance() method takes control to fulfill the user request.
This method selects hosts on which to instantiate the image. By design, one instantiation request can ask for more than one instance, so the method returns a list of WeightedHosts sorted by weight, least weight first. Before calling the filters and cost functions, it also populates filter_properties with more data, such as request_spec, config_options and instance_type.
1.1 get_cost_functions()
This method loads the cost functions to apply, together with their weights, from the configuration:

###### (FloatOpt) How much weight to give the fill-first cost function. A negative value will reverse behavior: e.g. spread-first
compute_fill_first_cost_fn_weight=-1.0
###### (ListOpt) Which cost functions the LeastCostScheduler should use
least_cost_functions="nova.scheduler.least_cost.compute_fill_first_cost_fn"
###### (FloatOpt) How much weight to give the noop cost function
noop_cost_fn_weight=1.0
def compute_fill_first_cost_fn(host_state, weighing_properties):
    """Return the host's free RAM as its cost. With a positive
    weight, hosts with less free RAM are preferred (fill-first);
    the default negative weight reverses this to spread-first."""
    return host_state.free_ram_mb
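An additional cost function follows the same signature. For example, a hypothetical function preferring lightly loaded hosts could look like this; it would then be listed in least_cost_functions and given its own weight option:

def compute_fewest_vcpus_cost_fn(host_state, weighing_properties):
    """Hypothetical cost function: more used vCPUs = higher cost, so
    with a positive weight, lightly loaded hosts are preferred."""
    return host_state.vcpus_used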
1.2 get_all_host_states()
It returns a dict of all the hosts the HostManager knows about. Each consumable resource in every HostState is pre-populated and adjusted based on the HostManager's capabilities data. A sample of the returned dict looks like {"host1": hoststate for host1, "host2": hoststate for host2, ...}. See the later sections for details on HostState.
1.3 filter_hosts()
The available filters are defined by FLAGS.scheduler_available_filters, with "nova.scheduler.filters.standard_filters" as the default value. In effect, the filters package path is traversed and a list of filter classes is returned. As of this writing, the list is:
'nova.scheduler.filters.isolated_hosts_filter.IsolatedHostsFilter'
'nova.scheduler.filters.compute_filter.ComputeFilter'
'nova.scheduler.filters.availability_zone_filter.AvailabilityZoneFilter'
'nova.scheduler.filters.ram_filter.RamFilter'
'nova.scheduler.filters.json_filter.JsonFilter'
'nova.scheduler.filters.all_hosts_filter.AllHostsFilter'
'nova.scheduler.filters.core_filter.CoreFilter'
'nova.scheduler.filters.affinity_filter.AffinityFilter'
'nova.scheduler.filters.affinity_filter.DifferentHostFilter'
'nova.scheduler.filters.affinity_filter.SameHostFilter'
'nova.scheduler.filters.affinity_filter.SimpleCIDRAffinityFilter'
The filters actually used are defined by FLAGS.scheduler_default_filters, with "AvailabilityZoneFilter,RamFilter,ComputeFilter" as the default value.
Each filter defines a host_passes() function, which receives a HostState and filter_properties as parameters and returns a bool indicating whether the host described by the HostState is a good candidate according to this filter.
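As an illustration, a minimal filter honoring the host_passes() contract might look like the following (a hypothetical example, not one of the shipped filters):

class MinimalRamFilter(object):
    """Hypothetical filter: a host passes only if it has enough free
    RAM for the requested instance type."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type', {})
        requested_ram = instance_type.get('memory_mb', 0)
        # Returning True keeps this host in the candidate list.
        return host_state.free_ram_mb >= requested_ram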
1.4 passes_filters()
For each HostState object, filter_hosts() calls its passes_filters() method to check whether the host passes all the defined filters. Before going through the filters, this method checks whether the host complies with the rules defined by the force_hosts and ignore_hosts fields of scheduler_hints. If ignore_hosts exists and the host represented by the HostState is in that list, the host fails. If force_hosts exists, the host passes only if it is named in force_hosts, and in that case the regular filters are skipped. Otherwise, a host that has not been filtered out goes through the filters until one of them fails; if all filters pass, the host moves on to the next phase, cost weighting.
1.5 weighted_sum()
Conceptually, it builds a matrix of scores by applying every cost function to every host:

         fn#1       fn#2       ...  fn#n
Host1    Score#1_1  Score#1_2  ...  Score#1_n
Host2    Score#2_1  Score#2_2  ...  Score#2_n
...
Hostn    Score#n_1  Score#n_2  ...  Score#n_n
Final score of Host#i = ∑_j (weight of fn#j × Score#i_j)
Finally, it sorts the (score, host) tuples and returns a WeightedHost built from the first tuple; this way the least-cost host wins.
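In sketch form, the whole of weighted_sum behaves roughly like this (simplified; names follow the earlier sketches rather than the exact nova code):

def weighted_sum(host_states, weighted_fns, weighing_properties):
    """Score every host with every (weight, cost_fn) pair and return
    the (total_cost, host_state) pair with the lowest total, which
    the scheduler then wraps in a WeightedHost."""
    def total_cost(host_state):
        return sum(weight * fn(host_state, weighing_properties)
                   for weight, fn in weighted_fns)
    return min(((total_cost(hs), hs) for hs in host_states),
               key=lambda pair: pair[0])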
1.6 getHostState()
It returns the HostState object from the selected WeightedHost.
1.7 consume_from_instance()
It takes instance_properties, which comes from request_spec, as its parameter. The function adjusts the HostState object's data to virtually consume the resources, so that the HostState reflects an up-to-date view of the host when it enters the next round of host choosing for this request's next instance.
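A simplified version of that adjustment (field names follow the HostState sketch earlier; the real method updates more resources than these two):

def consume_from_instance(host_state, instance_properties):
    # Virtually claim the resources, so the next instance in the same
    # request sees an up-to-date view of this host.
    host_state.free_ram_mb -= instance_properties['memory_mb']
    host_state.vcpus_used += instance_properties['vcpus']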
2 _provision_resource()
This function provisions the requested resource, such as an image instance, on the chosen host.
2.1 cast_to_compute_host()
This helper hands the request over to the chosen host: it casts the run_instance message onto that host's compute queue, where nova-compute picks it up and actually spawns the instance.
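Conceptually, the cast looks like this (a sketch; the rpc parameter stands in for nova's rpc module, and the real helper also records the chosen host on the instance in the database):

def cast_to_compute_host(rpc, context, host, method, **kwargs):
    # Build the per-host compute topic, e.g. "compute.host1", and fire
    # an asynchronous cast; no reply is awaited.
    topic = 'compute.%s' % host
    rpc.cast(context, topic, {'method': method, 'args': kwargs})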
Scheduler’s intelligent data - Host Capabilities
1. To Collect Capabilities
ComputeManager's _report_driver_status() method is a periodic task which calls update_service_capabilities() to refresh the capabilities. The actual collection is done by LibvirtConnection (there are other connection types; which one is used depends on the configuration in nova.conf and is usually hypervisor dependent), whose get_host_stats() method gathers the host capabilities.
One sample of capabilities data looks like:
{
    u'disk_available': 226,
    u'cpu_info': {
        u'vendor': u'Intel',
        u'model': u'Westmere',
        u'arch': u'x86_64',
        u'features': [u'rdtscp', u'x2apic', u'xtpr', u'tm2', u'est',
                      u'vmx', u'ds_cpl', u'monitor', u'pbe', u'tm',
                      u'ht', u'ss', u'acpi', u'ds', u'vme'],
        u'topology': {
            u'cores': u'2',
            u'threads': u'2',
            u'sockets': u'1'
        }
    },
    u'hypervisor_type': u'QEMU',
    u'vcpus_used': 0,
    u'vcpus': 4,
    u'host_memory_free': 1718,
    u'disk_total': 375,
    u'host_memory_total': 3845,
    u'hypervisor_version': 15000,
    u'disk_used': 149
}
2. To Publish Capabilities
The publish_service_capabilities() method is another periodic task of ComputeManager. It delegates to the scheduler API to send the capabilities out onto the message queue. Besides the capabilities themselves, the message carries the topic 'scheduler', the service name 'compute' and the hostname.
3. To Receive Capabilities
When the message appears on the queue, nova-scheduler picks it up and calls the SchedulerManager, which calls the Scheduler's update_service_capabilities() method, which in turn invokes the HostManager's update_service_capabilities() method. After that, the capabilities data of the given service on the given host is kept by the HostManager until the next update.
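The net effect on the scheduler side can be pictured as simple dictionary bookkeeping (a sketch, not the exact HostManager code):

class CapabilityStore(object):
    """Sketch of HostManager's bookkeeping: the latest capabilities
    reported for each (host, service) pair, overwritten on update."""

    def __init__(self):
        self.service_states = {}  # {host: {service_name: capabilities}}

    def update_service_capabilities(self, service_name, host, capabilities):
        # Keep only the most recent report for this service on this host.
        self.service_states.setdefault(host, {})[service_name] = capabilities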
Summary
To wrap up: as soon as a cloud scales to even two hosts, the scheduler plays a role, and the more hosts there are in a cloud, the more important the scheduler becomes. Among all the inputs to nova-scheduler, three are most important: the configuration in nova.conf, the service capabilities of each host, and the request spec. The configuration in nova.conf determines the static and run-time class structure, the service capabilities serve as the base intelligence data, and the request spec is the service target. nova-scheduler can target certain hosts and skip others according to the request spec. In addition to the hosts specified in the request spec, the zone concept can also help the scheduler distribute requests among a zone's member hosts. Knowing these internals, the default behavior of nova-scheduler can easily be modified in nova.conf.