Placement API

Overview

Nova introduced the placement API service in the 14.0.0 Newton release. This is a separate REST API stack and data model used to track resource provider inventories and usages, along with different classes of resources. For example, a resource provider can be a compute node, a shared storage pool, or an IP allocation pool. The placement service tracks the inventory and usage of each provider. For example, an instance created on a compute node may be a consumer of resources such as RAM and CPU from a compute node resource provider, disk from an external shared storage pool resource provider and IP addresses from an external IP pool resource provider.

The types of resources consumed are tracked as classes. The service provides a set of standard resource classes (for example DISK_GB, MEMORY_MB, and VCPU) and provides the ability to define custom resource classes as needed.

Each resource provider may also have a set of traits which describe qualitative aspects of the resource provider. Traits describe aspects of a resource provider that cannot themselves be consumed but that a workload may wish to specify. For example, the available disk may be a solid state drive (SSD).

Deployment

The placement-api service must be deployed at some point after you have upgraded to the 14.0.0 Newton release but before you can upgrade to the 15.0.0 Ocata release. This is so that the resource tracker in the nova-compute service can populate resource provider (compute node) inventory and allocation information which will be used by the nova-scheduler service in Ocata.

Steps

1. Deploy the API service

At this time the placement API code is still in Nova alongside the compute REST API code (nova-api). So once you have upgraded nova-api to Newton you already have the placement API code; you just need to deploy the service. Nova provides a nova-placement-api WSGI script for running the service with Apache, nginx or other WSGI-capable web servers. Depending on what packaging solution is used to deploy OpenStack, the WSGI script may be in /usr/bin or /usr/local/bin.

Note

The placement API service is currently developed within Nova but it is designed to be as separate as possible from the existing code so that it can eventually be split into a separate project.

nova-placement-api, as a standard WSGI script, provides a module level application attribute that most WSGI servers expect to find. This means it is possible to run it with lots of different servers, providing flexibility in the face of different deployment scenarios. Common scenarios include:

  • apache2 with mod_wsgi
  • apache2 with mod_proxy_uwsgi
  • nginx with uwsgi
  • nginx with gunicorn

In all of these scenarios the host, port and mounting path (or prefix) of the application is controlled in the web server's configuration, not in the configuration (nova.conf) of the placement application.
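
As an illustration only, a minimal apache2 mod_wsgi configuration might look like the following; the port, script path, user/group and process settings are assumptions that will vary by distribution and deployment:

# Illustrative values; adjust the port, paths, user and group for your deployment.
Listen 8778
<VirtualHost *:8778>
  WSGIScriptAlias / /usr/bin/nova-placement-api
  WSGIDaemonProcess placement processes=2 threads=10 user=nova group=nova
  WSGIProcessGroup placement
  <Directory /usr/bin>
    Require all granted
  </Directory>
</VirtualHost>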

When placement was first added to DevStack it used the mod_wsgi style. Later it was updated to use mod_proxy_uwsgi. Looking at those changes can be useful for understanding the relevant options.

DevStack is configured to host placement at /placement on either the default port for http or for https (80 or 443) depending on whether TLS is being used. Using a default port is desirable.

By default, the placement application will get its configuration for settings such as the database connection URL from /etc/nova/nova.conf. The directory the configuration file will be found in can be changed by setting OS_PLACEMENT_CONFIG_DIR in the environment of the process that starts the application.
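
For example, to have the service read its configuration from a hypothetical /etc/placement directory instead:

# The directory is illustrative; set this in the environment of the process
# that starts the WSGI application.
export OS_PLACEMENT_CONFIG_DIR=/etc/placement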

Note

When using uwsgi with a front end (e.g., apache2 or nginx) something needs to ensure that the uwsgi process is running. In DevStack this is done with systemd. This is one of many different ways to manage uwsgi.
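
A sketch of a uwsgi configuration for such a setup; the socket address, process counts and script path are assumptions:

[uwsgi]
; illustrative values only
wsgi-file = /usr/local/bin/nova-placement-api
socket = 127.0.0.1:60001
processes = 2
threads = 10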

This document refrains from declaring a set of installation instructions for the placement service. This is because a major point of having a WSGI application is to make the deployment as flexible as possible. Because the placement API service is itself stateless (all state is in the database), it is possible to deploy as many servers as desired behind a load balancing solution for robust and simple scaling. If you familiarize yourself with installing generic WSGI applications (using the common scenarios listed above), those techniques will be applicable here.

2. Synchronize the database

In the Newton release the Nova API database is the only deployment option for the placement API service and the resources it manages. After upgrading the nova-api service for Newton and running the nova-manage api_db sync command, the placement tables will be created.
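
For example, on a host where the nova-api service is configured:

$ nova-manage api_db sync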

Note

There are plans to add the ability to run the placement service with a separate placement database that would contain just the tables necessary for that service and not everything else that goes into the Nova API database.

3. Create accounts and update the service catalog

Create a placement service user with an admin role in Keystone.

The placement API is a separate service and thus should be registered under a placement service type in the service catalog as that is what the resource tracker in the nova-compute node will use to look up the endpoint.

DevStack sets up the placement service on the default HTTP port (80) with a /placement prefix instead of using an independent port.
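
A typical sequence with the openstack client might look like the following; the domain, region and endpoint URL are illustrative and will vary by deployment:

$ openstack user create --domain default --password-prompt placement
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://controller/placement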

4. Configure and restart nova-compute services

The 14.0.0 Newton nova-compute service code will begin reporting resource provider inventory and usage information as soon as the placement API service is in place and can respond to requests via the endpoint registered in the service catalog.

nova.conf on the compute nodes must be updated in the [placement] group to contain credentials for making requests from nova-compute to the placement-api service.
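
A minimal [placement] section might look like the following; all values shown are illustrative placeholders:

[placement]
# Illustrative values; use the credentials and endpoints for your deployment.
os_region_name = RegionOne
auth_type = password
auth_url = http://controller/identity/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS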

Note

After upgrading nova-compute code to Newton and restarting the service, the nova-compute service will attempt to make a connection to the placement API and if that is not yet available a warning will be logged. The nova-compute service will keep attempting to connect to the placement API, warning periodically on error until it is successful. Keep in mind that Placement is optional in Newton but required in Ocata, so the placement service should be enabled before upgrading to Ocata.

Upgrade Notes

The following sub-sections provide notes on upgrading to a given target release.

Note

As a reminder, the nova-status upgrade check tool can be used to help determine the status of your deployment and how ready it is to perform an upgrade.
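
For example:

$ nova-status upgrade check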

Ocata (15.0.0)

  • The nova-compute service will fail to start in Ocata unless the [placement] section of nova.conf on the compute is configured. As mentioned in the deployment steps above, the Placement service should be deployed by this point so the computes can register and start reporting inventory and allocation information. If the computes are deployed and configured before the Placement service, they will keep retrying the connection in a loop, so you do not need to restart the nova-compute process for it to start talking to the Placement service once the compute is properly configured.
  • The nova.scheduler.filter_scheduler.FilterScheduler in Ocata will fall back to not using the Placement service as long as there are older nova-compute services running in the deployment. This allows rolling upgrades of the computes to proceed without affecting scheduling for the FilterScheduler. However, the fallback mechanism will be removed in the 16.0.0 Pike release, at which point the scheduler will make decisions based on the Placement service and the resource providers (compute nodes) registered there. This means that if the computes are not reporting into Placement by Pike, build requests will fail with NoValidHost errors.
  • While the FilterScheduler technically depends on the Placement service in Ocata, if you deploy the Placement service after you upgrade the nova-scheduler service to Ocata and restart it, things will still work. The scheduler will gracefully handle the absence of the Placement service. However, once all computes are upgraded, the scheduler not being able to make requests to Placement will result in NoValidHost errors.
  • It is currently possible to exclude the CoreFilter, RamFilter and DiskFilter from the list of enabled FilterScheduler filters such that scheduling decisions are not based on CPU, RAM or disk usage. Once all computes are reporting into the Placement service, however, and the FilterScheduler starts to use the Placement service for decisions, excluding those filters no longer has that effect: the scheduler will request VCPU, MEMORY_MB and DISK_GB inventory regardless. If you wish to effectively ignore a resource type for placement decisions, you will need to adjust the corresponding cpu_allocation_ratio, ram_allocation_ratio and/or disk_allocation_ratio configuration options to a very high value, e.g. 9999.0 (see the example following this list).
  • Users of CellsV1 will need to deploy a placement per cell, matching the scope and cardinality of the regular nova-scheduler process.
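
As an example of the allocation ratio adjustment mentioned above, the options can be set in nova.conf; the values are illustrative, and in this era of Nova these options live in the [DEFAULT] section:

[DEFAULT]
# Effectively ignore CPU, RAM and disk usage for placement decisions.
cpu_allocation_ratio = 9999.0
ram_allocation_ratio = 9999.0
disk_allocation_ratio = 9999.0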

Pike (16.0.0)

  • The nova.scheduler.filter_scheduler.FilterScheduler in Pike will no longer fall back to not using the Placement Service, even if older computes are running in the deployment.

  • The FilterScheduler now requests allocation candidates from the Placement service during scheduling. The allocation candidates information was introduced in the Placement API 1.10 microversion, so you should upgrade the placement service before the Nova scheduler service so that the scheduler can take advantage of the allocation candidate information.

    The scheduler gets the allocation candidates from the placement API and uses those to get the compute nodes, which come from the cell(s). The compute nodes are passed through the enabled scheduler filters and weighers. The scheduler then iterates over this filtered and weighed list of hosts and attempts to claim resources in the placement API for each instance in the request. Claiming resources involves finding an allocation candidate that contains an allocation against the selected host's UUID and asking the placement API to allocate the requested instance resources. The scheduler continues making claim requests until one succeeds or it runs out of allocation candidates, in which case the result is a NoValidHost error.

    For a move operation, such as migration, allocations are made in Placement against both the source and destination compute node. Once the move operation is complete, the resource tracker in the nova-compute service will adjust the allocations in Placement appropriately.

    For a resize to the same host, allocations are summed on the single compute node. This could pose a problem if the compute node has limited capacity. Since resizing to the same host is disabled by default, and generally only used in testing, this is mentioned for completeness but should not be a concern for production deployments.

Queens (17.0.0)

  • The minimum Placement API microversion required by the nova-scheduler service is 1.17 in order to support Request Traits During Scheduling. This means you must upgrade the placement service before upgrading any nova-scheduler services to Queens.

REST API

The placement API service has its own REST API and data model. One can get a sample of the REST API via the functional test gabbits.

Microversions

The placement API uses microversions for making incremental changes to the API which client requests must opt into.

It is especially important to keep in mind that nova-compute is a client of the placement REST API. Because of how Nova supports rolling upgrades, the nova-compute service could be Newton-level code making requests to an Ocata placement API, and vice versa: an Ocata compute service in a cells v2 cell could be making requests to a Newton placement API.
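
Clients opt into a microversion with the OpenStack-API-Version header. For example, a request pinned to microversion 1.4 might look like the following; the token and endpoint are illustrative:

# Illustrative endpoint; the placement URL comes from the service catalog.
curl -H "X-Auth-Token: $TOKEN" \
     -H "OpenStack-API-Version: placement 1.4" \
     "http://controller/placement/resource_providers?resources=VCPU:2"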

REST API Version History

This documents the changes made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation.

1.0 (Maximum in Newton)

This is the initial version of the placement REST API that was released in Nova 14.0.0 (Newton). This contains the following routes:

  • /resource_providers
  • /resource_providers/{uuid}/allocations
  • /resource_providers/{uuid}/inventories
  • /resource_providers/{uuid}/usages
  • /allocations/{consumer_uuid}

1.1 Resource provider aggregates

The 1.1 version adds support for associating aggregates with resource providers with GET and PUT methods on one new route:

  • /resource_providers/{uuid}/aggregates

1.2 Custom resource classes

Placement API version 1.2 adds basic operations allowing an admin to create, list and delete custom resource classes.

The following new routes are added:

  • GET /resource_classes: return all resource classes
  • POST /resource_classes: create a new custom resource class
  • PUT /resource_classes/{name}: update the name of a custom resource class
  • DELETE /resource_classes/{name}: delete a custom resource class
  • GET /resource_classes/{name}: get a single resource class

Custom resource classes must begin with the prefix "CUSTOM_" and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character.
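
For example, creating a custom resource class (the name is illustrative):

POST /resource_classes
{"name": "CUSTOM_BAREMETAL_GOLD"}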

1.3 member_of query parameter

Version 1.3 adds support for listing resource providers that are members of any of the aggregates provided via the member_of query parameter:

  • /resource_providers?member_of=in:{agg1_uuid},{agg2_uuid},{agg3_uuid}

1.4 Filter resource providers by requested resource capacity (Maximum in Ocata)

The 1.4 version adds support for querying resource providers that have the ability to serve a requested set of resources. A new "resources" query string parameter is now accepted by the GET /resource_providers API call. This parameter indicates the requested amounts of various resources that a provider must have the capacity to serve. The "resources" query string parameter takes the form:

?resources=$RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT

For instance, if the user wishes to see resource providers that can service a request for 2 vCPUs, 1024 MB of RAM and 50 GB of disk space, the user can issue a request to:

GET /resource_providers?resources=VCPU:2,MEMORY_MB:1024,DISK_GB:50

If the resource class does not exist, then the request will return an HTTP 400.

Note

The resources filtering is also based on the min_unit, max_unit and step_size of the inventory record. For example, if the max_unit is 512 for the DISK_GB inventory for a particular resource provider and a GET request is made for DISK_GB:1024, that resource provider will not be returned. The min_unit is the minimum amount of resource that can be requested for a given inventory and resource provider. The step_size is the increment of resource that can be requested for a given resource on a given provider.

1.5 DELETE all inventory for a resource provider

Placement API version 1.5 adds a DELETE method for removing all inventory for a resource provider. The following new method is supported:

  • DELETE /resource_providers/{uuid}/inventories

1.6 Traits API

The 1.6 version adds basic operations allowing an admin to create, list and delete custom traits, and to attach traits to a resource provider.

The following new routes are added:

  • GET /traits: return all traits
  • PUT /traits/{name}: insert a single custom trait
  • GET /traits/{name}: check if a trait name exists
  • DELETE /traits/{name}: delete the specified trait
  • GET /resource_providers/{uuid}/traits: return all traits associated with a specific resource provider
  • PUT /resource_providers/{uuid}/traits: set all the traits for a specific resource provider
  • DELETE /resource_providers/{uuid}/traits: remove any existing trait associations for a specific resource provider

Custom traits must begin with the prefix "CUSTOM_" and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character.
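
For example, replacing the set of traits on a provider might look like the following; the trait names and generation value are illustrative:

PUT /resource_providers/{uuid}/traits
{"resource_provider_generation": 0, "traits": ["HW_CPU_X86_AVX2", "CUSTOM_GOLD"]}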

1.7 Idempotent PUT /resource_classes/{name}

The 1.7 version changes handling of PUT /resource_classes/{name} to be a create or verification of the resource class with {name}. If the resource class is a custom resource class and does not already exist it will be created and a 201 response code returned. If the class already exists the response code will be 204. This makes it possible to check or create a resource class in one request.
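
For example, the first of these requests returns 201 and any repeat of it returns 204:

PUT /resource_classes/CUSTOM_BAREMETAL_GOLD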

1.8 Require placement 'project_id', 'user_id' in PUT /allocations

The 1.8 version adds project_id and user_id required request parameters to PUT /allocations.
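
A sketch of a request body at this microversion, with illustrative UUIDs and amounts (prior to microversion 1.12 the allocations property is an array):

PUT /allocations/{consumer_uuid}
{
    "allocations": [
        {
            "resource_provider": {"uuid": "<rp_uuid>"},
            "resources": {"VCPU": 1, "MEMORY_MB": 512}
        }
    ],
    "project_id": "<project_uuid>",
    "user_id": "<user_uuid>"
}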

1.9 Add GET /usages

The 1.9 version adds usages that can be queried by a project or project/user.

The following new routes are added:

GET /usages?project_id=<project_id>

Returns all usages for a given project.

GET /usages?project_id=<project_id>&user_id=<user_id>

Returns all usages for a given project and user.

1.10 Allocation candidates (Maximum in Pike)

The 1.10 version brings a new REST resource endpoint for getting a list of allocation candidates. Allocation candidates are collections of possible allocations against resource providers that can satisfy a particular request for resources.
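
For example, to ask for candidates able to serve a request for 1 vCPU, 512 MB of RAM and 10 GB of disk:

GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:10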

1.12 PUT dict format to /allocations/{consumer_uuid}

In version 1.12 the request body of a PUT /allocations/{consumer_uuid} is expected to have an object for the allocations property, not an array as with earlier microversions. This puts the request body more in alignment with the structure of the GET /allocations/{consumer_uuid} response body. Because the PUT request requires user_id and project_id in the request body, these fields are added to the GET response. In addition, the response body for GET /allocation_candidates is updated so the allocations in the allocation_requests object work with the new PUT format.
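
The same allocation as in the 1.8 sketch above, expressed in the new dict format (UUIDs and amounts are illustrative):

PUT /allocations/{consumer_uuid}
{
    "allocations": {
        "<rp_uuid>": {
            "resources": {"VCPU": 1, "MEMORY_MB": 512}
        }
    },
    "project_id": "<project_uuid>",
    "user_id": "<user_uuid>"
}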

1.13 POST multiple allocations to /allocations

Version 1.13 gives the ability to set or clear allocations for more than one consumer uuid with a request to POST /allocations.
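
A sketch of a request body that sets allocations for one consumer and clears them (with an empty allocations object) for another; all identifiers are illustrative:

POST /allocations
{
    "<consumer_uuid_1>": {
        "allocations": {
            "<rp_uuid>": {"resources": {"VCPU": 1}}
        },
        "project_id": "<project_uuid>",
        "user_id": "<user_uuid>"
    },
    "<consumer_uuid_2>": {
        "allocations": {},
        "project_id": "<project_uuid>",
        "user_id": "<user_uuid>"
    }
}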

1.14 Add nested resource providers

The 1.14 version introduces the concept of nested resource providers. The resource provider resource now contains two new attributes:

  • parent_provider_uuid indicates the provider's direct parent, or null if there is no parent. This attribute can be set in the call to POST /resource_providers and PUT /resource_providers/{uuid} if the attribute has not already been set to a non-NULL value (i.e. we do not support "reparenting" a provider)
  • root_provider_uuid indicates the UUID of the root resource provider in the provider's tree. This is a read-only attribute

A new in_tree=<UUID> parameter is now available in the GET /resource_providers API call. Supplying a UUID value for the in_tree parameter will cause all resource providers within the "provider tree" of the provider matching <UUID> to be returned.
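
For example, creating a child provider under an existing parent might look like the following; the name and UUID are illustrative:

POST /resource_providers
{"name": "numa_0", "parent_provider_uuid": "<parent_uuid>"}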

1.15 Add 'last-modified' and 'cache-control' headers

Throughout the API, 'last-modified' headers have been added to GET responses and those PUT and POST responses that have bodies. The value is either the actual last modified time of the most recently modified associated database entity or the current time if there is no direct mapping to the database. In addition, 'cache-control: no-cache' headers are added where the 'last-modified' header has been added to prevent inadvertent caching of resources.
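
For example, a GET response at this microversion carries headers along these lines (the timestamp is illustrative):

cache-control: no-cache
last-modified: Tue, 14 Nov 2017 12:00:00 GMT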

1.16 Limit allocation candidates

Add support for a limit query parameter when making a GET /allocation_candidates request. The parameter accepts an integer value, N, which limits the maximum number of candidates returned.
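
For example, to return at most five candidates:

GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512&limit=5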

1.17 Add 'required' parameter to the allocation candidates (Maximum in Queens)

Add the required parameter to the GET /allocation_candidates API. It accepts a comma-separated list of traits. The provider summaries in the response also include the attached traits.
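
For example, to restrict candidates to providers advertising an SSD disk trait (STORAGE_DISK_SSD is a standard os-traits name, used here for illustration):

GET /allocation_candidates?resources=VCPU:1,DISK_GB:10&required=STORAGE_DISK_SSD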
