HA Deployment

Overview

This section shows how to deploy Congress with High Availability (HA). For an architectural overview, please see the HA Overview.

An HA deployment of Congress involves five main steps.

  1. Deploy messaging and database infrastructure to be shared by all the Congress nodes.
  2. Prepare the hosts to run Congress nodes.
  3. Deploy N (at least 2) policy-engine nodes.
  4. Deploy one datasource-drivers node.
  5. Deploy a load-balancer to load-balance between the N policy-engine nodes.

The following sections describe each step in more detail.

Shared Services

All the Congress nodes share a database backend. To set up a database backend for Congress, please follow the database portion of the separate install instructions.
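
For example, on a MySQL backend the shared database and user might be created as follows (a sketch; the database name, user, and password are placeholders to adapt to your environment):

$ mysql -u root -p
mysql> CREATE DATABASE congress;
mysql> GRANT ALL PRIVILEGES ON congress.* TO 'congress'@'%' IDENTIFIED BY '<database-password>';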

Various solutions exist to avoid creating a single point of failure with the database backend.

Note: If a replicated database solution is used, it must support table locking. Galera, for example, would not work. This limitation is expected to be removed in the Ocata release.

A shared messaging service is also required. Refer to Shared Messaging for instructions for installing and configuring RabbitMQ.
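
For example, a dedicated RabbitMQ user for Congress can be created with the standard rabbitmqctl commands (the userid and password are placeholders matching the transport_url shown below):

$ rabbitmqctl add_user <rabbit-userid> <rabbit-password>
$ rabbitmqctl set_permissions <rabbit-userid> ".*" ".*" ".*"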

Host Preparation

Congress should be installed on each host expected to run a Congress node. Please follow the directions in the separate install instructions to install Congress on each host, skipping the local database portion.

In the configuration file, a transport_url should be specified to use the RabbitMQ messaging service configured in step 1.

For example:

[DEFAULT]
transport_url = rabbit://<rabbit-userid>:<rabbit-password>@<rabbit-host-address>:5672

In addition, the replicated_policy_engine option should be set to True.

[DEFAULT]
replicated_policy_engine = True

All hosts should be configured with a database connection that points to the shared database deployed in step 1, not the local address shown in separate install instructions.

For example:

[database]
connection = mysql+pymysql://root:<database-password>@<shared-database-ip-address>/congress?charset=utf8

Datasource Drivers Node

In this step, we deploy a single datasource-drivers node in warm-standby style.

The datasource-drivers node can be started directly with the following command:

$ python /usr/local/bin/congress-server --datasources --node-id=<unique_node_id>

A unique node-id (distinct from all the policy-engine nodes) must be specified.

For warm-standby deployment, an external manager is used to launch and manage the datasource-drivers node. In this document, we sketch how to deploy the datasource-drivers node with Pacemaker.

See the OpenStack High Availability Guide for general usage of Pacemaker and how to deploy the Pacemaker cluster stack. The guide also has some HA configuration guidance for other OpenStack projects.

Prepare OCF resource agent

You need a custom Resource Agent (RA) for DataSource Node HA. The custom RA is located in the Congress repository at /path/to/congress/script/ocf/congress-datasource. Install the RA with the following steps:

$ cd /usr/lib/ocf/resource.d
$ mkdir openstack
$ cd openstack
$ cp /path/to/congress/script/ocf/congress-datasource ./congress-datasource
$ chmod a+rx congress-datasource
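
To check that Pacemaker can see the new RA, you can query its metadata (assuming the crm shell from crmsh is installed):

$ crm ra info ocf:openstack:congress-datasource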

Configuring the Resource Agent

You can now add the Pacemaker configuration for the Congress DataSource Node resource. Connect to the Pacemaker cluster with the crm configure command and add the following cluster resource. After adding the resource, make sure to commit the change.

primitive ds-node ocf:openstack:congress-datasource \
   params config="/etc/congress/congress.conf" \
   node_id="datasource-node" \
   op monitor interval="30s" timeout="30s"
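
After entering the resource definition in the crm configure shell, committing and verifying the resource might look like the following (a sketch following crmsh conventions):

crm(live)configure# commit
crm(live)configure# exit
$ sudo crm status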

Make sure that all nodes in the cluster have the same config file, with the same name and path, since the DataSource Node resource (ds-node) uses the config file defined by the config parameter to launch the resource.
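
For example, the config file could be copied verbatim to the other cluster nodes (a hypothetical sketch; node2 is a placeholder hostname):

$ scp /etc/congress/congress.conf node2:/etc/congress/congress.conf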

The RA has the following configurable parameters.

  • config: the path of Congress’s config file.
  • node_id (optional): the node id of the datasource node. Default is “datasource-node”.
  • binary (optional): the path of the Congress binary. Default is “/usr/local/bin/congress-server”.
  • additional_parameters (optional): additional parameters to pass to congress-server.
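
For example, a resource definition that sets the optional parameters explicitly might look like this (the values shown are illustrative):

primitive ds-node ocf:openstack:congress-datasource \
   params config="/etc/congress/congress.conf" \
   node_id="my-datasource-node" \
   binary="/usr/local/bin/congress-server" \
   op monitor interval="30s" timeout="30s"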

Policy Engine Nodes

In this step, we deploy N (at least 2) policy-engine nodes, each with an associated API server. This step should be done only after the Datasource Drivers Node is deployed. Each node can be started as follows:

$ python /usr/local/bin/congress-server --api --policy-engine --node-id=<unique_node_id>

Each node must have a unique node-id specified as a command-line option.

For high availability, each node is usually deployed on a different host. If multiple nodes are to be deployed on the same host, each node must have a different port specified using the bind_port configuration option in the congress configuration file.
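
For example, a second node on the same host could be given its own port in its copy of the configuration file (a sketch; 1790 here is an arbitrary free port, assuming the default port is taken by the first node):

[DEFAULT]
bind_port = 1790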

Load-balancer

A load-balancer should be used to distribute incoming API requests to the N policy-engine (and API service) nodes deployed in step 3. It is recommended that a sticky configuration be used to avoid exposing a user to out-of-sync artifacts when the user hits different policy-engine nodes.

HAProxy is a popular load-balancer for this purpose. The HAProxy section of the OpenStack High Availability Guide has instructions for deploying HAProxy for high availability.
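
As a sketch, a minimal HAProxy configuration with source-IP stickiness might look like the following (the addresses are placeholders, and 1789 is assumed as the Congress API port; cookie-based stickiness is an alternative):

listen congress-api
    bind <virtual-ip-address>:1789
    balance source
    server congress-node-1 <host-1-address>:1789 check
    server congress-node-2 <host-2-address>:1789 check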
