This section shows how to deploy Congress with High Availability (HA). For an architectural overview, please see the HA Overview.
An HA deployment of Congress involves five main steps:

1. Deploy the shared infrastructure: a messaging service (e.g., RabbitMQ) and a shared database reachable by all hosts.
2. Install and configure Congress on each host.
3. Deploy N policy-engine nodes, each with an associated API server.
4. Deploy a single datasource-drivers node in warm-standby style.
5. Deploy a load-balancer in front of the policy-engine nodes.

The following sections describe each step in more detail.
Congress should be installed on each host expected to run a Congress node. Follow the directions in the separate install instructions to install Congress on each host, skipping the local database portion.
In the configuration file, a transport_url should be specified to use the RabbitMQ messaging service configured in step 1.
For example:
[DEFAULT]
transport_url = rabbit://<rabbit-userid>:<rabbit-password>@<rabbit-host-address>:5672
All hosts should be configured with a database connection that points to the shared database deployed in step 1, not the local address shown in separate install instructions.
For example:
[database]
connection = mysql+pymysql://root:<database-password>@<shared-database-ip-address>/congress?charset=utf8
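The shared database itself must be created once before any node is started. A minimal sketch, assuming root access to the shared MySQL server and the same placeholder credentials as above:

```shell
# Create the congress database on the shared server (run once).
mysql -u root -p<database-password> -h <shared-database-ip-address> \
    -e "CREATE DATABASE congress CHARACTER SET utf8;"

# Populate the schema, as in the standalone install instructions.
congress-db-manage --config-file /etc/congress/congress.conf upgrade head
```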
In this step, we deploy N (at least 2) policy-engine nodes, each with an associated API server. Each node can be started as follows:
$ python /usr/local/bin/congress-server --api --policy-engine --node-id=<unique_node_id>
Each node must have a unique node-id specified as a command-line option.
For high availability, each node is usually deployed on a different host. If multiple nodes are to be deployed on the same host, each node must have a different port specified using the bind_port configuration option in the congress configuration file.
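For example, if two policy-engine nodes share a host, the second node's configuration file can set a different port (a hypothetical fragment; 1789 is Congress's default API port):

```ini
[DEFAULT]
# congress.conf for the second co-located node;
# the first node keeps the default bind_port (1789)
bind_port = 1790
```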
In this step, we deploy a single datasource-drivers node in warm-standby style.
The datasource-drivers node can be started directly with the following command:
$ python /usr/local/bin/congress-server --datasources --node-id=<unique_node_id>
A unique node-id (distinct from all the policy-engine nodes) must be specified.
For warm-standby deployment, an external manager is used to launch and manage the datasource-drivers node. In this document, we sketch how to deploy the datasource-drivers node with Pacemaker.
See the OpenStack High Availability Guide for general usage of Pacemaker and how to deploy Pacemaker cluster stack. The guide also has some HA configuration guidance for other OpenStack projects.
You need a custom Resource Agent (RA) for DataSource Node HA. The custom RA is located in the Congress repository at /path/to/congress/script/ocf/congress-datasource. Install the RA with the following steps:
$ cd /usr/lib/ocf/resource.d
$ mkdir openstack
$ cd openstack
$ cp /path/to/congress/script/ocf/congress-datasource ./congress-datasource
$ chmod a+rx congress-datasource
You can now add the Pacemaker configuration for the Congress DataSource Node resource. Connect to the Pacemaker cluster with the crm configure command and add the following cluster resources. After adding the resource, make sure to commit the change.
primitive ds-node ocf:openstack:congress-datasource \
params config="/etc/congress/congress.conf" \
node_id="datasource-node" \
op monitor interval="30s" timeout="30s"
Make sure that all nodes in the cluster have the same config file, with the same name and path, since the DataSource Node resource (ds-node) uses the config file given in the config parameter to launch the resource.
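Once the change is committed, Pacemaker should start the datasource-drivers node on one cluster member. A quick way to verify this, assuming the crmsh tooling described in the OpenStack High Availability Guide:

```shell
# One-shot snapshot of cluster resources; ds-node should show as Started
crm_mon -1

# Query the ds-node resource directly
crm resource status ds-node
```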
The RA has the following configurable parameters.
A load-balancer should be used to distribute incoming API requests to the N policy-engine (and API service) nodes deployed in step 3. It is recommended that a sticky configuration be used to avoid exposing a user to out-of-sync artifacts when the user hits different policy-engine nodes.
HAProxy is a popular load-balancer for this purpose. The HAProxy section of the OpenStack High Availability Guide has instructions for deploying HAProxy for high availability.
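As a sketch, a sticky HAProxy configuration for the policy-engine nodes might look like the following hypothetical fragment (addresses are placeholders; 1789 is Congress's default API port; `balance source` pins each client IP to one backend, providing the sticky behavior recommended above):

```
listen congress_api
    bind <vip-address>:1789
    balance source
    option httpchk GET /
    server congress1 <node1-ip>:1789 check
    server congress2 <node2-ip>:1789 check
```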