Congress has two modes for deployment: single-process and multi-process. If you are interested in test-driving Congress or are not concerned about high availability, the single-process deployment is best because it is easiest to set up. If you want to make Congress highly available, use the multi-process deployment.
In the single-process version, you run Congress as a single operating-system process on one node (i.e., a container, VM, or physical machine).
In the multi-process version, you start with the three components of Congress (the API, the policy engine, and the datasource drivers). You choose how many copies of each component to run, how to distribute those components across processes, and how to distribute those processes across nodes.
Section Configuration Options describes the configuration options common to both single-process and multi-process deployments. After that, HA Overview and HA Deployment describe how to set up the multi-process deployment.
In this section we highlight the configuration options that are specific to Congress. To generate a sample configuration file that lists all available options along with descriptions, run the following commands:
$ cd /path/to/congress
$ tox -egenconfig
The tox command will create the file etc/congress.conf.sample, which has a comprehensive list of options. All options have default values, which means that even if you specify no options, Congress will run.
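For example, assuming the defaults are acceptable, a minimal way to install the config and start the server might look like this (the /etc/congress path is illustrative; --config-file is the standard oslo.config flag):

$ cp etc/congress.conf.sample /etc/congress/congress.conf
$ congress-server --config-file /etc/congress/congress.conf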
The options most important to Congress are described below, all of which appear under the [DEFAULT] section of the configuration file.
One of Congress’s new experimental features is distributing its services across multiple processes and even hosts. Here are the options for using that feature.
Here are the most often-used standard OpenStack options. These are also specified in the [DEFAULT] section of the configuration file.
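As a sketch, a congress.conf fragment that sets a few of these options might look like the following; the values are illustrative, and the exact set of available options depends on your Congress version (check etc/congress.conf.sample):

[DEFAULT]
# Address and port the Congress API listens on
bind_host = 0.0.0.0
bind_port = 1789
# Authentication strategy: keystone or noauth
auth_strategy = keystone
# Standard OpenStack logging options
debug = false
log_dir = /var/log/congress
# Message bus used by the distributed services (standard oslo.messaging option)
transport_url = rabbit://guest:guest@controller:5672/

[database]
# Database connection string (standard oslo.db option)
connection = mysql+pymysql://congress:secret@controller/congress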
Some applications require Congress to be highly available. Some applications require a Congress Policy Engine (PE) to handle a high volume of queries. This guide describes Congress support for High Availability (HA) and High Throughput (HT) deployment.
Please see the OpenStack High Availability Guide for details on how to install and configure OpenStack for High Availability.
Warm Standby is when a software component is installed and available on the secondary node. The secondary node is up and running. In the case of a failure on the primary node, the software component is started on the secondary node. This process is usually automated using a cluster manager. Data is regularly mirrored to the secondary system using disk-based replication or shared disk. This generally provides a recovery time of a few minutes.
In the Active-Active method, both the primary and secondary systems are active and processing requests in parallel. Data replication happens through software capabilities and is bi-directional. This generally provides a recovery time that is instantaneous.
Congress provides Active-Active for the Policy Engine and Warm Standby for the Datasource Drivers.
Run N instances of the Congress Policy Engine in an active-active configuration, so that all instances are active and processing requests in parallel.
One Datasource Driver (DSD) per physical datasource, publishing data on oslo-messaging to all policy engines.
+-------------------------------------+      +--------------+
|    Load Balancer (eg. HAProxy)      | <----+ Push client  |
+----+-------------+-------------+----+      +--------------+
     |             |             |
PE   |        PE   |        PE   |         all+DSDs node
+---------+   +---------+   +---------+  +-----------------+
| +-----+ |   | +-----+ |   | +-----+ |  | +-----+ +-----+ |
| | API | |   | | API | |   | | API | |  | | DSD | | DSD | |
| +-----+ |   | +-----+ |   | +-----+ |  | +-----+ +-----+ |
| +-----+ |   | +-----+ |   | +-----+ |  | +-----+ +-----+ |
| | PE  | |   | | PE  | |   | | PE  | |  | | DSD | | DSD | |
| +-----+ |   | +-----+ |   | +-----+ |  | +-----+ +-----+ |
+---------+   +---------+   +---------+  +--------+--------+
     |             |             |                |
     |             |             |                |
     +--+----------+-------------+----------------+
        |                                         |
        |                                         |
+-------+----+           +------------------------+-----------------+
|  Oslo Msg  |           | DBs (policy, config, push data, exec log)|
+------------+           +------------------------------------------+
Different PE instances may be out-of-sync in their data and policies (eventual consistency). The issue is generally made transparent to the end user by making each user sticky to a particular PE instance. But if a PE instance goes down, the end user reaches a different instance and may experience out-of-sync artifacts.
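For example, stickiness can be implemented at the load balancer with source-IP hashing. A minimal HAProxy sketch follows; the backend addresses are illustrative, and 1789 is assumed to be the Congress API port:

frontend congress-api
    bind *:1789
    default_backend congress-pe

backend congress-pe
    # Pin each client to one PE instance so it sees a consistent view
    balance source
    server pe1 10.0.0.11:1789 check
    server pe2 10.0.0.12:1789 check
    server pe3 10.0.0.13:1789 check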
This section shows how to deploy Congress with High Availability (HA). An HA deployment divides Congress into two parts. The first part is the API and Policy Engine node, which is replicated in the active-active style. The other part is the DataSource node, which is deployed in the warm-standby style. Please see the HA Overview, including the architecture diagram above, for details.
New config settings select the DSE node type:
N nodes (N >= 2 is fine) of the PE+API node type:
$ python /usr/local/bin/congress-server --api --policy-engine --node-id=<api_unique_id>
A single DSD node:
$ python /usr/local/bin/congress-server --datasources --node-id=<datasource_unique_id>
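In production, these processes are typically supervised rather than run in a foreground shell. As a sketch, a systemd unit for a PE+API node might look like the following; the unit name, user, node-id, and paths are illustrative:

[Unit]
Description=OpenStack Congress API and Policy Engine
After=network.target

[Service]
User=congress
ExecStart=/usr/local/bin/congress-server --api --policy-engine --node-id=pe-node-1
Restart=on-failure

[Install]
WantedBy=multi-user.target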
The nodes on which the DataSourceDriver runs take the warm-standby style. Congress assumes a cluster manager handles the active-standby cluster. In this document, we describe how to make the DataSourceDriver node highly available with Pacemaker.
See the OpenStack High Availability Guide for general usage of Pacemaker and how to deploy a Pacemaker cluster stack. The guide also has HA configurations for other OpenStack projects.
You need a custom Resource Agent (RA) for DataSource Node HA. The custom RA is located in the Congress repository at /path/to/congress/script/ocf/congress-datasource. Install the RA with the following steps.
$ cd /usr/lib/ocf/resource.d
$ mkdir openstack
$ cd openstack
$ cp /path/to/congress/script/ocf/congress-datasource ./congress-datasource
$ chmod a+rx congress-datasource
You can now add the Pacemaker configuration for the Congress DataSource Node resource. Connect to the Pacemaker cluster with the crm configure command and add the following cluster resource. After adding the resource, make sure to commit the change.
primitive ds-node ocf:openstack:congress-datasource \
    params config="/etc/congress/congress.conf" \
    node_id="datasource-node" \
    op monitor interval="30s" timeout="30s"
Make sure that all nodes in the cluster have the same config file, with the same name and path, since the DataSource Node resource, ds-node, uses the config file defined by the config parameter to launch the resource.
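To check that the resource is running, you can inspect the cluster state with the standard crmsh status command; the ds-node resource should be reported as Started on exactly one node, and on failure Pacemaker should start it on another node:

$ crm status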
The RA has the following configurable parameters: