Congress will work with any cloud service, as long as Congress can represent the service’s state in table format. A table is a collection of rows, where each row is a collection of columns, and each row-column entry contains a string or a number.
For example, Neutron maintains a mapping between IP addresses and the ports they are assigned to; Neutron represents this state as the following table.
====================================== ==========
ID IP
====================================== ==========
"66dafde0-a49c-11e3-be40-425861b86ab6" "10.0.0.1"
"66dafde0-a49c-11e3-be40-425861b86ab6" "10.0.0.2"
"73e31d4c-a49c-11e3-be40-425861b86ab6" "10.0.0.3"
====================================== ==========
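Inside a driver (and inside Congress itself), such a table is simply a collection of tuples whose elements are strings or numbers. As a purely illustrative sketch, the table above could be written in Python as:

    # Illustration only: the IP-assignment table above as a set of tuples.
    port_ips = set([
        ("66dafde0-a49c-11e3-be40-425861b86ab6", "10.0.0.1"),
        ("66dafde0-a49c-11e3-be40-425861b86ab6", "10.0.0.2"),
        ("73e31d4c-a49c-11e3-be40-425861b86ab6", "10.0.0.3"),
    ])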
To plug a new service into Congress, you write a small piece of code, called a driver, that queries the new service (usually through API calls) and translates the service state into tables of data. Out of the box Congress includes drivers for a number of common services (see below).
For example, the driver for Neutron invokes the Neutron API calls that list networks, ports, security groups, and routers. The driver translates each of the JSON objects that the API calls return into tables (where in Python a table is a list of tuples). The Neutron driver is implemented here:
congress/datasources/neutronv2_driver.py
Once the driver is available, you install it into Congress, configure it (for example, with an IP address, port, and username), and write policy that references the tables populated by that driver.
All of the existing drivers are loaded automatically by Congress on startup. To disable one of them, add it to the disabled_drivers config option:
disabled_drivers = nova, plexxi
To list the drivers currently supported by Congress:
openstack congress driver list
To install a new driver, add its entry point to the Congress setup.cfg and reinstall; on service restart the driver is loaded automatically.
For a downstream (out-of-tree) driver, the entry point can instead be added to the custom_driver_endpoints config option, and on service restart Congress loads it along with the other drivers:
custom_driver_endpoints = 'test=congress.datasources.test_driver:TestDriver'
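For reference, a congress.conf fragment combining these options might look like the following sketch. The section placement shown is an assumption and may differ between releases; the values are the illustrative ones from above.

    [DEFAULT]
    # Skip loading these in-tree drivers at startup.
    disabled_drivers = nova, plexxi
    # Load an out-of-tree driver from its entry point.
    custom_driver_endpoints = 'test=congress.datasources.test_driver:TestDriver'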
Once the driver code is in place, you can use it to create a datasource whose data is available to Congress policies. To create a datasource, you use the API and provide a unique name (the name you will use in policy to refer to the service), the name of the datasource driver you want to use, and additional connection details needed by your service (such as an IP and a username/password).
For example, using the Congress CLI, you can create a datasource named 'neutron_test' using the 'neutronv2' driver. The general syntax is:
$ openstack congress datasource create <driver_name> <datasource_name> \
    --config username=<username> \
    --config password=<password> \
    --config tenant_name=<tenant> \
    --config auth_url=<url_authentication>
and the concrete command is:
$ openstack congress datasource create neutronv2 neutron_test \
    --config username=neutron \
    --config password=password \
    --config tenant_name=cloudservices \
    --config auth_url=http://10.10.10.10:5000/v2.0
And if you had a second instance of Neutron running to manage your production network, you could create a second datasource (named, say, 'neutron_prod') using the neutronv2 driver so that you could write policy over both instances of Neutron.
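For illustration, the second datasource would be created with the same command; the connection values below are placeholders rather than values from a real deployment:
$ openstack congress datasource create neutronv2 neutron_prod \
    --config username=neutron \
    --config password=password \
    --config tenant_name=cloudservices \
    --config auth_url=http://10.10.10.20:5000/v2.0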
When you write policy, you use the name 'neutron_test:ports' to reference the 'ports' table generated by the 'neutron_test' datasource, and 'neutron_test:networks' to reference its 'networks' table. Similarly, you use 'neutron_prod:ports' and 'neutron_prod:networks' to reference the tables populated by the 'neutron_prod' datasource. (More details about writing policy can be found in the Policy section.)
Congress currently has drivers for each of the following services. Each driver has a differing degree of coverage for the available API calls.
- OpenStack Aodh
- OpenStack Cinder
- OpenStack Glance (v2)
- OpenStack Heat
- OpenStack Ironic
- OpenStack Keystone (v2 & v3)
- OpenStack Mistral
- OpenStack Monasca
- OpenStack Monasca Webhook (unstable schema: may change in future release)
- OpenStack Murano
- OpenStack Neutron (v2)
- OpenStack Neutron QoS
- OpenStack Nova
- OpenStack Swift
- OpenStack Tacker
- OpenStack Vitrage (unstable schema: may change in future release)
- OPNFV Doctor
- Cloud Foundry (unofficial)
- Plexxi (unofficial)
- vCenter (unofficial)
Using the API or CLI, you can review the list of tables and columns that a driver supports. Roughly, you can think of each table as a collection of objects (like networks or servers), and the columns of that table as the attributes of those objects (like name, status, or ID). The value of each row-column entry is a (Python) string or number. If the attribute as returned by the API call is a complex object, that object is flattened into its own table (or tables).
For example:
$ openstack congress datasource schema show nova
+--------------+------------------------------------------------+
| table | columns |
+--------------+------------------------------------------------+
| flavors | {'name': 'id', 'description': 'None'}, |
| | {'name': 'name', 'description': 'None'}, |
| | {'name': 'vcpus', 'description': 'None'}, |
| | {'name': 'ram', 'description': 'None'}, |
| | {'name': 'disk', 'description': 'None'}, |
| | {'name': 'ephemeral', 'description': 'None'}, |
| | {'name': 'rxtx_factor', 'description': 'None'} |
| | |
| hosts | {'name': 'host_name', 'description': 'None'}, |
| | {'name': 'service', 'description': 'None'}, |
| | {'name': 'zone', 'description': 'None'} |
| | |
| floating_IPs | {'name': 'fixed_ip', 'description': 'None'}, |
| | {'name': 'id', 'description': 'None'}, |
| | {'name': 'ip', 'description': 'None'}, |
| | {'name': 'host_id', 'description': 'None'}, |
| | {'name': 'pool', 'description': 'None'} |
| | |
| servers | {'name': 'id', 'description': 'None'}, |
| | {'name': 'name', 'description': 'None'}, |
| | {'name': 'host_id', 'description': 'None'}, |
| | {'name': 'status', 'description': 'None'}, |
| | {'name': 'tenant_id', 'description': 'None'}, |
| | {'name': 'user_id', 'description': 'None'}, |
| | {'name': 'image_id', 'description': 'None'}, |
| | {'name': 'flavor_id', 'description': 'None'} |
| | |
+--------------+------------------------------------------------+
This section is a tutorial for those of you interested in writing your own datasource driver. It can be safely skipped otherwise.
All datasource drivers extend the code found in congress/datasources/datasource_driver.py. Typically, you will create a subclass of datasource_driver.PollingDataSourceDriver or datasource_driver.PushedDataSourceDriver, depending on the type of your datasource driver. Each instance of that class corresponds to a different service using that driver.
The following steps detail how to implement a polling datasource driver.

1. Create a new file for the driver, e.g. congress/datasources/new_driver.py.

2. Create a subclass of PollingDataSourceDriver, e.g.:

   from congress.datasources.datasource_driver import PollingDataSourceDriver

   class MyDriver(PollingDataSourceDriver)

3. Implement MyDriver.__init__():

   def __init__(name, args)

   You must call the DataSourceDriver's constructor:

   super(MyDriver, self).__init__(name, args)

4. Implement MyDriver.update_from_datasource():

   def update_from_datasource(self)

   This function is called to update self.state to reflect the new state of the service. self.state is a dictionary that maps a table name (a string) to a set of tuples, so the dictionary as a whole is a collection of tables. Each tuple element must be either a number or a string. This function implements the polling logic for the service.

5. By convention, it is useful for debugging purposes to include a main that calls update_from_datasource() and prints out the raw API results along with the tables that were generated. (A minimal skeleton illustrating steps 2 through 4 appears after this list.)
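Here is that minimal, hedged skeleton. The 'widgets' objects and their fields are hypothetical placeholders standing in for a real service API response; they are not part of Congress.

    from congress.datasources.datasource_driver import PollingDataSourceDriver


    class MyDriver(PollingDataSourceDriver):

        def __init__(self, name, args):
            # Step 3: call the base constructor so Congress can set up the
            # driver machinery (polling, state, configuration).
            super(MyDriver, self).__init__(name, args)
            # A real driver would also create its service client here from
            # the connection details in args (IP address, username, password).

        def update_from_datasource(self):
            # Step 4: in a real driver, replace this placeholder list with
            # the objects returned by the service's API.
            widgets = [{'id': 'w1', 'size': 10}, {'id': 'w2', 'size': 20}]
            # Translate the objects into tables: each table is a set of
            # tuples whose elements are strings or numbers.
            self.state = {
                'widgets': set((w['id'], w['size']) for w in widgets),
            }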
To install and test the newly written driver, follow the new driver installation procedure described in the driver installation section above.
Since Congress requires the state of each dataservice to be represented as tables, we must convert the results of each API call (which may consist of dictionaries, lists, dictionaries nested within lists, and so on) into tables.
Congress provides a translation method to make the translation from API results into tables convenient. The translation method takes a description of the API data structure, and converts objects of that structure into rows of one or more tables (depending on the data structure). For example, this is a partial snippet from the Neutron driver:
networks_translator = {
    'translation-type': 'HDICT',
    'table-name': 'networks',
    'selector-type': 'DICT_SELECTOR',
    'field-translators':
        ({'fieldname': 'id', 'translator': value_trans},
         {'fieldname': 'name', 'translator': value_trans},
         {'fieldname': 'tenant_id', 'translator': value_trans},
         {'fieldname': 'subnets', 'col': 'subnet_group_id',
          'translator': {'translation-type': 'LIST',
                         'table-name': 'networks.subnets',
                         'id-col': 'subnet_group_id',
                         'val-col': 'subnet',
                         'translator': value_trans}})}
This networks_translator describes a Python dictionary data structure that contains four keys: id, name, tenant_id, and subnets. The value of the subnets key is a list of subnet IDs, each of which is a number. For example:
{ "id": 1234,
"name": "Network Foo",
"tenant_id": 5678,
"subnets": [ 100, 101 ] }
Given the networks_translator description, the translator creates two tables. The first table is named "networks" and has a column for id, name, tenant_id, and subnet_group_id, where subnet_group_id links to the second table. The second table is named "networks.subnets" and contains two columns: one holding the subnet_group_id that associates each row with a row in the networks table, and one holding the subnet value itself.
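Roughly speaking (the linking value, shown here as <uuid>, is generated by Congress, and its exact form is an implementation detail), the example object above yields rows like:

    networks:         (1234, 'Network Foo', 5678, '<uuid>')
    networks.subnets: ('<uuid>', 100)
                      ('<uuid>', 101)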
To use the translation methods, the driver defines a translator such as networks_translator and then passes the API response objects to translate_objs(), which is defined in congress/datasources/datasource_driver.py. See congress/datasources/neutron_driver.py as an example.
The convenience translators may be insufficient in some cases: the data source may provide data in an unusual format, the convenience translators may be inefficient, or the fixed translation method may result in an unsuitable table schema. In such cases, a driver may need to implement its own translation, and we have a few recommendations for doing so.
Recommendation 1: Row = object. Typically an API call will return a collection of objects (e.g. networks, virtual machines, disks). Conceptually it is convenient to represent each object with a row in a table. The columns of that row are the attributes of each object. For example, a table of all virtual machines will have columns for memory, disk, flavor, and image.
Table: virtual_machine
ID | Memory | Disk | Flavor | Image |
---|---|---|---|---|
66dafde0-a49c-11e3-be40-425861b86ab6 | 256GB | 1TB | 1 | 83e31d4c-a49c-11e3-be40-425861b86ab6 |
73e31d4c-a49c-11e3-be40-425861b86ab6 | 10GB | 2TB | 2 | 93e31d4c-a49c-11e3-be40-425861b86ab6 |
Recommendation 2. Avoid wide tables. Wide tables (i.e. tables with many columns) are hard to use for a policy-writer. Breaking such tables up into smaller ones is often a good idea. In the above example, we could create 4 tables with 2 columns instead of 1 table with 5 columns.
Table: virtual_machine.memory
ID | Memory |
---|---|
66dafde0-a49c-11e3-be40-425861b86ab6 | 256GB |
73e31d4c-a49c-11e3-be40-425861b86ab6 | 10GB |
Table: virtual_machine.disk
ID | Disk |
---|---|
66dafde0-a49c-11e3-be40-425861b86ab6 | 1TB |
73e31d4c-a49c-11e3-be40-425861b86ab6 | 2TB |
Table: virtual_machine.flavor
ID | Flavor |
---|---|
66dafde0-a49c-11e3-be40-425861b86ab6 | 1 |
73e31d4c-a49c-11e3-be40-425861b86ab6 | 2 |
Table: virtual_machine.image
ID | Image |
---|---|
66dafde0-a49c-11e3-be40-425861b86ab6 | 83e31d4c-a49c-11e3-be40-425861b86ab6 |
73e31d4c-a49c-11e3-be40-425861b86ab6 | 93e31d4c-a49c-11e3-be40-425861b86ab6 |
Recommendation 3: Use the following design patterns. Notice that when an object has an attribute whose value is itself a structured object (e.g. a list of dictionaries), we must recursively flatten that subobject into tables.
A list of dictionaries converted to tuples.

Original data:

    [{'key1': 'value1', 'key2': 'value2'},
     {'key1': 'value3', 'key2': 'value4'}]

Tuples:

    [('value1', 'value2'), ('value3', 'value4')]

A list of dictionaries with a nested list.

Original data:

    [{'key1': 'value1', 'key2': ['v1', 'v2']},
     {'key1': 'value2', 'key2': ['v3', 'v4']}]

Tuples:

    [('value1', 'uuid1'), ('value1', 'uuid2'), ('value2', 'uuid3'), ('value2', 'uuid4')]
    [('uuid1', 'v1'), ('uuid2', 'v2'), ('uuid3', 'v3'), ('uuid4', 'v4')]

A list of dictionaries with a nested dictionary.

Original data:

    [{'key1': 'value1', 'key2': {'k1': 'v1'}},
     {'key1': 'value2', 'key2': {'k1': 'v2'}}]

Tuples:

    [('value1', 'uuid1'), ('value2', 'uuid2')]
    [('uuid1', 'k1', 'v1'), ('uuid2', 'k1', 'v2')]

Note: in the last two patterns, uuid1, uuid2, and so on are UUIDs generated by Congress.
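If a driver implements its own translation, the nested-list pattern above could be flattened by hand along the following lines. This is only a sketch: the function name, the key names, and the use of uuid4 for the generated IDs are illustrative choices, not Congress APIs.

    import uuid

    def flatten_nested_list(objs, parent_key='key1', list_key='key2'):
        """Flatten a list of dicts containing a nested list into two tables,
        following the second design pattern above."""
        parent_rows = set()
        child_rows = set()
        for obj in objs:
            for element in obj[list_key]:
                # Generated ID linking the parent row to the child row that
                # holds one element of the nested list.
                link_id = str(uuid.uuid4())
                parent_rows.add((obj[parent_key], link_id))
                child_rows.add((link_id, element))
        return parent_rows, child_rows

    # Example usage with the data from the second pattern:
    # parents, children = flatten_nested_list(
    #     [{'key1': 'value1', 'key2': ['v1', 'v2']},
    #      {'key1': 'value2', 'key2': ['v3', 'v4']}])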
Once you’ve written a driver, you’ll want to add a unit test for it. To help, this section describes how the unit test for the Glance driver (congress/datasources/glancev2_driver.py) works; the complete test code is reproduced at the end of the section.
The test code has two methods: setUp() and test_update_from_datasource().
We begin our description with the setUp() method of the test.
def setUp(self):
First the test creates a fake (actually a mock) Keystone. Most clients talk to Keystone, so having a fake one seems to be necessary to make the Glance client work properly.
self.keystone_client_p = mock.patch(
"keystoneclient.v2_0.client.Client")
self.keystone_client_p.start()
Next the test creates a fake Glance client. Glance is an OpenStack service that stores (among other things) operating system Images that you can use to create a new VM. The Glance datasource driver makes a call to <glance-client>.images.list() to retrieve the list of those images, and then turns that list of images into tables. The test creates a fake Glance client so it can control the return value of <glance-client>.images.list().
self.glance_client_p = mock.patch("glanceclient.v2.client.Client")
self.glance_client_p.start()
Next the test instantiates the GlanceV2Driver class, which contains the code for the Glance driver. Passing ‘poll_time’ as 0 is probably unnecessary here, but it tells the driver not to poll automatically. Passing ‘client’ is important because it tells the GlanceV2Driver class to use a mocked version of the Glance client instead of creating its own.
args = helper.datasource_openstack_args()
args['poll_time'] = 0
args['client'] = mock.MagicMock()
self.driver = glancev2_driver.GlanceV2Driver(args=args)
Next the test defines which value it wants <glance-client>.images.list() to return. The test itself will check whether the Glance driver code properly translates this return value into tables, so this is the actual input to the test. You can either write this data by hand or run the Glance client and print out the results.
self.mock_images = {'images': [
{u'checksum': u'9e486c3bf76219a6a37add392e425b36',
u'container_format': u'bare',
u'created_at': u'2014-10-01T20:28:08Z',
...
test_update_from_datasource() is the actual test, where we have the datasource driver grab the list of Glance images and translate them to tables. The test runs the update_from_datasource() method like normal except it ensures the return value of <glance-client>.images.list() is self.mock_images.
def test_update_from_datasource(self):
The first thing the method does is set the return value of self.driver.glance.images.list() to self.mock_images[‘images’]. Then it calls update_from_datasource() in the usual way, which translates self.mock_images[‘images’] into tables and stores the result into the driver’s self.state dictionary.
with mock.patch.object(self.driver.glance.images, "list") as img_list:
img_list.return_value = self.mock_images['images']
self.driver.update_from_datasource()
Next the test defines the tables that update_from_datasource() should construct. More precisely, it defines the expected value of the Glance driver's self.state when update_from_datasource() finishes. Remember that self.state is a dictionary mapping a table name to the set of tuples that belong to that table. For Glance there are two tables, 'images' and 'tags' (the nested tags attribute is flattened into its own table), so the expected self.state is a dictionary with those two keys, each mapped to a set of tuples.
expected = {'images': set([
(u'6934941f-3eef-43f7-9198-9b3c188e4aab',
u'active',
u'cirros-0.3.2-x86_64-uec',
u'ami',
u'2014-10-01T20:28:06Z',
u'2014-10-01T20:28:07Z',
u'ami',
u'4dfdcf14a20940799d89c7a5e7345978',
'False',
0,
0,
u'4eada48c2843d2a262c814ddc92ecf2c',
25165824,
u'/v2/images/6934941f-3eef-43f7-9198-9b3c188e4aab/file',
u'15ed89b8-588d-47ad-8ee0-207ed8010569',
u'c244d5c7-1c83-414c-a90d-af7cea1dd3b5',
u'/v2/schemas/image',
u'public'),
...
At this point in the test, update_from_datasource() has already been run, so all it does is check that the driver’s self.state has the expected value.
self.assertEqual(self.driver.state, expected)
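For reference, here is the complete test code.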
import mock
from congress.datasources import glancev2_driver
from congress.tests import base
from congress.tests import helper
class TestGlanceV2Driver(base.TestCase):
def setUp(self):
super(TestGlanceV2Driver, self).setUp()
self.keystone_client_p = mock.patch(
"keystoneclient.v2_0.client.Client")
self.keystone_client_p.start()
self.glance_client_p = mock.patch("glanceclient.v2.client.Client")
self.glance_client_p.start()
args = helper.datasource_openstack_args()
args['poll_time'] = 0
args['client'] = mock.MagicMock()
self.driver = glancev2_driver.GlanceV2Driver(args=args)
self.mock_images = {'images': [
{u'checksum': u'9e486c3bf76219a6a37add392e425b36',
u'container_format': u'bare',
u'created_at': u'2014-10-01T20:28:08Z',
u'disk_format': u'qcow2',
u'file': u'/v2/images/c42736e7-8b09-4906-abd2-d6dc8673c297/file',
u'id': u'c42736e7-8b09-4906-abd2-d6dc8673c297',
u'min_disk': 0,
u'min_ram': 0,
u'name': u'Fedora-x86_64-20-20140618-sda',
u'owner': u'4dfdcf14a20940799d89c7a5e7345978',
u'protected': False,
u'schema': u'/v2/schemas/image',
u'size': 209649664,
u'status': u'active',
u'tags': ['type=xen2', 'type=xen'],
u'updated_at': u'2014-10-01T20:28:09Z',
u'visibility': u'public'},
{u'checksum': u'4eada48c2843d2a262c814ddc92ecf2c',
u'container_format': u'ami',
u'created_at': u'2014-10-01T20:28:06Z',
u'disk_format': u'ami',
u'file': u'/v2/images/6934941f-3eef-43f7-9198-9b3c188e4aab/file',
u'id': u'6934941f-3eef-43f7-9198-9b3c188e4aab',
u'kernel_id': u'15ed89b8-588d-47ad-8ee0-207ed8010569',
u'min_disk': 0,
u'min_ram': 0,
u'name': u'cirros-0.3.2-x86_64-uec',
u'owner': u'4dfdcf14a20940799d89c7a5e7345978',
u'protected': False,
u'ramdisk_id': u'c244d5c7-1c83-414c-a90d-af7cea1dd3b5',
u'schema': u'/v2/schemas/image',
u'size': 25165824,
u'status': u'active',
u'tags': [],
u'updated_at': u'2014-10-01T20:28:07Z',
u'visibility': u'public'}]}
def test_update_from_datasource(self):
with mock.patch.object(self.driver.glance.images, "list") as img_list:
img_list.return_value = self.mock_images['images']
self.driver.update_from_datasource()
expected = {'images': set([
(u'6934941f-3eef-43f7-9198-9b3c188e4aab',
u'active',
u'cirros-0.3.2-x86_64-uec',
u'ami',
u'2014-10-01T20:28:06Z',
u'2014-10-01T20:28:07Z',
u'ami',
u'4dfdcf14a20940799d89c7a5e7345978',
'False',
0,
0,
u'4eada48c2843d2a262c814ddc92ecf2c',
25165824,
u'/v2/images/6934941f-3eef-43f7-9198-9b3c188e4aab/file',
u'15ed89b8-588d-47ad-8ee0-207ed8010569',
u'c244d5c7-1c83-414c-a90d-af7cea1dd3b5',
u'/v2/schemas/image',
u'public'),
(u'c42736e7-8b09-4906-abd2-d6dc8673c297',
u'active',
u'Fedora-x86_64-20-20140618-sda',
u'bare',
u'2014-10-01T20:28:08Z',
u'2014-10-01T20:28:09Z',
u'qcow2',
u'4dfdcf14a20940799d89c7a5e7345978',
'False',
0,
0,
u'9e486c3bf76219a6a37add392e425b36',
209649664,
u'/v2/images/c42736e7-8b09-4906-abd2-d6dc8673c297/file',
'None',
'None',
u'/v2/schemas/image',
u'public')]),
'tags': set([
(u'c42736e7-8b09-4906-abd2-d6dc8673c297', 'type=xen'),
(u'c42736e7-8b09-4906-abd2-d6dc8673c297', 'type=xen2')])}
self.assertEqual(self.driver.state, expected)