Installation Guide

The Tricircle can be deployed with DevStack for both all-in-one single-pod and multi-pod setups, so you can build different Tricircle environments with DevStack according to your needs. This guide also contains a manual installation section that discusses how to install the Tricircle step by step without DevStack, for users who install OpenStack manually.

Single pod installation with DevStack

The Tricircle can be deployed with an all-in-one single-pod DevStack setup. For the resource requirements of a single-pod DevStack installation, please refer to All-In-One Single Machine for installing DevStack on a bare metal server or All-In-One Single VM for installing DevStack in a virtual machine.

  • 1 Install DevStack. Please refer to the DevStack documentation on how to install DevStack into a single VM or bare metal server.

  • 2 In the DevStack folder, create a file local.conf, copy the content of https://github.com/openstack/tricircle/blob/master/devstack/local.conf.sample into local.conf, and change the password in the file if needed.

  • 3 Run DevStack. In the DevStack folder, run

    ./stack.sh
    
  • 4 After DevStack successfully starts, we need to create environment variables for the user (the admin user is used as an example in this document). In the DevStack folder, run

    source openrc admin demo
    
  • 5 Unset the region name environment variable, so that the following commands can be issued to a specified region as needed

    unset OS_REGION_NAME
    
  • 6 Check if services have been correctly registered. Run

    openstack --os-region-name=RegionOne endpoint list
    

    you should get output that looks like the following

    +----------------------------------+---------------+--------------+----------------+
    | ID                               | Region        | Service Name | Service Type   |
    +----------------------------------+---------------+--------------+----------------+
    | 3944592550764e349d0e82dba19a8e64 | RegionOne     | cinder       | volume         |
    | 2ce48c73cca44e66a558ad69f1aa4436 | CentralRegion | tricircle    | Tricircle      |
    | d214b688923a4348b908525266db66ed | RegionOne     | nova_legacy  | compute_legacy |
    | c5dd60f23f2e4442865f601758a73982 | RegionOne     | keystone     | identity       |
    | a99d5742c76a4069bb8621e0303c6004 | RegionOne     | cinderv3     | volumev3       |
    | 8a3c711a24b2443a9a4420bcc302ed2c | RegionOne     | glance       | image          |
    | e136af00d64a4cdf8b6b367210476f49 | RegionOne     | nova         | compute        |
    | 4c3e5d52a90e493ab720213199ab22cd | RegionOne     | neutron      | network        |
    | 8a1312afb6944492b47c5a35f1e5caeb | RegionOne     | cinderv2     | volumev2       |
    | e0a5530abff749e1853a342b5747492e | CentralRegion | neutron      | network        |
    +----------------------------------+---------------+--------------+----------------+
    

    “CentralRegion” is the region you set in local.conf via CENTRAL_REGION_NAME, whose default value is “CentralRegion”. We use it as the region for the central Neutron server and the Tricircle Admin API (ID 2ce48c73cca44e66a558ad69f1aa4436 in the list above). “RegionOne” is the normal OpenStack region which includes Nova, Cinder and Neutron.
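    If you want different region names, they can be set in local.conf before running DevStack; the lines below only illustrate the two variables involved, they are not mandatory settings:

    CENTRAL_REGION_NAME=CentralRegion
    REGION_NAME=RegionOne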

  • 7 Create pod instances for the Tricircle to manage the mapping between availability zone and OpenStack instances

    openstack multiregion networking pod create --region-name CentralRegion
    
    openstack multiregion networking pod create --region-name RegionOne --availability-zone az1
    

    Pay attention to the “region_name” parameter we specify when creating a pod. The pod name should exactly match the region name registered in Keystone. In the above commands, we create pods named “CentralRegion” and “RegionOne”.

  • 8 Create necessary resources in central Neutron server

    neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
    neutron --os-region-name=CentralRegion subnet-create net1 10.0.0.0/24
    

    Please note that the net1 ID will be used in a later step to boot a VM.
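    For convenience, one way to store the ID in a shell variable (just a sketch; any way of copying the ID works):

    net_id=$(neutron --os-region-name=CentralRegion net-show net1 -f value -c id)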

  • 9 Get image ID and flavor ID which will be used in VM booting

    glance --os-region-name=RegionOne image-list
    nova --os-region-name=RegionOne flavor-list
    
  • 10 Boot a virtual machine

    nova --os-region-name=RegionOne boot --flavor 1 --image $image_id --nic net-id=$net_id vm1
    
  • 11 Verify the VM is connected to the net1

    neutron --os-region-name=CentralRegion port-list
    neutron --os-region-name=RegionOne port-list
    nova --os-region-name=RegionOne list
    

    The IP address of the VM can be found in both the local Neutron server and the central Neutron server. The port has the same UUID in the local Neutron server and the central Neutron server.

Multi-pod Installation with DevStack

Introduction

In the single pod installation guide, we discuss how to deploy the Tricircle in one single pod with DevStack. Besides the Tricircle API and the central Neutron server, only one pod (one pod means one OpenStack instance) is running. Networks are created with the default network type: local. A local type network is only present in one pod. If a local type network is already hosting virtual machines in one pod, you cannot use it to boot a virtual machine in another pod. That is to say, local type networks don’t support cross-Neutron l2 networking.

With multi-pod installation of the Tricircle, you can try out cross-Neutron l2 networking and cross-Neutron l3 networking features.

To support cross-Neutron l2 networking, we have added both VLAN and VxLAN network types to the Tricircle. When a VLAN type network created via the central Neutron server is used to boot virtual machines in different pods, the local Neutron server in each pod will create a VLAN type network with the same VLAN ID and physical network as the central network, so each pod should be configured with the same VLAN allocation pool and physical network. Then virtual machines in different pods can communicate with each other in the same physical network with the same VLAN tag. Similarly, for the VxLAN network type, each pod should be configured with the same VxLAN allocation pool, so the local Neutron server in each pod can create a VxLAN type network with the same VxLAN ID as is allocated by the central Neutron server.
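This is why the multi-pod setup below configures identical segment ranges in the local.conf of every node, for example (the same values that appear in the node samples later in this guide):

    Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)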

Cross-Neutron l3 networking is supported in two ways in the Tricircle. If the two networks connected to the router are of local type, we utilize a shared VLAN or VxLAN network to achieve cross-Neutron l3 networking. When a subnet is attached to a router via the central Neutron server, the Tricircle not only creates the corresponding subnet and router in the pod, but also creates a “bridge” network. Both the tenant network and the “bridge” network are attached to the router. Each tenant will have one allocated VLAN or VxLAN ID, which is shared by the tenant’s “bridge” networks across Neutron servers. The CIDRs of the “bridge” networks for one tenant are also the same, so the router interfaces in the “bridge” networks across different Neutron servers can communicate with each other. An extra route is then added, as follows:

destination: CIDR of tenant network in another pod
nexthop: "bridge" network interface ip in another pod

With this route in place, when a virtual machine sends a packet whose receiver is in another network and in another pod, the packet first goes to the router, is then forwarded to the router in the other pod according to the extra route, and is finally sent to the target virtual machine. This route configuration job is triggered when a user attaches a subnet to a router via the central Neutron server, and the job finishes asynchronously.
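Purely for illustration, a manually added equivalent of such a route would look roughly like the following (router name, CIDR and nexthop are hypothetical; the Tricircle configures this automatically):

    neutron --os-region-name=RegionOne router-update router1 --routes type=dict list=true destination=10.0.2.0/24,nexthop=10.0.3.2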

If one of the networks connected to the router is not of local type, meaning that cross-Neutron l2 networking is supported in this network (like the VLAN type), and the l2 network can be stretched into the current pod, packets sent to a virtual machine in this network will not pass through the “bridge” network. Instead, packets first go to the router, then are directly forwarded to the target virtual machine via the l2 network. An l2 network’s presence scope is determined by the network’s availability zone hint. If the l2 network cannot be stretched into the current pod, the packets will still pass through the “bridge” network. For example, let’s say we have two pods, pod1 and pod2, and two availability zones, az1 and az2. Pod1 belongs to az1 and pod2 belongs to az2. If the availability zone hint of one VLAN type network is set to az1, this network cannot be stretched to pod2, so packets sent from pod2 to virtual machines in this network still need to pass through the “bridge” network.
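As an example of restricting the presence scope, a VLAN network limited to az1 could be created via the central Neutron server roughly like this (the network name and physical network here are illustrative):

    neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 net-vlan1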

Prerequisite

In this guide we take a two-node deployment as an example. One node runs the Tricircle API, the central Neutron server and one pod; the other node runs another pod. For VLAN networks, both nodes should have two network interfaces, connected to the management network and the provider VLAN network. The physical network infrastructure should support VLAN tagging. For VxLAN networks, you can combine the management plane and the data plane, in which case only one network interface is needed. If you would like to try north-south networking too, you should prepare one more network interface in the second node for the external network. In this guide, the external network is also of VLAN type, so the local.conf sample is based on a VLAN type external network setup. For the resource requirements to set up each node, please refer to All-In-One Single Machine for installing DevStack on a bare metal server and All-In-One Single VM for installing DevStack in a virtual machine.

If you want to experience cross-Neutron VxLAN networks, please make sure the compute nodes are routable to each other on the data plane, and enable the L2 population mechanism driver in both OpenStack RegionOne and OpenStack RegionTwo.

Setup

In pod1 in node1, for the Tricircle services, central Neutron and OpenStack RegionOne:

  • 1 Install DevStack. Please refer to the DevStack documentation on how to install DevStack into a single VM or bare metal server.

  • 2 In the DevStack folder, create a file local.conf, copy the content of the local.conf node1 sample into local.conf, and change the password in the file if needed.

  • 3 Change the following options according to your environment

    • change HOST_IP to your management interface ip:

      HOST_IP=10.250.201.24
      
    • the format of Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS is (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>). You can change the physical network name, but remember to adapt your change to the commands shown in this guide; also, change the min and max VLAN to match the VLAN range your physical network supports. You need to additionally specify the physical network “extern” so that the central Neutron server can create networks on the “extern” physical network, which is located in other pods:

      Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
      
    • if you would like to also configure VxLAN networks, you can set Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS. Its format is (vni_ranges=<min vxlan>:<max vxlan>):

      Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
      
    • the format of OVS_BRIDGE_MAPPINGS is <physical network name>:<ovs bridge name>. You can change these names, but remember to adapt your change to the commands shown in this guide. You do not need to specify the bridge mapping for “extern”, because this physical network is located in other pods:

      OVS_BRIDGE_MAPPINGS=bridge:br-vlan
      

      this option can be omitted if only VxLAN networks are needed

    • if you would like to also configure flat networks, you can set Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS. Its format is (flat_networks=phy_net1,phy_net2,…). Besides specifying a list of physical network names, you can also use ‘*’ to allow flat networks with arbitrary physical network names, or use an empty list to disable flat networks. For simplicity, we use the same physical networks and bridge mappings for the VLAN and flat network configuration. Similar to the VLAN network, you need to additionally specify the physical network “extern” so that the central Neutron server can create networks on the “extern” physical network, which is located in other pods:

      Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)
      
    • set TRICIRCLE_START_SERVICES to True to install the Tricircle service and central Neutron in node1:

      TRICIRCLE_START_SERVICES=True
      
  • 4 Create OVS bridge and attach the VLAN network interface to it

    sudo ovs-vsctl add-br br-vlan
    sudo ovs-vsctl add-port br-vlan eth1
    

    br-vlan is the OVS bridge name you configure in OVS_PHYSICAL_BRIDGE, and eth1 is the device name of your VLAN network interface. This step can be omitted if only VxLAN networks are provided to tenants.

  • 5 Run DevStack. In the DevStack folder, run

    ./stack.sh
    
  • 6 After DevStack successfully starts, begin to set up node2.

In pod2 in node2, for OpenStack RegionTwo:

  • 1 Install DevStack. Please refer to the DevStack documentation on how to install DevStack into a single VM or bare metal server.

  • 2 In the DevStack folder, create a file local.conf, copy the content of the local.conf node2 sample into local.conf, and change the password in the file if needed.

  • 3 Change the following options according to your environment

    • change HOST_IP to your management interface ip:

      HOST_IP=10.250.201.25
      
    • change KEYSTONE_SERVICE_HOST to management interface ip of node1:

      KEYSTONE_SERVICE_HOST=10.250.201.24
      
    • change KEYSTONE_AUTH_HOST to management interface ip of node1:

      KEYSTONE_AUTH_HOST=10.250.201.24
      
    • the format of Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS is (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>). You can change the physical network name, but remember to adapt your change to the commands shown in this guide; also, change the min and max VLAN to match the VLAN range your physical network supports:

      Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
      
    • if you would like to also configure VxLAN networks, you can set Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS. Its format is (vni_ranges=<min vxlan>:<max vxlan>):

      Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
      
    • the format of OVS_BRIDGE_MAPPINGS is <physical network name>:<ovs bridge name>. You can change these names, but remember to adapt your change to the commands shown in this guide:

      OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext
      

      if the VLAN network is only used for the external network, it can be configured like:

      OVS_BRIDGE_MAPPINGS=extern:br-ext
      
    • if you would like to also configure flat networks, you can set Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS. Its format is (flat_networks=phy_net1,phy_net2,…). Besides specifying a list of physical network names, you can also use ‘*’ to allow flat networks with arbitrary physical network names, or use an empty list to disable flat networks. For simplicity, we use the same physical networks and bridge mappings for the VLAN and flat network configuration:

      Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)
      
    • set TRICIRCLE_START_SERVICES to False (it’s True by default) so that the Tricircle services and central Neutron will not be started in node2:

      TRICIRCLE_START_SERVICES=False
      

    In this guide, we define two physical networks in node2: one is “bridge” for the bridge network, and the other is “extern” for the external network. If you do not want to try l3 north-south networking, you can simply remove the “extern” part. The external network type we use in this guide is VLAN; if you want to use another network type like flat, please refer to the DevStack documentation.

  • 4 Create OVS bridge and attach the VLAN network interface to it

    sudo ovs-vsctl add-br br-vlan
    sudo ovs-vsctl add-port br-vlan eth1
    sudo ovs-vsctl add-br br-ext
    sudo ovs-vsctl add-port br-ext eth2
    

    br-vlan and br-ext are the OVS bridge names you configure in OVS_PHYSICAL_BRIDGE, and eth1 and eth2 are the device names of your VLAN network interfaces, for the “bridge” network and the external network respectively. Omit br-vlan if you only use VxLAN networks as tenant networks.

  • 5 Run DevStack. In the DevStack folder, run

    ./stack.sh
    
  • 6 After DevStack successfully starts, the setup is finished.

Note

In the newest version of the code, booting an instance in node2 may fail. The reason is that the Apache configuration file of the Nova placement API doesn’t grant access rights to the placement API bin folder. You can use “screen -r” to check whether the placement API is working well. If the placement API is stuck, manually update the placement API configuration file “/etc/apache2/sites-enabled/placement-api.conf” in node2 to add the following section:

<Directory /usr/local/bin>
    Require all granted
</Directory>

After the update, restart the Apache service first, and then the placement API.
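Assuming Apache is managed by systemd with the Ubuntu-style service name, restarting it would look like the following; the placement API itself is then restarted from its screen window as mentioned above:

    sudo systemctl restart apache2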

This problem no longer exists after this patch:

https://github.com/openstack-dev/devstack/commit/6ed53156b6198e69d59d1cf3a3497e96f5b7a870

How to play

  • 1 After DevStack successfully starts, we need to create environment variables for the user (the admin user is used as an example in this guide). In the DevStack folder, run

    source openrc admin demo
    
  • 2 Unset the region name environment variable, so that the following commands can be issued to a specified region as needed

    unset OS_REGION_NAME
    
  • 3 Check if services have been correctly registered. Run

    openstack --os-region-name=RegionOne endpoint list
    

    you should get output that looks like the following

    +----------------------------------+---------------+--------------+----------------+
    | ID                               | Region        | Service Name | Service Type   |
    +----------------------------------+---------------+--------------+----------------+
    | 4adaab1426d94959be46314b4bd277c2 | RegionOne     | glance       | image          |
    | 5314a11d168042ed85a1f32d40030b31 | RegionTwo     | nova_legacy  | compute_legacy |
    | ea43c53a8ab7493dacc4db079525c9b1 | RegionOne     | keystone     | identity       |
    | a1f263473edf4749853150178be1328d | RegionOne     | neutron      | network        |
    | ebea16ec07d94ed2b5356fb0a2a3223d | RegionTwo     | neutron      | network        |
    | 8d374672c09845f297755117ec868e11 | CentralRegion | tricircle    | Tricircle      |
    | e62e543bb9cf45f593641b2d00d72700 | RegionOne     | nova_legacy  | compute_legacy |
    | 540bdedfc449403b9befef3c2bfe3510 | RegionOne     | nova         | compute        |
    | d533429712954b29b9f37debb4f07605 | RegionTwo     | glance       | image          |
    | c8bdae9506cd443995ee3c89e811fb45 | CentralRegion | neutron      | network        |
    | 991d304dfcc14ccf8de4f00271fbfa22 | RegionTwo     | nova         | compute        |
    +----------------------------------+---------------+--------------+----------------+
    

    “CentralRegion” is the region you set in local.conf via CENTRAL_REGION_NAME, whose default value is “CentralRegion”. We use it as the region for the Tricircle API and the central Neutron server. “RegionOne” and “RegionTwo” are the normal OpenStack regions which include Nova, Neutron and Glance. The shared Keystone service is registered in “RegionOne”.

  • 4 Create pod instances for the Tricircle to manage the mapping between availability zones and OpenStack instances

    openstack multiregion networking pod create --region-name CentralRegion
    
    openstack multiregion networking pod create --region-name RegionOne --availability-zone az1
    
    openstack multiregion networking pod create --region-name RegionTwo --availability-zone az2
    

    Pay attention to the “region_name” parameter we specify when creating a pod. The pod name should exactly match the region name registered in Keystone. In the above commands, we create pods named “CentralRegion”, “RegionOne” and “RegionTwo”.

  • 5 Create necessary resources in central Neutron server

    neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
    neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
    neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionTwo net2
    neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24
    

    Please note that the net1 and net2 IDs will be used in later steps to boot VMs.
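    As an illustration, one way to capture the IDs needed by the later steps (the variable names simply match the placeholders used below):

    net1_id=$(neutron --os-region-name=CentralRegion net-show net1 -f value -c id)
    net2_id=$(neutron --os-region-name=CentralRegion net-show net2 -f value -c id)
    subnet1_id=$(neutron --os-region-name=CentralRegion subnet-list -f value -c id -c cidr | awk '/10.0.1.0/ {print $1}')
    subnet2_id=$(neutron --os-region-name=CentralRegion subnet-list -f value -c id -c cidr | awk '/10.0.2.0/ {print $1}')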

  • 6 Get image ID and flavor ID which will be used in VM booting

    glance --os-region-name=RegionOne image-list
    nova --os-region-name=RegionOne flavor-list
    glance --os-region-name=RegionTwo image-list
    nova --os-region-name=RegionTwo flavor-list
    
  • 7 Boot virtual machines

    nova --os-region-name=RegionOne boot --flavor 1 --image $image1_id --nic net-id=$net1_id vm1
    nova --os-region-name=RegionTwo boot --flavor 1 --image $image2_id --nic net-id=$net2_id vm2
    
  • 8 Verify the VMs are connected to the networks

    neutron --os-region-name=CentralRegion port-list
    neutron --os-region-name=RegionOne port-list
    nova --os-region-name=RegionOne list
    neutron --os-region-name=RegionTwo port-list
    nova --os-region-name=RegionTwo list
    

    The IP address of each VM can be found in both the local Neutron server and the central Neutron server. The port has the same UUID in the local Neutron server and the central Neutron server.

  • 9 Create external network and subnet

    neutron --os-region-name=CentralRegion net-create --router:external --provider:network_type vlan --provider:physical_network extern --availability-zone-hint RegionTwo ext-net
    neutron --os-region-name=CentralRegion subnet-create --name ext-subnet --disable-dhcp ext-net 163.3.124.0/24
    

    Pay attention that when creating the external network, we need to pass the “availability_zone_hints” parameter, which is the name of the pod that will host the external network.

    Currently the external network needs to be created before attaching a subnet to the router, because the plugin needs to utilize the external network information to set up the bridge network when handling the interface-adding operation. This limitation will be removed later.

  • 10 Create router and attach subnets in central Neutron server

    neutron --os-region-name=CentralRegion router-create router
    neutron --os-region-name=CentralRegion router-interface-add router $subnet1_id
    neutron --os-region-name=CentralRegion router-interface-add router $subnet2_id
    
  • 11 Set router external gateway in central Neutron server

    neutron --os-region-name=CentralRegion router-gateway-set router ext-net
    

    Now virtual machines in the subnets attached to the router should be able to ping machines in the external network. In our test, we use a hypervisor tool to directly start a virtual machine in the external network to check the network connectivity.

  • 12 Launch VNC console and test connection

    nova --os-region-name=RegionOne get-vnc-console vm1 novnc
    nova --os-region-name=RegionTwo get-vnc-console vm2 novnc
    

    You should be able to ping vm1 from vm2 and vice versa.

  • 13 Create floating ip in central Neutron server

    neutron --os-region-name=CentralRegion floatingip-create ext-net
    
  • 14 Associate floating ip

    neutron --os-region-name=CentralRegion floatingip-list
    neutron --os-region-name=CentralRegion port-list
    neutron --os-region-name=CentralRegion floatingip-associate $floatingip_id $port_id
    

    Now you should be able to access the virtual machine from the external network via the bound floating IP.

Manual Installation

The Tricircle works with Neutron to provide networking automation functionality across Neutron servers in a multi-region OpenStack deployment. In this guide we discuss how to manually install the Tricircle with the local and central Neutron servers.

The local Neutron server, running with the Tricircle local plugin, is responsible for triggering cross-Neutron networking automation. Every OpenStack instance has one local Neutron service, registered in the same region as the other core services like Nova, Cinder, Glance, etc. The central Neutron server, running with the Tricircle central plugin, is responsible for unified resource allocation and cross-Neutron network building. Besides the regions for each OpenStack instance, we also need one specific region for the central Neutron service. Only the Tricircle administrator service needs to be registered in this region along with the central Neutron service; other core services are not mandatory.

Installation with Central Neutron Server

  • 1 Install the Tricircle package:

    git clone https://github.com/openstack/tricircle.git
    cd tricircle
    pip install -e .
    
  • 2 Register the Tricircle administrator API to Keystone:

    openstack user create tricircle --password password
    openstack role add --project service --user tricircle service
    openstack service create tricircle --name tricircle --description "Cross Neutron Networking Automation Service"
    service_id=$(openstack service show tricircle -f value -c id)
    service_host=162.3.124.201
    service_port=19999
    service_region=CentralRegion
    service_url=http://$service_host:$service_port/v1.0
    openstack endpoint create $service_id  public $service_url --region $service_region
    openstack endpoint create $service_id  admin $service_url --region $service_region
    openstack endpoint create $service_id  internal $service_url --region $service_region
    

    Change the password, service_host, service_port and service_region in the above commands to fit your deployment. The OpenStack CLI tool will automatically find the endpoints to send requests to. If you would like to specify the region for endpoints, use:

    openstack --os-region-name <region_name> <command>
    
  • 3 Generate the Tricircle configuration sample:

    cd tricircle
    oslo-config-generator --config-file=etc/api-cfg-gen.conf
    oslo-config-generator --config-file=etc/xjob-cfg-gen.conf
    

    The generated sample files are located in tricircle/etc

  • 4 Configure the Tricircle administrator API:

    cd tricircle/etc
    cp api.conf.sample api.conf
    

    Edit etc/api.conf. For detailed configuration information, please refer to the configuration guide. Only the options that need to be changed are listed below.

    • [DEFAULT] tricircle_db_connection: database connection string for the Tricircle (example: mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8)
    • [DEFAULT] transport_url: a URL representing the messaging driver to be used and its full configuration (example: rabbit://user:password@127.0.0.1:5672)
    • [keystone_authtoken] auth_type: authentication method (example: password)
    • [keystone_authtoken] auth_url: keystone authorization url (example: http://$keystone_service_host/identity)
    • [keystone_authtoken] username: username of the service account, needed for password authentication (example: tricircle)
    • [keystone_authtoken] password: password of the service account, needed for password authentication (example: password)
    • [keystone_authtoken] user_domain_name: user domain name of the service account, needed for password authentication (example: Default)
    • [keystone_authtoken] project_name: project name of the service account, needed for password authentication (example: service)
    • [keystone_authtoken] project_domain_name: project domain name of the service account, needed for password authentication (example: Default)
    • [keystone_authtoken] www_authenticate_uri: complete public Identity API endpoint (example: http://$keystone_service_host/identity)
    • [keystone_authtoken] cafile: a PEM encoded Certificate Authority to use when verifying HTTPS (example: /opt/stack/data/ca-bundle.pem)
    • [keystone_authtoken] signing_dir: directory used to cache files related to PKI tokens (example: /var/cache/tricircle)
    • [keystone_authtoken] memcached_servers: optionally specify a list of memcached server(s) to use for caching (example: $keystone_service_host:11211)
    • [client] auth_url: keystone authorization url (example: http://$keystone_service_host/identity)
    • [client] identity_url: keystone service url (example: http://$keystone_service_host/identity/v3)
    • [client] auto_refresh_endpoint: if set to True, the endpoint will be automatically refreshed when access times out (example: True)
    • [client] top_region_name: name of the central region which the client needs to access (example: CentralRegion)
    • [client] admin_username: username of the admin account (example: admin)
    • [client] admin_password: password of the admin account (example: password)
    • [client] admin_tenant: project name of the admin account (example: demo)
    • [client] admin_user_domain_name: user domain name of the admin account (example: Default)
    • [client] admin_tenant_domain_name: project domain name of the admin account (example: Default)

Note

The Tricircle utilizes the Oslo libraries to set up service, database, log and RPC; please refer to the configuration guide of the corresponding Oslo library if you need further configuration of these modules. Change keystone_service_host to the address of the Keystone service.

Note

It’s worth explaining the following options that can easily confuse users. keystone_authtoken.auth_url is the Keystone endpoint URL used by services to validate user tokens. keystone_authtoken.www_authenticate_uri will be put in the “WWW-Authenticate: Keystone uri=%s” header in the 401 response to tell users where they can get authenticated. These two URLs can be the same, but sometimes people would like to use an internal URL for auth_url and a public URL for www_authenticate_uri. client.auth_url is used by the common.client module to construct a client to get authentication and access other services; it can be either the internal or the public endpoint of Keystone, depending on how the module can reach Keystone. client.identity_url is no longer used in the code since the Pike release, so you can simply ignore it; we will deprecate and remove this option later.
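As a minimal sketch, the edited sections of api.conf might look like the following; the values are the examples from the table above and must be adapted to your deployment:

    [DEFAULT]
    tricircle_db_connection = mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8
    transport_url = rabbit://user:password@127.0.0.1:5672

    [keystone_authtoken]
    auth_type = password
    auth_url = http://$keystone_service_host/identity
    username = tricircle
    password = password
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    www_authenticate_uri = http://$keystone_service_host/identity
    memcached_servers = $keystone_service_host:11211

    [client]
    auth_url = http://$keystone_service_host/identity
    auto_refresh_endpoint = True
    top_region_name = CentralRegion
    admin_username = admin
    admin_password = password
    admin_tenant = demo
    admin_user_domain_name = Default
    admin_tenant_domain_name = Default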

  • 5 Create the Tricircle database (take MySQL as an example):

    mysql -uroot -p -e "create database tricircle character set utf8;"
    cd tricircle
    tricircle-db-manage --config-file etc/api.conf db_sync
    
  • 6 Start the Tricircle administrator API:

    sudo mkdir /var/cache/tricircle
    sudo chown $(whoami) /var/cache/tricircle/
    cd tricircle
    tricircle-api --config-file etc/api.conf
    
  • 7 Configure the Tricircle Xjob daemon:

    cd tricircle/etc
    cp xjob.conf.sample xjob.conf
    

    Edit etc/xjob.conf. For detailed configuration information, please refer to the configuration guide. Only the options that need to be changed are listed below.

    • [DEFAULT] tricircle_db_connection: database connection string for the Tricircle (example: mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8)
    • [DEFAULT] transport_url: a URL representing the messaging driver to be used and its full configuration (example: rabbit://user:password@127.0.0.1:5672)
    • [client] auth_url: keystone authorization url (example: http://$keystone_service_host/identity)
    • [client] identity_url: keystone service url (example: http://$keystone_service_host/identity/v3)
    • [client] auto_refresh_endpoint: if set to True, the endpoint will be automatically refreshed when access times out (example: True)
    • [client] top_region_name: name of the central region which the client needs to access (example: CentralRegion)
    • [client] admin_username: username of the admin account (example: admin)
    • [client] admin_password: password of the admin account (example: password)
    • [client] admin_tenant: project name of the admin account (example: demo)
    • [client] admin_user_domain_name: user domain name of the admin account (example: Default)
    • [client] admin_tenant_domain_name: project domain name of the admin account (example: Default)

Note

The Tricircle utilizes the Oslo libraries to set up service, database, log and RPC; please refer to the configuration guide of the corresponding Oslo library if you need further configuration of these modules. Change keystone_service_host to the address of the Keystone service.

  • 8 Start the Tricircle Xjob daemon:

    cd tricircle
    tricircle-xjob --config-file etc/xjob.conf
    
  • 9 Setup central Neutron server

    In this guide we assume readers are familiar with how to install a Neutron server, so we just briefly discuss the steps and the extra configuration needed by the central Neutron server. For detailed information about the configuration options in the “client” and “tricircle” groups, please refer to the configuration guide. The Neutron server can be installed alone, or you can install a full OpenStack instance and then remove or stop the other services.

    • install Neutron package

    • configure central Neutron server

      edit neutron.conf

    • [database] connection: database connection string for the central Neutron server (example: mysql+pymysql://root:password@127.0.0.1/neutron?charset=utf8)
    • [DEFAULT] bind_port: port the central Neutron server binds to; change it to a different value than 9696 if you run the central and local Neutron servers on the same host
    • [DEFAULT] core_plugin: core plugin the central Neutron server uses (example: tricircle.network.central_plugin.TricirclePlugin)
    • [DEFAULT] service_plugins: service plugins the central Neutron server uses (leave empty)
    • [DEFAULT] tricircle_db_connection: database connection string for the Tricircle (example: mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8)
    • [client] auth_url: keystone authorization url (example: http://$keystone_service_host/identity)
    • [client] identity_url: keystone service url (example: http://$keystone_service_host/identity/v3)
    • [client] auto_refresh_endpoint: if set to True, the endpoint will be automatically refreshed when access times out (example: True)
    • [client] top_region_name: name of the central region which the client needs to access (example: CentralRegion)
    • [client] admin_username: username of the admin account (example: admin)
    • [client] admin_password: password of the admin account (example: password)
    • [client] admin_tenant: project name of the admin account (example: demo)
    • [client] admin_user_domain_name: user domain name of the admin account (example: Default)
    • [client] admin_tenant_domain_name: project domain name of the admin account (example: Default)
    • [tricircle] type_drivers: list of network type driver entry points to be loaded (example: vxlan,vlan,flat,local)
    • [tricircle] tenant_network_types: ordered list of network_types to allocate as tenant networks (example: vxlan,vlan,flat,local)
    • [tricircle] network_vlan_ranges: physical network names and usable VLAN tag ranges for VLAN provider networks (example: bridge:2001:3000)
    • [tricircle] vni_ranges: VxLAN VNI range (example: 1001:2000)
    • [tricircle] flat_networks: physical network names with which flat networks can be created (example: bridge)
    • [tricircle] bridge_network_type: l3 bridge network type, which must be enabled in tenant_network_types and must not be the local type (example: vxlan)
    • [tricircle] default_region_for_external_network: default region the external network belongs to (example: RegionOne)
    • [tricircle] enable_api_gateway: whether the API gateway is enabled (example: False)

    Note

    Change keystone_service_host to the address of Keystone service.

    • create database for central Neutron server

    • register the central Neutron server endpoint in Keystone; central Neutron should be registered in the same region as the Tricircle (see the sketch after this list)

    • start central Neutron server
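    A minimal sketch of the endpoint registration, following the same CLI style used for the Tricircle service earlier; the host and port here are assumptions and must be adapted to your deployment (skip the service creation if the network service type already exists):

      openstack service create network --name neutron --description "Central Neutron Service"
      service_id=$(openstack service show neutron -f value -c id)
      service_host=162.3.124.201
      service_port=9696
      service_region=CentralRegion
      service_url=http://$service_host:$service_port
      openstack endpoint create $service_id public $service_url --region $service_region
      openstack endpoint create $service_id admin $service_url --region $service_region
      openstack endpoint create $service_id internal $service_url --region $service_region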

Installation with Local Neutron Server

  • 1 Install the Tricircle package:

    git clone https://github.com/openstack/tricircle.git
    cd tricircle
    pip install -e .
    
  • 2 Setup local Neutron server

    In this guide we assume readers have already installed a complete OpenStack instance running services like Nova, Cinder, Neutron, etc., so we just discuss how to configure the Neutron server to work with the Tricircle. For detailed information about the configuration options in the “client” and “tricircle” groups, please refer to the configuration guide. After the changes, just restart the Neutron server.

    edit neutron.conf.

    Note

    Pay attention to the service_plugins configuration item, and make sure the configured plugin supports associating a floating IP with a port whose network is not directly attached to the router. To support this, TricircleL3Plugin inherits from Neutron’s original L3RouterPlugin and overrides the original “get_router_for_floatingip” implementation. If you want to configure the local Neutron to use the original L3RouterPlugin, you will need to patch the “get_router_for_floatingip” function in the same way that has been done for TricircleL3Plugin.

    It’s not necessary to configure the service plugins if cross Neutron L2 networking is the only need in the deployment.

    • [DEFAULT] core_plugin: core plugin the local Neutron server uses (example: tricircle.network.local_plugin.TricirclePlugin)
    • [DEFAULT] service_plugins: service plugins the local Neutron server uses (example: tricircle.network.local_l3_plugin.TricircleL3Plugin)
    • [client] auth_url: keystone authorization url (example: http://$keystone_service_host/identity)
    • [client] identity_url: keystone service url (example: http://$keystone_service_host/identity/v3)
    • [client] auto_refresh_endpoint: if set to True, the endpoint will be automatically refreshed when access times out (example: True)
    • [client] top_region_name: name of the central region which the client needs to access (example: CentralRegion)
    • [client] admin_username: username of the admin account (example: admin)
    • [client] admin_password: password of the admin account (example: password)
    • [client] admin_tenant: project name of the admin account (example: demo)
    • [client] admin_user_domain_name: user domain name of the admin account (example: Default)
    • [client] admin_tenant_domain_name: project domain name of the admin account (example: Default)
    • [tricircle] real_core_plugin: the core plugin the Tricircle local plugin invokes (example: neutron.plugins.ml2.plugin.Ml2Plugin)
    • [tricircle] central_neutron_url: central Neutron server url (example: http://$neutron_service_host:9696)

    Note

    Change keystone_service_host to the address of Keystone service, and neutron_service_host to the address of central Neutron service.

    edit ml2_conf.ini

    • [ml2] mechanism_drivers: add l2population if VxLAN networks are used (example: openvswitch,l2population)
    • [agent] l2_population: set to True if VxLAN networks are used (example: True)
    • [agent] tunnel_types: set to vxlan if VxLAN networks are used (example: vxlan)
    • [ml2_type_vlan] network_vlan_ranges: for a specific physical network, the VLAN range should be the same as the tricircle.network_vlan_ranges option for central Neutron; configure this option if VLAN networks are used (example: bridge:2001:3000)
    • [ml2_type_vxlan] vni_ranges: should be the same as the tricircle.vni_ranges option for central Neutron; configure this option if VxLAN networks are used (example: 1001:2000)
    • [ml2_type_flat] flat_networks: should be part of the tricircle.network_vlan_ranges option for central Neutron; configure this option if flat networks are used (example: bridge)
    • [ovs] bridge_mappings: map the physical network to an OVS bridge (example: bridge:br-bridge)

    Note

    In the tricircle.network_vlan_ranges option for central Neutron, all the available physical networks in all pods and their VLAN ranges should be configured without duplication. It’s possible that one local Neutron doesn’t contain some of the physical networks configured in tricircle.network_vlan_ranges; in this case, users need to specify availability zone hints when creating networks or booting instances in the correct pod, to ensure that the required physical network is available in the target pod.
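    Putting the options above together, a minimal ml2_conf.ini sketch for a local pod that offers VxLAN, VLAN and flat networks could look like this, assuming the example ranges used in the table:

      [ml2]
      mechanism_drivers = openvswitch,l2population

      [agent]
      l2_population = True
      tunnel_types = vxlan

      [ml2_type_vlan]
      network_vlan_ranges = bridge:2001:3000

      [ml2_type_vxlan]
      vni_ranges = 1001:2000

      [ml2_type_flat]
      flat_networks = bridge

      [ovs]
      bridge_mappings = bridge:br-bridge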

Work with Nova cell v2 (experimental)

Note

Multi-cell support of Nova cell v2 is under development. DevStack doesn’t support multi-cell deployment currently, so the steps discussed in this document may not seem that elegant. We will keep updating this document according to the progress of multi-cell development by the Nova team.

Setup

  • 1 Follow the “Multi-pod Installation with DevStack” document to prepare your local.conf for both nodes, and set TRICIRCLE_DEPLOY_WITH_CELL to True for both nodes. Start DevStack in node1, then node2.

Note

After running DevStack in both nodes, a multi-cell environment will be prepared: there is one CentralRegion, where the Nova API and central Neutron will be registered. Nova has two cells: node1 belongs to cell1 and node2 belongs to cell2, and each cell will be configured to use a dedicated local Neutron. For cell1, it’s the RegionOne Neutron in node1; for cell2, it’s the RegionTwo Neutron in node2 (you can set the region name in local.conf to make the name more friendly). End users can access the CentralRegion endpoints of Nova and Neutron to experience the integration of Nova cell v2 and the Tricircle.

  • 2 Stop the following services in node2:

    systemctl stop devstack@n-sch.service
    systemctl stop devstack@n-super-cond.service
    systemctl stop devstack@n-api.service
    

    if devstack@n-api-meta.service exists, stop it:

    systemctl stop devstack@n-api-meta.service
    

Note

Actually for cell v2, only one Nova API is required. We enable n-api in node2 because we need DevStack to help us create the necessary cell database. If n-api is disabled, neither API database nor cell database will be created.

  • 3 In node2, run the following command:

    mysql -u$user -p$password -Dnova_cell1 -e 'select host, mapped from compute_nodes'
    

    you can see that this command returns one row, showing that the host of node2 is already mapped:

    +-----------+--------+
    | host      | mapped |
    +-----------+--------+
    | zhiyuan-2 |      1 |
    +-----------+--------+
    

    This host is registered to the Nova API in node2, which we have already stopped. We need to update this row to set “mapped” to 0:

    mysql -u$user -p$password -Dnova_cell1 -e 'update compute_nodes set mapped = 0 where host = "zhiyuan-2"'
    

    then we can register this host again in step 4.

  • 4 In node1, run the following commands to register the new cell:

    nova-manage cell_v2 create_cell --name cell2 \
      --transport-url rabbit://$rabbit_user:$rabbit_passwd@$node2_ip:5672/nova_cell1 \
      --database_connection mysql+pymysql://$db_user:$db_passwd@$node2_ip/nova_cell1?charset=utf8
    
    nova-manage cell_v2 discover_hosts
    

    then you can see the new cell and host are added in the database:

    mysql -u$user -p$password -Dnova_api -e 'select cell_id, host from host_mappings'
    
    +---------+-----------+
    | cell_id | host      |
    +---------+-----------+
    |       2 | zhiyuan-1 |
    |       3 | zhiyuan-2 |
    +---------+-----------+
    
    mysql -u$user -p$password -Dnova_api -e 'select id, name from cell_mappings'
    
    +----+-------+
    | id | name  |
    +----+-------+
    |  1 | cell0 |
    |  2 | cell1 |
    |  3 | cell2 |
    +----+-------+
    
  • 5 In node1, run the following command:

    systemctl restart devstack@n-sch.service
    
  • 6 In node1, check if compute services in both hosts are registered:

    openstack --os-region-name CentralRegion compute service list
    
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    | ID | Binary           | Host      | Zone     | Status  | State | Updated At                 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    |  5 | nova-scheduler   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:02.000000 |
    |  6 | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:09.000000 |
    |  8 | nova-consoleauth | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:01.000000 |
    |  1 | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:07.000000 |
    |  3 | nova-compute     | zhiyuan-1 | nova     | enabled | up    | 2017-09-20T06:56:10.000000 |
    |  1 | nova-conductor   | zhiyuan-2 | internal | enabled | up    | 2017-09-20T06:56:07.000000 |
    |  3 | nova-compute     | zhiyuan-2 | nova     | enabled | up    | 2017-09-20T06:56:09.000000 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    
    zhiyuan-1 has two nova-conductor services, because one of them is a super conductor service.
    
  • 7 Create two aggregates and put each host into one of them:

    nova --os-region-name CentralRegion aggregate-create ag1 az1
    nova --os-region-name CentralRegion aggregate-create ag2 az2
    nova --os-region-name CentralRegion aggregate-add-host ag1 zhiyuan-1
    nova --os-region-name CentralRegion aggregate-add-host ag2 zhiyuan-2
    
  • 8 Create pods; the tricircle client is used:

    openstack --os-region-name CentralRegion multiregion networking pod create --region-name CentralRegion
    openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionOne --availability-zone az1
    openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionTwo --availability-zone az2
    
  • 9 Create network and boot virtual machines:

    net_id=$(openstack --os-region-name CentralRegion network create --provider-network-type vxlan net1 -c id -f value)
    openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.1.0/24 --network net1 subnet1
    image_id=$(openstack --os-region-name CentralRegion image list -c ID -f value)
    
    openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az1 vm1
    openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az2 vm2
    

Trouble Shooting

  • 1 After you run “compute service list” in step 6, you may only see services in node1, like:

    +----+------------------+-----------+----------+---------+-------+----------------------------+
    | ID | Binary           | Host      | Zone     | Status  | State | Updated At                 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    |  5 | nova-scheduler   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:55:52.000000 |
    |  6 | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:55:59.000000 |
    |  8 | nova-consoleauth | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:01.000000 |
    |  1 | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:55:57.000000 |
    |  3 | nova-compute     | zhiyuan-1 | nova     | enabled | up    | 2017-09-20T06:56:00.000000 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    

    Though the new cell has been registered in the database, the running n-api process in node1 may not recognize it. We find that restarting n-api solves this problem.
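    On a systemd-based DevStack deployment, the restart would look like the following (the unit name matches the one used in the earlier steps):

    systemctl restart devstack@n-api.service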

Installation guide for LBaaS in Tricircle

Note

Since Octavia does not support multi-region scenarios, some modifications are required to install the Tricircle and Octavia in multiple pods. As a result, we will keep updating this document, so as to support automatic installation and testing of the Tricircle and Octavia in multiple regions.

Setup & Installation

  • 1 For node1 in RegionOne, clone the code from the Octavia repository to /opt/stack/. Then make some changes to Octavia, so that we can build the management network in multiple regions manually. Here we list the lines to comment out.

    • First, comment out the following lines in the octavia_init function in octavia/devstack/plugin.sh.

      Line 586-588 :

      • build_mgmt_network

      • OCTAVIA_AMP_NETWORK_ID=$(openstack network show lb-mgmt-net -f value -c id)

      • iniset $OCTAVIA_CONF controller_worker amp_boot_network_list ${OCTAVIA_AMP_NETWORK_ID}

      Line 593-595 :

      • if is_service_enabled tempest; then

      • configure_octavia_tempest ${OCTAVIA_AMP_NETWORK_ID}

      • fi

      Line 602-604 :

      • if is_service_enabled tempest; then

      • configure_octavia_tempest ${OCTAVIA_AMP_NETWORK_ID}

      • fi

      Line 610 :

      • create_mgmt_network_interface

      Line 612 :

      • configure_lb_mgmt_sec_grp

    • Second, comment out the following three lines in the octavia_start function in octavia/devstack/plugin.sh.

      Line 465-467 :

      • if ! ps aux | grep -q [o]-hm0 && [ $OCTAVIA_NODE != 'api' ] ; then

      • sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF

      • fi

  • 2 Follow the “Multi-pod Installation with DevStack” document to prepare your local.conf for node1 in RegionOne, and add the following lines before installation. Start DevStack in node1.

    enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
    enable_plugin octavia https://github.com/openstack/octavia.git
    ENABLED_SERVICES+=,q-lbaasv2
    ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
    
  • 3 If you only want to deploy Octavia in RegionOne, the following two steps can be skipped. After the DevStack installation in node1 is completed, for node2 in RegionTwo, clone the code from the Octavia repository to /opt/stack/. Here we need to modify plugin.sh in five sub-steps.

    • First, since Keystone is installed in RegionOne and shared by the other regions, we need to comment out all add_load-balancer_roles lines in the octavia_init function in octavia/devstack/plugin.sh.

      Line 597 and Line 606 :

      • add_load-balancer_roles

    • Second, as in Step 1, comment out the fourteen lines that create networking resources in the octavia_init function.

    • Third, replace all ‘openstack keypair’ with ‘openstack --os-region-name=$REGION_NAME keypair’.

    • Fourth, replace all ‘openstack image’ with ‘openstack --os-region-name=$REGION_NAME image’.

    • Fifth, replace all ‘openstack flavor’ with ‘openstack --os-region-name=$REGION_NAME flavor’.

  • 4 Follow the “Multi-pod Installation with DevStack” document to prepare your local.conf for node2 in RegionTwo, and add the following lines before installation. Start DevStack in node2.

    enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
    enable_plugin octavia https://github.com/openstack/octavia.git
    ENABLED_SERVICES+=,q-lbaasv2
    ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
    

Prerequisite

  • 1 After DevStack successfully starts, we must create environment variables for the admin user and use the admin project, since the Octavia controller will use the admin account to query and use the management network as well as the security groups created in the following steps

    $ source openrc admin admin
    
  • 2 Then unset the region name environment variable, so that the following commands can be issued to a specified region as needed.

    $ unset OS_REGION_NAME
    
  • 3 Before configuring LBaaS, we need to create pods in CentralRegion, i.e., node1.

    $ openstack multiregion networking pod create --region-name CentralRegion
    $ openstack multiregion networking pod create --region-name RegionOne --availability-zone az1
    $ openstack multiregion networking pod create --region-name RegionTwo --availability-zone az2
    

Configuration

  • 1 Create security groups.

    • Create security group and rules for load balancer management network.

      $ openstack --os-region-name CentralRegion security group create lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol icmp lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 80 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol icmpv6 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 22 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 80 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 9443 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      

      Note

      The output in the console is omitted.

    • Create security group and rules for the health manager

      $ openstack --os-region-name CentralRegion security group create lb-health-mgr-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol udp --dst-port 5555 --ethertype IPv6 --remote-ip ::/0 lb-health-mgr-sec-grp
      

      Note

      The output in the console is omitted.

  • 2 Configure LBaaS in node1

    • Create an amphora management network in CentralRegion

      $ openstack --os-region-name CentralRegion network create lb-mgmt-net1
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | 9c3bd3f7-b581-4686-b35a-434b2fe5c1d5 |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | lb-mgmt-net1                         |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1094                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in lb-mgmt-net1

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 192.168.10.0/24 --network lb-mgmt-net1 lb-mgmt-subnet1
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 192.168.10.2-192.168.10.254          |
      | cidr              | 192.168.10.0/24                      |
      | created_at        | 2019-01-01T06:31:10Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | 192.168.10.1                         |
      | host_routes       |                                      |
      | id                | 84562c3a-55be-4c0f-9e50-3a5206670077 |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | lb-mgmt-subnet1                      |
      | network_id        | 9c3bd3f7-b581-4686-b35a-434b2fe5c1d5 |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T06:31:10Z                 |
      +-------------------+--------------------------------------+
      
    • Create the health management interface for Octavia in RegionOne.

      $ id_and_mac=$(openstack --os-region-name CentralRegion port create --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --network lb-mgmt-net1 octavia-health-manager-region-one-listen-port | awk '/ id | mac_address / {print $4}')
      $ id_and_mac=($id_and_mac)
      $ MGMT_PORT_ID=${id_and_mac[0]}
      $ MGMT_PORT_MAC=${id_and_mac[1]}
      $ MGMT_PORT_IP=$(openstack --os-region-name RegionOne port show -f value -c fixed_ips $MGMT_PORT_ID | awk '{FS=",| "; gsub(",",""); gsub("'\''",""); for(i = 1; i <= NF; ++i) {if ($i ~ /^ip_address/) {n=index($i, "="); if (substr($i, n+1) ~ "\\.") print substr($i, n+1)}}}')
      $ openstack --os-region-name RegionOne port set --host $(hostname)  $MGMT_PORT_ID
      $ sudo ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- set Interface o-hm0 external-ids:skip_cleanup=true
      $ OCTAVIA_DHCLIENT_CONF=/etc/octavia/dhcp/dhclient.conf
      $ sudo ip link set dev o-hm0 address $MGMT_PORT_MAC
      $ sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF
      
      Listening on LPF/o-hm0/fa:16:3e:54:16:8e
      Sending on   LPF/o-hm0/fa:16:3e:54:16:8e
      Sending on   Socket/fallback
      DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 3 (xid=0xd3e7353)
      DHCPREQUEST of 192.168.10.194 on o-hm0 to 255.255.255.255 port 67 (xid=0x53733e0d)
      DHCPOFFER of 192.168.10.194 from 192.168.10.2
      DHCPACK of 192.168.10.194 from 192.168.10.2
      bound to 192.168.10.194 -- renewal in 42514 seconds.
      
      $ sudo iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
      

      Note

      As shown in the console output, the DHCP server allocates 192.168.10.194 as the IP of the health management interface, i.e., o-hm0. Hence, we need to modify the /etc/octavia/octavia.conf file so that Octavia is aware of it and uses the resources we just created, including the health management interface, the amphora security group and so on.

      Set the following options (an example value is given for each):

      • [health_manager] bind_ip: the IP of the health manager in RegionOne, e.g. 192.168.10.194
      • [health_manager] bind_port: the port the health manager listens on, e.g. 5555
      • [health_manager] controller_ip_port_list: the IP and port the health manager binds to in RegionOne, e.g. 192.168.10.194:5555
      • [controller_worker] amp_boot_network_list: the ID of the amphora management network in RegionOne, i.e., the ID of lb-mgmt-net1 in this doc (query neutron to obtain it)
      • [controller_worker] amp_secgroup_list: the ID of the security group created for amphora in the central region, i.e., the ID of lb-mgmt-sec-grp (query neutron to obtain it)
      • [neutron] service_name: the name of the neutron service in the keystone catalog, e.g. neutron
      • [neutron] endpoint: the central neutron endpoint if an override is necessary, e.g. http://192.168.57.9:20001/
      • [neutron] region_name: the region in the Identity service catalog to use for communication with the OpenStack services, e.g. CentralRegion
      • [neutron] endpoint_type: the endpoint type, e.g. public
      • [nova] service_name: the name of the nova service in the keystone catalog, e.g. nova
      • [nova] endpoint: a custom nova endpoint if an override is necessary, e.g. http://192.168.57.9/compute/v2.1
      • [nova] region_name: the region in the Identity service catalog to use for communication with the OpenStack services, e.g. RegionOne
      • [nova] endpoint_type: the endpoint type in the Identity service catalog to use for communication with the OpenStack services, e.g. public
      • [glance] service_name: the name of the glance service in the keystone catalog, e.g. glance
      • [glance] endpoint: a custom glance endpoint if an override is necessary, e.g. http://192.168.57.9/image
      • [glance] region_name: the region in the Identity service catalog to use for communication with the OpenStack services, e.g. RegionOne
      • [glance] endpoint_type: the endpoint type in the Identity service catalog to use for communication with the OpenStack services, e.g. public
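
      For reference, the corresponding part of /etc/octavia/octavia.conf on node1 would look roughly like the following. This is only a sketch: replace the placeholder IDs with the values returned by neutron, and adjust the IPs and endpoints to your own environment.

      [health_manager]
      bind_ip = 192.168.10.194
      bind_port = 5555
      controller_ip_port_list = 192.168.10.194:5555

      [controller_worker]
      # placeholders: use the IDs of lb-mgmt-net1 and lb-mgmt-sec-grp from neutron
      amp_boot_network_list = <lb-mgmt-net1-id>
      amp_secgroup_list = <lb-mgmt-sec-grp-id>

      [neutron]
      service_name = neutron
      endpoint = http://192.168.57.9:20001/
      region_name = CentralRegion
      endpoint_type = public

      [nova]
      service_name = nova
      endpoint = http://192.168.57.9/compute/v2.1
      region_name = RegionOne
      endpoint_type = public

      [glance]
      service_name = glance
      endpoint = http://192.168.57.9/image
      region_name = RegionOne
      endpoint_type = public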

      Restart all the services of Octavia in node1.

      $ sudo systemctl restart devstack@o-*
      
  • 2 Configure LBaaS in node2. If Octavia is deployed only in RegionOne, this step can be skipped.

    • Create an amphora management network in CentralRegion

      $ openstack --os-region-name CentralRegion network create lb-mgmt-net2
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | 6494d887-25a8-4b07-8422-93f7acc21ecd |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | lb-mgmt-net2                         |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1085                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in lb-mgmt-net2

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 192.168.20.0/24 --network lb-mgmt-net2 lb-mgmt-subnet2
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 192.168.20.2-192.168.20.254          |
      | cidr              | 192.168.20.0/24                      |
      | created_at        | 2019-01-01T06:53:28Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | 192.168.20.1                         |
      | host_routes       |                                      |
      | id                | de2e9e76-e3c8-490f-b030-4374b22c2d95 |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | lb-mgmt-subnet2                      |
      | network_id        | 6494d887-25a8-4b07-8422-93f7acc21ecd |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T06:53:28Z                 |
      +-------------------+--------------------------------------+
      
    • Create the health management interface for Octavia in RegionTwo.

      $ id_and_mac=$(openstack --os-region-name CentralRegion port create --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --network lb-mgmt-net2 octavia-health-manager-region-two-listen-port | awk '/ id | mac_address / {print $4}')
      $ id_and_mac=($id_and_mac)
      $ MGMT_PORT_ID=${id_and_mac[0]}
      $ MGMT_PORT_MAC=${id_and_mac[1]}
      $ MGMT_PORT_IP=$(openstack --os-region-name RegionTwo port show -f value -c fixed_ips $MGMT_PORT_ID | awk '{FS=",| "; gsub(",",""); gsub("'\''",""); for(i = 1; i <= NF; ++i) {if ($i ~ /^ip_address/) {n=index($i, "="); if (substr($i, n+1) ~ "\\.") print substr($i, n+1)}}}')
      $ openstack --os-region-name RegionTwo port set --host $(hostname) $MGMT_PORT_ID
      $ sudo ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- set Interface o-hm0 external-ids:skip_cleanup=true
      $ OCTAVIA_DHCLIENT_CONF=/etc/octavia/dhcp/dhclient.conf
      $ sudo ip link set dev o-hm0 address $MGMT_PORT_MAC
      $ sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF
      
      Listening on LPF/o-hm0/fa:16:3e:c0:bf:30
      Sending on   LPF/o-hm0/fa:16:3e:c0:bf:30
      Sending on   Socket/fallback
      DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 3 (xid=0xad6d3a1a)
      DHCPREQUEST of 192.168.20.3 on o-hm0 to 255.255.255.255 port 67 (xid=0x1a3a6dad)
      DHCPOFFER of 192.168.20.3 from 192.168.20.2
      DHCPACK of 192.168.20.3 from 192.168.20.2
      bound to 192.168.20.3 -- renewal in 37208 seconds.
      
      $ sudo iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
      

      Note

      The IP allocated by the DHCP server, i.e., 192.168.20.3 in this case, is the address the Octavia health manager binds to and listens on. Please note that it will be used in the configuration file of Octavia.

      Modify the /etc/octavia/octavia.conf in node2.

      • [health_manager] bind_ip: the IP of the health manager in RegionTwo, e.g. 192.168.20.3
      • [health_manager] bind_port: the port the health manager listens on in RegionTwo, e.g. 5555
      • [health_manager] controller_ip_port_list: the IP and port the health manager binds to in RegionTwo, e.g. 192.168.20.3:5555
      • [controller_worker] amp_boot_network_list: the ID of the amphora management network in RegionTwo, i.e., the ID of lb-mgmt-net2 in this doc (query neutron to obtain it)
      • [controller_worker] amp_secgroup_list: the ID of the security group created for amphora in the central region, i.e., the ID of lb-mgmt-sec-grp (query neutron to obtain it)
      • [neutron] service_name: the name of the neutron service in the keystone catalog, e.g. neutron
      • [neutron] endpoint: the central neutron endpoint if an override is necessary, e.g. http://192.168.57.9:20001/
      • [neutron] region_name: the region in the Identity service catalog to use for communication with the OpenStack services, e.g. CentralRegion
      • [neutron] endpoint_type: the endpoint type, e.g. public
      • [nova] service_name: the name of the nova service in the keystone catalog, e.g. nova
      • [nova] endpoint: a custom nova endpoint if an override is necessary, e.g. http://192.168.57.10/compute/v2.1
      • [nova] region_name: the region in the Identity service catalog to use for communication with the OpenStack services, e.g. RegionTwo
      • [nova] endpoint_type: the endpoint type in the Identity service catalog to use for communication with the OpenStack services, e.g. public
      • [glance] service_name: the name of the glance service in the keystone catalog, e.g. glance
      • [glance] endpoint: a custom glance endpoint if an override is necessary, e.g. http://192.168.57.10/image
      • [glance] region_name: the region in the Identity service catalog to use for communication with the OpenStack services, e.g. RegionTwo
      • [glance] endpoint_type: the endpoint type in the Identity service catalog to use for communication with the OpenStack services, e.g. public
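
      For reference, the corresponding part of /etc/octavia/octavia.conf on node2 would look roughly like the following. Again, this is only a sketch: replace the placeholder IDs with the values returned by neutron, and adjust the IPs and endpoints to your own environment.

      [health_manager]
      bind_ip = 192.168.20.3
      bind_port = 5555
      controller_ip_port_list = 192.168.20.3:5555

      [controller_worker]
      # placeholders: use the IDs of lb-mgmt-net2 and lb-mgmt-sec-grp from neutron
      amp_boot_network_list = <lb-mgmt-net2-id>
      amp_secgroup_list = <lb-mgmt-sec-grp-id>

      [neutron]
      service_name = neutron
      endpoint = http://192.168.57.9:20001/
      region_name = CentralRegion
      endpoint_type = public

      [nova]
      service_name = nova
      endpoint = http://192.168.57.10/compute/v2.1
      region_name = RegionTwo
      endpoint_type = public

      [glance]
      service_name = glance
      endpoint = http://192.168.57.10/image
      region_name = RegionTwo
      endpoint_type = public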

      Restart all the services of Octavia in node2.

      $ sudo systemctl restart devstack@o-*
      
    • By now, we have finished installing LBaaS.
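
      As an optional sanity check (a sketch; it assumes the Octavia CLI plugin that DevStack installs), you can verify that the Octavia API answers in each region where it was deployed. The list is expected to be empty at this point.

      $ openstack --os-region-name RegionOne loadbalancer list
      $ openstack --os-region-name RegionTwo loadbalancer list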

How to play

  • 1 LBaaS members in one network and in the same region

    Here we take VxLAN as an example.

    • Create net1 in CentralRegion

      $ openstack --os-region-name CentralRegion network create net1
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | 9dcdcb56-358f-40b1-9e3f-6ed6bae6db7d |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | net1                                 |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1102                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in net1

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.10.0/24 --gateway none --network net1 subnet1
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 10.0.10.1-10.0.10.254                |
      | cidr              | 10.0.10.0/24                         |
      | created_at        | 2019-01-01T07:22:45Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | None                                 |
      | host_routes       |                                      |
      | id                | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | subnet1                              |
      | network_id        | 9dcdcb56-358f-40b1-9e3f-6ed6bae6db7d |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T07:22:45Z                 |
      +-------------------+--------------------------------------+
      

      Note

      To enable adding instances as members reachable from the VIP, the amphora adds a new route table to route the traffic sent from the VIP to its gateway. However, in Tricircle, the gateway obtained from central neutron is not the real gateway in local neutron. As a result, we temporarily do not set any gateway for the subnet. We will remove this limitation in the future.

    • List all available flavors in RegionOne

      $ openstack --os-region-name RegionOne flavor list
      
      +----+-----------+-------+------+-----------+-------+-----------+
      | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+-----------+-------+------+-----------+-------+-----------+
      | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42 | m1.nano   |    64 |    0 |         0 |     1 | True      |
      | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84 | m1.micro  |   128 |    0 |         0 |     1 | True      |
      | c1 | cirros256 |   256 |    0 |         0 |     1 | True      |
      | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +----+-----------+-------+------+-----------+-------+-----------+
      
    • List all available images in RegionOne

      $ openstack --os-region-name RegionOne image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 471ed2cb-8004-4973-9210-b96463b2c668 | amphora-x64-haproxy      | active |
      | 85d165f0-bc7a-43d5-850b-4a8e89e57a66 | cirros-0.3.6-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
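
      The nova boot commands in the next step reference $image_id and $net1_id. One way to set them (a minimal sketch, using the cirros image listed above and the net1 network created in CentralRegion) is:

      $ image_id=$(openstack --os-region-name RegionOne image show cirros-0.3.6-x86_64-disk -f value -c id)
      $ net1_id=$(openstack --os-region-name CentralRegion network show net1 -f value -c id)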
      
    • Create two instances, i.e., backend1 and backend2, in RegionOne, which reside in subnet1.

      $ nova --os-region-name=RegionOne boot --flavor 1 --image $image_id --nic net-id=$net1_id backend1
      $ nova --os-region-name=RegionOne boot --flavor 1 --image $image_id --nic net-id=$net1_id backend2
      
      +--------------------------------------+-----------------------------------------------------------------+
      | Property                             | Value                                                           |
      +--------------------------------------+-----------------------------------------------------------------+
      | OS-DCF:diskConfig                    | MANUAL                                                          |
      | OS-EXT-AZ:availability_zone          |                                                                 |
      | OS-EXT-SRV-ATTR:host                 | -                                                               |
      | OS-EXT-SRV-ATTR:hostname             | backend1                                                        |
      | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                               |
      | OS-EXT-SRV-ATTR:instance_name        |                                                                 |
      | OS-EXT-SRV-ATTR:kernel_id            |                                                                 |
      | OS-EXT-SRV-ATTR:launch_index         | 0                                                               |
      | OS-EXT-SRV-ATTR:ramdisk_id           |                                                                 |
      | OS-EXT-SRV-ATTR:reservation_id       | r-0m1suyvm                                                      |
      | OS-EXT-SRV-ATTR:root_device_name     | -                                                               |
      | OS-EXT-SRV-ATTR:user_data            | -                                                               |
      | OS-EXT-STS:power_state               | 0                                                               |
      | OS-EXT-STS:task_state                | scheduling                                                      |
      | OS-EXT-STS:vm_state                  | building                                                        |
      | OS-SRV-USG:launched_at               | -                                                               |
      | OS-SRV-USG:terminated_at             | -                                                               |
      | accessIPv4                           |                                                                 |
      | accessIPv6                           |                                                                 |
      | adminPass                            | 7poPJnDxV3Mz                                                    |
      | config_drive                         |                                                                 |
      | created                              | 2019-01-01T07:30:26Z                                            |
      | description                          | -                                                               |
      | flavor:disk                          | 1                                                               |
      | flavor:ephemeral                     | 0                                                               |
      | flavor:extra_specs                   | {}                                                              |
      | flavor:original_name                 | m1.tiny                                                         |
      | flavor:ram                           | 512                                                             |
      | flavor:swap                          | 0                                                               |
      | flavor:vcpus                         | 1                                                               |
      | hostId                               |                                                                 |
      | host_status                          |                                                                 |
      | id                                   | d330f73f-2d78-4f59-8ea2-6fa1b878d6a5                            |
      | image                                | cirros-0.3.6-x86_64-disk (85d165f0-bc7a-43d5-850b-4a8e89e57a66) |
      | key_name                             | -                                                               |
      | locked                               | False                                                           |
      | metadata                             | {}                                                              |
      | name                                 | backend1                                                        |
      | os-extended-volumes:volumes_attached | []                                                              |
      | progress                             | 0                                                               |
      | security_groups                      | default                                                         |
      | status                               | BUILD                                                           |
      | tags                                 | []                                                              |
      | tenant_id                            | d3b83ed3f2504a8699c9528a2297fea7                                |
      | trusted_image_certificates           | -                                                               |
      | updated                              | 2019-01-01T07:30:27Z                                            |
      | user_id                              | fdf37c6259544a9294ae8463e9be063c                                |
      +--------------------------------------+-----------------------------------------------------------------+
      
      $ nova --os-region-name=RegionOne list
      
      +--------------------------------------+----------+--------+------------+-------------+------------------+
      | ID                                   | Name     | Status | Task State | Power State | Networks         |
      +--------------------------------------+----------+--------+------------+-------------+------------------+
      | d330f73f-2d78-4f59-8ea2-6fa1b878d6a5 | backend1 | ACTIVE | -          | Running     | net1=10.0.10.152 |
      | 72a4d0b0-88bc-41c5-9cb1-0965a5f3008f | backend2 | ACTIVE | -          | Running     | net1=10.0.10.176 |
      +--------------------------------------+----------+--------+------------+-------------+------------------+
      
    • Log in to the instances from the console with user ‘cirros’ and password ‘cubswin:)’. Then run the following commands to simulate a web server.

      Note

      If using cirros 0.4.0 or above, log in to the instances with user ‘cirros’ and password ‘gocubsgo’.

      $ sudo ip netns exec dhcp-$net1_id ssh cirros@10.0.10.152
      $ sudo ip netns exec dhcp-$net1_id ssh cirros@10.0.10.176
      
      $ MYIP=$(ifconfig eth0| grep 'inet addr'| awk -F: '{print $2}'| awk '{print $1}')
      $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
      

      The Octavia deployments in node1 and node2 are two standalone services; here we take RegionOne as an example.
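
      The following commands reference $subnet1_id, $backend1_ip and $backend2_ip. One way to set them (a sketch; the member addresses are taken from the "nova list" output above) is:

      $ subnet1_id=$(openstack --os-region-name CentralRegion subnet show subnet1 -f value -c id)
      $ backend1_ip=10.0.10.152   # address of backend1 shown by "nova list"
      $ backend2_ip=10.0.10.176   # address of backend2 shown by "nova list"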

    • Create a load balancer for subnet1 in RegionOne.

      $ openstack --os-region-name RegionOne loadbalancer create --name lb1 --vip-subnet-id $subnet1_id
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:37:46                  |
      | description         |                                      |
      | flavor              |                                      |
      | id                  | bbb5480a-a6ec-4cea-a77d-4872a94aca5c |
      | listeners           |                                      |
      | name                | lb1                                  |
      | operating_status    | OFFLINE                              |
      | pools               |                                      |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider            | amphora                              |
      | provisioning_status | PENDING_CREATE                       |
      | updated_at          | None                                 |
      | vip_address         | 10.0.10.189                          |
      | vip_network_id      | 9dcdcb56-358f-40b1-9e3f-6ed6bae6db7d |
      | vip_port_id         | 759370eb-5f50-4229-be7e-0ca7aefe04db |
      | vip_qos_policy_id   | None                                 |
      | vip_subnet_id       | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      +---------------------+--------------------------------------+
      

      Create a listener for the load balancer after the status of the load balancer is ‘ACTIVE’. Please note that it may take some time for the load balancer to become ‘ACTIVE’.

      $ openstack --os-region-name RegionOne loadbalancer list
      
      +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
      | id                                   | name | project_id                       | vip_address | provisioning_status | provider |
      +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
      | bbb5480a-a6ec-4cea-a77d-4872a94aca5c | lb1  | d3b83ed3f2504a8699c9528a2297fea7 | 10.0.10.189 | ACTIVE              | amphora  |
      +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
      
      $ openstack --os-region-name RegionOne loadbalancer listener create --protocol HTTP --protocol-port 80 --name listener1 lb1
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | True                                 |
      | connection_limit          | -1                                   |
      | created_at                | 2019-01-01T07:44:21                  |
      | default_pool_id           | None                                 |
      | default_tls_container_ref | None                                 |
      | description               |                                      |
      | id                        | ec9d2e51-25ab-4c50-83cb-15f726d366ec |
      | insert_headers            | None                                 |
      | l7policies                |                                      |
      | loadbalancers             | bbb5480a-a6ec-4cea-a77d-4872a94aca5c |
      | name                      | listener1                            |
      | operating_status          | OFFLINE                              |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol                  | HTTP                                 |
      | protocol_port             | 80                                   |
      | provisioning_status       | PENDING_CREATE                       |
      | sni_container_refs        | []                                   |
      | timeout_client_data       | 50000                                |
      | timeout_member_connect    | 5000                                 |
      | timeout_member_data       | 50000                                |
      | timeout_tcp_inspect       | 0                                    |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a pool for the listener after the status of the load balancer is ‘ACTIVE’.

      $ openstack --os-region-name RegionOne loadbalancer pool create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:46:21                  |
      | description         |                                      |
      | healthmonitor_id    |                                      |
      | id                  | 7560b064-cdbe-4fa2-ae50-f66ad67fb575 |
      | lb_algorithm        | ROUND_ROBIN                          |
      | listeners           | ec9d2e51-25ab-4c50-83cb-15f726d366ec |
      | loadbalancers       | bbb5480a-a6ec-4cea-a77d-4872a94aca5c |
      | members             |                                      |
      | name                | pool1                                |
      | operating_status    | OFFLINE                              |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol            | HTTP                                 |
      | provisioning_status | PENDING_CREATE                       |
      | session_persistence | None                                 |
      | updated_at          | None                                 |
      +---------------------+--------------------------------------+
      
    • Add two instances to the pool as members, after the status of the load balancer is ‘ACTIVE’.

      $  openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet1_id --address $backend1_ip  --protocol-port 80 pool1
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | address             | 10.0.10.152                          |
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:49:04                  |
      | id                  | 4e6ce567-0710-4a29-a98f-ab766e4963ab |
      | name                |                                      |
      | operating_status    | NO_MONITOR                           |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol_port       | 80                                   |
      | provisioning_status | PENDING_CREATE                       |
      | subnet_id           | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      | updated_at          | None                                 |
      | weight              | 1                                    |
      | monitor_port        | None                                 |
      | monitor_address     | None                                 |
      | backup              | False                                |
      +---------------------+--------------------------------------+
      
      $ openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet1_id --address $backend2_ip  --protocol-port 80 pool1
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | address             | 10.0.10.176                          |
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:50:06                  |
      | id                  | 1e8ab609-a7e9-44af-b37f-69b494b40d01 |
      | name                |                                      |
      | operating_status    | NO_MONITOR                           |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol_port       | 80                                   |
      | provisioning_status | PENDING_CREATE                       |
      | subnet_id           | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      | updated_at          | None                                 |
      | weight              | 1                                    |
      | monitor_port        | None                                 |
      | monitor_address     | None                                 |
      | backup              | False                                |
      +---------------------+--------------------------------------+
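
      $VIP below is the VIP address of lb1, i.e., 10.0.10.189 in this example. One way to set it (a sketch) is:

      $ VIP=$(openstack --os-region-name RegionOne loadbalancer show lb1 -f value -c vip_address)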
      
    • Verify load balancing. Request the VIP twice.

      $ sudo ip netns exec dhcp-$net1_id curl -v $VIP
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.152
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.176
      * Closing connection 0
      
  • 2 LBaaS members in one network but in different regions

    • List all available flavors in RegionTwo

      $ openstack --os-region-name RegionTwo flavor list
      
      +----+-----------+-------+------+-----------+-------+-----------+
      | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+-----------+-------+------+-----------+-------+-----------+
      | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42 | m1.nano   |    64 |    0 |         0 |     1 | True      |
      | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84 | m1.micro  |   128 |    0 |         0 |     1 | True      |
      | c1 | cirros256 |   256 |    0 |         0 |     1 | True      |
      | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +----+-----------+-------+------+-----------+-------+-----------+
      
    • List all available images in RegionTwo

      $ openstack --os-region-name RegionTwo image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 471ed2cb-8004-4973-9210-b96463b2c668 | amphora-x64-haproxy      | active |
      | 85d165f0-bc7a-43d5-850b-4a8e89e57a66 | cirros-0.3.6-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
      
    • Create an instance in RegionTwo, which resides in subnet1

      $ nova --os-region-name=RegionTwo boot --flavor 1 --image $image_id --nic net-id=$net1_id backend3
      
      +-------------------------------------+-----------------------------------------------------------------+
      | Field                               | Value                                                           |
      +-------------------------------------+-----------------------------------------------------------------+
      | OS-DCF:diskConfig                   | MANUAL                                                          |
      | OS-EXT-AZ:availability_zone         | az2                                                             |
      | OS-EXT-SRV-ATTR:host                | None                                                            |
      | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                            |
      | OS-EXT-SRV-ATTR:instance_name       |                                                                 |
      | OS-EXT-STS:power_state              | NOSTATE                                                         |
      | OS-EXT-STS:task_state               | scheduling                                                      |
      | OS-EXT-STS:vm_state                 | building                                                        |
      | OS-SRV-USG:launched_at              | None                                                            |
      | OS-SRV-USG:terminated_at            | None                                                            |
      | accessIPv4                          |                                                                 |
      | accessIPv6                          |                                                                 |
      | addresses                           |                                                                 |
      | adminPass                           | rpV9MLzPGSvB                                                    |
      | config_drive                        |                                                                 |
      | created                             | 2019-01-01T07:56:41Z                                            |
      | flavor                              | m1.tiny (1)                                                     |
      | hostId                              |                                                                 |
      | id                                  | b27539fb-4c98-4f0c-b3f8-bc6744659f67                            |
      | image                               | cirros-0.3.6-x86_64-disk (85d165f0-bc7a-43d5-850b-4a8e89e57a66) |
      | key_name                            | None                                                            |
      | name                                | backend3                                                        |
      | progress                            | 0                                                               |
      | project_id                          | d3b83ed3f2504a8699c9528a2297fea7                                |
      | properties                          |                                                                 |
      | security_groups                     | name='default'                                                  |
      | status                              | BUILD                                                           |
      | updated                             | 2019-01-01T07:56:42Z                                            |
      | user_id                             | fdf37c6259544a9294ae8463e9be063c                                |
      | volumes_attached                    |                                                                 |
      +-------------------------------------+-----------------------------------------------------------------+
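
      The following commands reference $backend3_ip, the address backend3 obtained in net1. One way to set it (a sketch; it assumes backend3 is attached to a single network, so the "addresses" field has the form net1=<ip>) is:

      $ backend3_ip=$(openstack --os-region-name RegionTwo server show backend3 -f value -c addresses | cut -d '=' -f 2)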
      
    • Log in to the instance from the console with user ‘cirros’ and password ‘cubswin:)’. Then run the following commands to simulate a web server.

      $ sudo ip netns exec dhcp-$net1_id ssh cirros@$backend3_ip
      
      $ MYIP=$(ifconfig eth0| grep 'inet addr'| awk -F: '{print $2}'| awk '{print $1}')
      $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
      
    • Add backend3 to the pool as a member, after the status of the load balancer is ‘ACTIVE’.

      $ openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet1_id --address $backend3_ip --protocol-port 80 pool1
      
    • Verify load balancing. Request the VIP three times.

      Note

      Please note that if the subnet has already been created in the region, as in the cases before this step, either the unique name or the ID of the subnet can be used as the hint. But if the subnet has not been created yet in that region, as in the case of backend3, users are required to use the subnet ID as the hint instead of the subnet name, because the subnet is not created in RegionOne and local neutron needs to query central neutron for the subnet by ID.

      $ sudo ip netns exec dhcp-$net1_id curl -v $VIP
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.152
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.176
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.186
      * Closing connection 0
      
  • 3 LBaaS members in different networks and different regions

    • Create net2 in CentralRegion

      $ openstack --os-region-name CentralRegion network create net2
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | f0ea9608-2d6e-4272-a596-2dc3a725eddc |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | net2                                 |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1088                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in net2

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.20.0/24 --gateway none --network net2 subnet2
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 10.0.20.1-10.0.20.254                |
      | cidr              | 10.0.20.0/24                         |
      | created_at        | 2019-01-01T07:59:53Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | None                                 |
      | host_routes       |                                      |
      | id                | 4c05a73d-fa1c-46a9-982f-6683b0d1cb2a |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | subnet2                              |
      | network_id        | f0ea9608-2d6e-4272-a596-2dc3a725eddc |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T07:59:53Z                 |
      +-------------------+--------------------------------------+
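
      Later commands reference $net2_id, $subnet2_id and $backend4_ip. The network and subnet IDs can be set now (a sketch); $backend4_ip is set after backend4 is booted below.

      $ net2_id=$(openstack --os-region-name CentralRegion network show net2 -f value -c id)
      $ subnet2_id=$(openstack --os-region-name CentralRegion subnet show subnet2 -f value -c id)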
      
    • List all available flavors in RegionTwo

      $ openstack --os-region-name RegionTwo flavor list
      
      +----+-----------+-------+------+-----------+-------+-----------+
      | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+-----------+-------+------+-----------+-------+-----------+
      | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42 | m1.nano   |    64 |    0 |         0 |     1 | True      |
      | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84 | m1.micro  |   128 |    0 |         0 |     1 | True      |
      | c1 | cirros256 |   256 |    0 |         0 |     1 | True      |
      | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +----+-----------+-------+------+-----------+-------+-----------+
      
    • List all available images in RegionTwo

      $ openstack --os-region-name RegionTwo image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 471ed2cb-8004-4973-9210-b96463b2c668 | amphora-x64-haproxy      | active |
      | 85d165f0-bc7a-43d5-850b-4a8e89e57a66 | cirros-0.3.6-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
      
    • Create an instance in RegionTwo, which resides in subnet2

      $ nova --os-region-name=RegionTwo boot --flavor 1 --image $image_id --nic net-id=$net2_id backend4
      
      +-------------------------------------+-----------------------------------------------------------------+
      | Field                               | Value                                                           |
      +-------------------------------------+-----------------------------------------------------------------+
      | OS-DCF:diskConfig                   | MANUAL                                                          |
      | OS-EXT-AZ:availability_zone         | az2                                                             |
      | OS-EXT-SRV-ATTR:host                | None                                                            |
      | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                            |
      | OS-EXT-SRV-ATTR:instance_name       |                                                                 |
      | OS-EXT-STS:power_state              | NOSTATE                                                         |
      | OS-EXT-STS:task_state               | scheduling                                                      |
      | OS-EXT-STS:vm_state                 | building                                                        |
      | OS-SRV-USG:launched_at              | None                                                            |
      | OS-SRV-USG:terminated_at            | None                                                            |
      | accessIPv4                          |                                                                 |
      | accessIPv6                          |                                                                 |
      | addresses                           |                                                                 |
      | adminPass                           | jHY5xdqgxezb                                                    |
      | config_drive                        |                                                                 |
      | created                             | 2019-01-01T08:02:50Z                                            |
      | flavor                              | m1.tiny (1)                                                     |
      | hostId                              |                                                                 |
      | id                                  | 43bcdc80-6492-4a88-90dd-a979c73219a1                            |
      | image                               | cirros-0.3.6-x86_64-disk (85d165f0-bc7a-43d5-850b-4a8e89e57a66) |
      | key_name                            | None                                                            |
      | name                                | backend4                                                        |
      | progress                            | 0                                                               |
      | project_id                          | d3b83ed3f2504a8699c9528a2297fea7                                |
      | properties                          |                                                                 |
      | security_groups                     | name='default'                                                  |
      | status                              | BUILD                                                           |
      | updated                             | 2019-01-01T08:02:51Z                                            |
      | user_id                             | fdf37c6259544a9294ae8463e9be063c                                |
      | volumes_attached                    |                                                                 |
      +-------------------------------------+-----------------------------------------------------------------+
      
    • Log in to the instance console with user ‘cirros’ and password ‘cubswin:)’. Then run the following commands to simulate a web server.

      $ sudo ip netns exec dhcp-$net2_id ssh cirros@$backend4_ip
      
      $ MYIP=$(ifconfig eth0| grep 'inet addr'| awk -F: '{print $2}'| awk '{print $1}')
      $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
      
    • Add the instance to the pool as a member, after the status of the load balancer is ‘ACTIVE’.

      $ openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet2_id --address $backend4_ip --protocol-port 80 pool1
      
    • Verify load balancing. Request the VIP four times.

      $ sudo ip netns exec dhcp-$net1_id curl -v $VIP
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.152
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.176
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.186
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.20.64
      * Closing connection 0
      

Installation guide for LBaaS with nova cell v2 in Tricircle

Note

Since Octavia does not support multi-region scenarios, some modifications are required to install the Tricircle and Octavia in multiple pods. We will keep updating this document so as to eventually support automatic installation and testing for the Tricircle and Octavia in multiple regions.

Note

Multi-cell support of Nova cell v2 is under development. DevStack doesn’t currently support multi-cell deployment, so the steps discussed in this document may not seem that elegant. We will keep updating this document according to the progress of multi-cell development by the Nova team.

Setup & Installation

  • 1 For node1 in RegionOne, clone the code from the Octavia repository to /opt/stack/. Then make some changes to Octavia so that we can build the management network in multiple regions manually. The lines to comment out are listed below; a combined sed sketch is given after this list.

    • First, comment the following lines in the octavia_init function in octavia/devstack/plugin.sh .

      Line 586-588 :

      • build_mgmt_network

      • OCTAVIA_AMP_NETWORK_ID=$(openstack network show lb-mgmt-net -f value -c id)

      • iniset $OCTAVIA_CONF controller_worker amp_boot_network_list ${OCTAVIA_AMP_NETWORK_ID}

      Line 593-595 :

      • if is_service_enabled tempest; then

      • configure_octavia_tempest ${OCTAVIA_AMP_NETWORK_ID}

      • fi

      Line 602-604 :

      • if is_service_enabled tempest; then

      • configure_octavia_tempest ${OCTAVIA_AMP_NETWORK_ID}

      • fi

      Line 610 :

      • create_mgmt_network_interface

      Line 612 :

      • configure_lb_mgmt_sec_grp

    • Second, comment the following three lines in the octavia_start function in octavia/devstack/plugin.sh .

      Line 465-467 :

      • if ! ps aux | grep -q [o]-hm0 && [ $OCTAVIA_NODE != 'api' ] ; then

      • sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF

      • fi
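
    A minimal sketch of the edits above using sed is given below. The line numbers are
    only valid for the plugin.sh version referenced above and may differ in your Octavia
    checkout, so verify them before running.

      # Comment out the listed lines in octavia_init (586-588, 593-595, 602-604, 610 and 612)
      # and in octavia_start (465-467); adjust the numbers if your checkout differs.
      sed -i '586,588s/^/#/;593,595s/^/#/;602,604s/^/#/;610s/^/#/;612s/^/#/' /opt/stack/octavia/devstack/plugin.sh
      sed -i '465,467s/^/#/' /opt/stack/octavia/devstack/plugin.sh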

  • 2 Follow the “Multi-pod Installation with DevStack” document to prepare your local.conf for node1 in RegionOne, and add the following lines before installation. Then start DevStack in node1.

    TRICIRCLE_DEPLOY_WITH_CELL=True
    
    enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
    enable_plugin octavia https://github.com/openstack/octavia.git
    ENABLED_SERVICES+=,q-lbaasv2
    ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
    
  • 3 If users only want to deploy Octavia in RegionOne, this step and the next one can be skipped. After the DevStack installation in node1 is completed, clone the code from the Octavia repository to /opt/stack/ on node2 in RegionTwo. Here we need to modify plugin.sh in five sub-steps; a combined sed sketch for the text replacements follows the list.

    • First, since Keystone is installed in RegionOne and shared by other regions, we need to comment all add_load-balancer_roles lines in the octavia_init function in octavia/devstack/plugin.sh .

      Line 597 and Line 606 :

      • add_load-balancer_roles

    • Second, as in step 1, comment out the fourteen lines that create networking resources in the octavia_init function.

    • Third, replace all ‘openstack keypair’ with ‘openstack --os-region-name=$REGION_NAME keypair’.

    • Fourth, replace all ‘openstack image’ with ‘openstack --os-region-name=$REGION_NAME image’.

    • Fifth, replace all ‘openstack flavor’ with ‘openstack --os-region-name=$REGION_NAME flavor’.
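
    A minimal sketch of the third, fourth and fifth sub-steps using sed (the path assumes
    Octavia was cloned to /opt/stack/octavia; adjust it otherwise):

      # Prefix the keypair, image and flavor calls with the region option; $REGION_NAME is
      # kept literal so that plugin.sh expands it at run time.
      sed -i 's/openstack keypair/openstack --os-region-name=$REGION_NAME keypair/g' /opt/stack/octavia/devstack/plugin.sh
      sed -i 's/openstack image/openstack --os-region-name=$REGION_NAME image/g' /opt/stack/octavia/devstack/plugin.sh
      sed -i 's/openstack flavor/openstack --os-region-name=$REGION_NAME flavor/g' /opt/stack/octavia/devstack/plugin.sh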

  • 4 Follow the “Multi-pod Installation with DevStack” document to prepare your local.conf for node2 in RegionTwo, and add the following lines before installation. Then start DevStack in node2.

    TRICIRCLE_DEPLOY_WITH_CELL=True
    
    enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
    enable_plugin octavia https://github.com/openstack/octavia.git
    ENABLED_SERVICES+=,q-lbaasv2
    ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
    
  • 5 After DevStack successfully starts, we must create environment variables for the admin user and use the admin project, since the Octavia controller uses the admin account to query and use the management network and security groups created in the following steps.

    $ source openrc admin admin
    
  • 6 Then unset the region name environment variable, so that the following commands can be issued to a specified region as needed.

    $ unset OS_REGION_NAME
    

Note

After running DevStack in both nodes, a multi-cell environment will be prepared: there is one CentralRegion, where the Nova API and central Neutron will be registered. Nova has two cells: node1 belongs to cell1, node2 belongs to cell2, and each cell will be configured to use a dedicated local Neutron. For cell1, it’s the RegionOne Neutron in node1; for cell2, it’s the RegionTwo Neutron in node2 (you can set the region name in local.conf to make the name more friendly). End users can access the CentralRegion endpoints of Nova and Neutron to experience the integration of Nova cell v2 and the Tricircle.
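
To sanity-check the cell layout described above, you can list the registered cells on node1. This is only a quick verification; at this point cell0 and cell1 exist, and cell2 is added in a later step.

    nova-manage cell_v2 list_cells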

  • 7 Stop the following services in node2:

    systemctl stop devstack@n-sch.service
    systemctl stop devstack@n-super-cond.service
    systemctl stop devstack@n-api.service
    

    If the devstack@n-api-meta.service service exists, stop it as well:

    systemctl stop devstack@n-api-meta.service
    

Note

Actually, for cell v2 only one Nova API is required. We enable n-api in node2 because we need DevStack to help us create the necessary cell database. If n-api is disabled, neither the API database nor the cell database will be created.

  • 8 In node2, run the following command:

    mysql -u $user -p $password -D nova_cell1 -e 'select host, mapped from compute_nodes'
    

    This command returns one row, showing that the host of node2 is already mapped:

    +--------+--------+
    | host   | mapped |
    +--------+--------+
    | stack2 |      1 |
    +--------+--------+
    

    This host is registered to the Nova API in node2, which we have already stopped. We need to update this row to set “mapped” to 0:

    mysql -u $user -p $password -D nova_cell1 -e 'update compute_nodes set mapped = 0 where host = "stack2"'
    mysql -u $user -p $password -D nova_cell1 -e 'select host, mapped from compute_nodes'
    
    +--------+--------+
    | host   | mapped |
    +--------+--------+
    | stack2 |      0 |
    +--------+--------+
    

    Then we can register this host again in the next step.

  • 9 In node1, run the following commands to register the new cell:

    nova-manage cell_v2 create_cell --name cell2 \
      --transport-url rabbit://$rabbit_user:$rabbit_passwd@$node2_ip:5672/nova_cell1 \
      --database_connection mysql+pymysql://$db_user:$db_passwd@$node2_ip/nova_cell1?charset=utf8
    
    nova-manage cell_v2 discover_hosts
    

    Then you can see that the new cell and host have been added to the database:

    mysql -u $user -p $password -D nova_api -e 'select cell_id, host from host_mappings'
    
    +---------+--------+
    | cell_id | host   |
    +---------+--------+
    |       2 | stack1 |
    |       3 | stack2 |
    +---------+--------+
    
    mysql -u $user -p $password -D nova_api -e 'select id, name from cell_mappings'
    
    +----+-------+
    | id | name  |
    +----+-------+
    |  1 | cell0 |
    |  2 | cell1 |
    |  3 | cell2 |
    +----+-------+
    
  • 10 In node1, run the following commands:

    systemctl restart devstack@n-sch.service
    
  • 11 In node1, check if compute services in both hosts are registered:

    openstack --os-region-name CentralRegion compute service list
    
    +----+------------------+--------+----------+---------+-------+----------------------------+
    | ID | Binary           | Host   | Zone     | Status  | State | Updated At                 |
    +----+------------------+--------+----------+---------+-------+----------------------------+
    |  3 | nova-scheduler   | stack1 | internal | enabled | up    | 2019-01-01T05:31:31.000000 |
    |  5 | nova-consoleauth | stack1 | internal | enabled | up    | 2019-01-01T05:31:37.000000 |
    |  7 | nova-conductor   | stack1 | internal | enabled | up    | 2019-01-01T05:31:30.000000 |
    |  1 | nova-conductor   | stack1 | internal | enabled | up    | 2019-01-01T05:31:38.000000 |
    |  3 | nova-compute     | stack1 | nova     | enabled | up    | 2019-01-01T05:31:38.000000 |
    |  1 | nova-conductor   | stack2 | internal | enabled | up    | 2019-01-01T05:31:36.000000 |
    |  3 | nova-compute     | stack2 | nova     | enabled | up    | 2019-01-01T05:31:31.000000 |
    +----+------------------+--------+----------+---------+-------+----------------------------+
    
    stack1 has two nova-conductor services because one of them is a super
    conductor service.

    After you run "compute service list" in this step, if you only see services
    in node1, like:

    +----+------------------+--------+----------+---------+-------+----------------------------+
    | ID | Binary           | Host   | Zone     | Status  | State | Updated At                 |
    +----+------------------+--------+----------+---------+-------+----------------------------+
    |  1 | nova-conductor   | stack1 | internal | enabled | up    | 2019-01-01T05:30:58.000000 |
    |  3 | nova-compute     | stack1 | nova     | enabled | up    | 2019-01-01T05:30:58.000000 |
    |  3 | nova-scheduler   | stack1 | internal | enabled | up    | 2019-01-01T05:31:01.000000 |
    |  5 | nova-consoleauth | stack1 | internal | enabled | up    | 2019-01-01T05:30:57.000000 |
    |  7 | nova-conductor   | stack1 | internal | enabled | up    | 2019-01-01T05:31:00.000000 |
    +----+------------------+--------+----------+---------+-------+----------------------------+

    then, though the new cell has been registered in the database, the running
    n-api process in node1 may not recognize it. We found that restarting n-api
    solves this problem.
    
  • 12 Create two aggregates and put one host in each aggregate:

    nova --os-region-name CentralRegion aggregate-create ag1 az1
    nova --os-region-name CentralRegion aggregate-create ag2 az2
    nova --os-region-name CentralRegion aggregate-add-host ag1 stack1
    nova --os-region-name CentralRegion aggregate-add-host ag2 stack2
    
  • 13 Create pods using the Tricircle client:

    openstack --os-region-name CentralRegion multiregion networking pod create --region-name CentralRegion
    openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionOne --availability-zone az1
    openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionTwo --availability-zone az2
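
    Optionally verify the result by listing the pods. This assumes the pod list command is
    available in your python-tricircleclient version.

    openstack --os-region-name CentralRegion multiregion networking pod list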
    

Configuration

  • 1 Create security groups.

    • Create security group and rules for load balancer management network.

      $ openstack --os-region-name CentralRegion security group create lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol icmp lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 80 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol icmpv6 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 22 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 80 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol tcp --dst-port 9443 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
      $ openstack --os-region-name RegionOne security group show $lb-mgmt-sec-grp_ID
      

      Note

      The output in the console is omitted.

    • Create security group and rules for the health manager

      $ openstack --os-region-name CentralRegion security group create lb-health-mgr-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
      $ openstack --os-region-name CentralRegion security group rule create --protocol udp --dst-port 5555 --ethertype IPv6 --remote-ip ::/0 lb-health-mgr-sec-grp
      

      Note

      The output in the console is omitted.

  • 2 Configure LBaaS in node1

    • Create an amphora management network in CentralRegion

      $ openstack --os-region-name CentralRegion network create lb-mgmt-net1
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | 9c3bd3f7-b581-4686-b35a-434b2fe5c1d5 |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | lb-mgmt-net1                         |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1094                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in lb-mgmt-net1

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 192.168.10.0/24 --network lb-mgmt-net1 lb-mgmt-subnet1
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 192.168.10.2-192.168.10.254          |
      | cidr              | 192.168.10.0/24                      |
      | created_at        | 2019-01-01T06:31:10Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | 192.168.10.1                         |
      | host_routes       |                                      |
      | id                | 84562c3a-55be-4c0f-9e50-3a5206670077 |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | lb-mgmt-subnet1                      |
      | network_id        | 9c3bd3f7-b581-4686-b35a-434b2fe5c1d5 |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T06:31:10Z                 |
      +-------------------+--------------------------------------+
      
    • Create the health management interface for Octavia in RegionOne.

      $ id_and_mac=$(openstack --os-region-name CentralRegion port create --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --network lb-mgmt-net1 octavia-health-manager-region-one-listen-port | awk '/ id | mac_address / {print $4}')
      $ id_and_mac=($id_and_mac)
      $ MGMT_PORT_ID=${id_and_mac[0]}
      $ MGMT_PORT_MAC=${id_and_mac[1]}
      $ MGMT_PORT_IP=$(openstack --os-region-name RegionOne port show -f value -c fixed_ips $MGMT_PORT_ID | awk '{FS=",| "; gsub(",",""); gsub("'\''",""); for(i = 1; i <= NF; ++i) {if ($i ~ /^ip_address/) {n=index($i, "="); if (substr($i, n+1) ~ "\\.") print substr($i, n+1)}}}')
      $ openstack --os-region-name RegionOne port set --host $(hostname)  $MGMT_PORT_ID
      $ sudo ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- set Interface o-hm0 external-ids:skip_cleanup=true
      $ OCTAVIA_DHCLIENT_CONF=/etc/octavia/dhcp/dhclient.conf
      $ sudo ip link set dev o-hm0 address $MGMT_PORT_MAC
      $ sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF
      
      Listening on LPF/o-hm0/fa:16:3e:54:16:8e
      Sending on   LPF/o-hm0/fa:16:3e:54:16:8e
      Sending on   Socket/fallback
      DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 3 (xid=0xd3e7353)
      DHCPREQUEST of 192.168.10.194 on o-hm0 to 255.255.255.255 port 67 (xid=0x53733e0d)
      DHCPOFFER of 192.168.10.194 from 192.168.10.2
      DHCPACK of 192.168.10.194 from 192.168.10.2
      bound to 192.168.10.194 -- renewal in 42514 seconds.
      
      $ sudo iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
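
      To confirm that o-hm0 actually obtained the address shown above, you can inspect the
      interface:

      $ ip addr show o-hm0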
      

      Note

      As shown in the console, the DHCP server allocates 192.168.10.194 as the IP of the health management interface, i.e., o-hm0. Hence, we need to modify the /etc/octavia/octavia.conf file to make Octavia aware of it and use the resources we just created, including the health management interface, the amphora security group and so on.

      Each option below is listed with its description and an example value.

      [health_manager] bind_ip
          the IP of the health manager in RegionOne (example: 192.168.10.194)

      [health_manager] bind_port
          the port the health manager listens on (example: 5555)

      [health_manager] controller_ip_port_list
          the IP and port that the health manager binds to in RegionOne
          (example: 192.168.10.194:5555)

      [controller_worker] amp_boot_network_list
          the ID of the amphora management network in RegionOne
          (example: query neutron to obtain it, i.e., the ID of lb-mgmt-net1 in this doc)

      [controller_worker] amp_secgroup_list
          the ID of the security group created for amphora in the central region
          (example: query neutron to obtain it, i.e., the ID of lb-mgmt-sec-grp)

      [neutron] service_name
          the name of the neutron service in the keystone catalog (example: neutron)

      [neutron] endpoint
          central neutron endpoint if override is necessary
          (example: http://192.168.57.9:20001/)

      [neutron] region_name
          region in the Identity service catalog to use for communication with the
          OpenStack services (example: CentralRegion)

      [neutron] endpoint_type
          endpoint type (example: public)

      [nova] service_name
          the name of the nova service in the keystone catalog (example: nova)

      [nova] endpoint
          custom nova endpoint if override is necessary
          (example: http://192.168.57.9/compute/v2.1)

      [nova] region_name
          region in the Identity service catalog to use for communication with the
          OpenStack services (example: RegionOne)

      [nova] endpoint_type
          endpoint type in the Identity service catalog to use for communication with
          the OpenStack services (example: public)

      [glance] service_name
          the name of the glance service in the keystone catalog (example: glance)

      [glance] endpoint
          custom glance endpoint if override is necessary
          (example: http://192.168.57.9/image)

      [glance] region_name
          region in the Identity service catalog to use for communication with the
          OpenStack services (example: RegionOne)

      [glance] endpoint_type
          endpoint type in the Identity service catalog to use for communication with
          the OpenStack services (example: public)
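
      Putting the options above together, a minimal octavia.conf sketch for node1 could
      look like the following. The network and security group IDs are placeholders that
      you must replace with the IDs obtained from neutron, and the endpoints are only
      examples for this deployment.

      [health_manager]
      bind_ip = 192.168.10.194
      bind_port = 5555
      controller_ip_port_list = 192.168.10.194:5555

      [controller_worker]
      # Replace with the ID of lb-mgmt-net1 and the ID of lb-mgmt-sec-grp.
      amp_boot_network_list = <lb-mgmt-net1-id>
      amp_secgroup_list = <lb-mgmt-sec-grp-id>

      [neutron]
      service_name = neutron
      endpoint = http://192.168.57.9:20001/
      region_name = CentralRegion
      endpoint_type = public

      [nova]
      service_name = nova
      endpoint = http://192.168.57.9/compute/v2.1
      region_name = RegionOne
      endpoint_type = public

      [glance]
      service_name = glance
      endpoint = http://192.168.57.9/image
      region_name = RegionOne
      endpoint_type = public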

      Restart all the services of Octavia in node1.

      $ sudo systemctl restart devstack@o-*
      
  • 3 If users only deploy Octavia in RegionOne, this step can be skipped. Otherwise, configure LBaaS in node2.

    • Create an amphora management network in CentralRegion

      $ openstack --os-region-name CentralRegion network create lb-mgmt-net2
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | 6494d887-25a8-4b07-8422-93f7acc21ecd |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | lb-mgmt-net2                         |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1085                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in lb-mgmt-net2

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 192.168.20.0/24 --network lb-mgmt-net2 lb-mgmt-subnet2
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 192.168.20.2-192.168.20.254          |
      | cidr              | 192.168.20.0/24                      |
      | created_at        | 2019-01-01T06:53:28Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | 192.168.20.1                         |
      | host_routes       |                                      |
      | id                | de2e9e76-e3c8-490f-b030-4374b22c2d95 |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | lb-mgmt-subnet2                      |
      | network_id        | 6494d887-25a8-4b07-8422-93f7acc21ecd |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T06:53:28Z                 |
      +-------------------+--------------------------------------+
      
    • Create the health management interface for Octavia in RegionTwo.

      $ id_and_mac=$(openstack --os-region-name CentralRegion port create --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --network lb-mgmt-net2 octavia-health-manager-region-two-listen-port | awk '/ id | mac_address / {print $4}')
      $ id_and_mac=($id_and_mac)
      $ MGMT_PORT_ID=${id_and_mac[0]}
      $ MGMT_PORT_MAC=${id_and_mac[1]}
      $ MGMT_PORT_IP=$(openstack --os-region-name RegionTwo port show -f value -c fixed_ips $MGMT_PORT_ID | awk '{FS=",| "; gsub(",",""); gsub("'\''",""); for(i = 1; i <= NF; ++i) {if ($i ~ /^ip_address/) {n=index($i, "="); if (substr($i, n+1) ~ "\\.") print substr($i, n+1)}}}')
      $ openstack --os-region-name RegionTwo port set --host $(hostname) $MGMT_PORT_ID
      $ sudo ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- set Interface o-hm0 external-ids:skip_cleanup=true
      $ OCTAVIA_DHCLIENT_CONF=/etc/octavia/dhcp/dhclient.conf
      $ sudo ip link set dev o-hm0 address $MGMT_PORT_MAC
      $ sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF
      
      Listening on LPF/o-hm0/fa:16:3e:c0:bf:30
      Sending on   LPF/o-hm0/fa:16:3e:c0:bf:30
      Sending on   Socket/fallback
      DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 3 (xid=0xad6d3a1a)
      DHCPREQUEST of 192.168.20.3 on o-hm0 to 255.255.255.255 port 67 (xid=0x1a3a6dad)
      DHCPOFFER of 192.168.20.3 from 192.168.20.2
      DHCPACK of 192.168.20.3 from 192.168.20.2
      bound to 192.168.20.3 -- renewal in 37208 seconds.
      
      $ sudo iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
      

      Note

      The IP allocated by the DHCP server, i.e., 192.168.20.3 in this case, is bound to and listened on by the health manager of Octavia. Please note that it will be used in the configuration file of Octavia.

      Modify the /etc/octavia/octavia.conf in node2.

      As for node1, each option below is listed with its description and an example value.

      [health_manager] bind_ip
          the IP of the health manager in RegionTwo (example: 192.168.20.3)

      [health_manager] bind_port
          the port the health manager listens on in RegionTwo (example: 5555)

      [health_manager] controller_ip_port_list
          the IP and port that the health manager binds to in RegionTwo
          (example: 192.168.20.3:5555)

      [controller_worker] amp_boot_network_list
          the ID of the amphora management network in RegionTwo
          (example: query neutron to obtain it, i.e., the ID of lb-mgmt-net2 in this doc)

      [controller_worker] amp_secgroup_list
          the ID of the security group created for amphora in the central region
          (example: query neutron to obtain it, i.e., the ID of lb-mgmt-sec-grp)

      [neutron] service_name
          the name of the neutron service in the keystone catalog (example: neutron)

      [neutron] endpoint
          central neutron endpoint if override is necessary
          (example: http://192.168.57.9:20001/)

      [neutron] region_name
          region in the Identity service catalog to use for communication with the
          OpenStack services (example: CentralRegion)

      [neutron] endpoint_type
          endpoint type (example: public)

      [nova] service_name
          the name of the nova service in the keystone catalog (example: nova)

      [nova] endpoint
          custom nova endpoint if override is necessary
          (example: http://192.168.57.10/compute/v2.1)

      [nova] region_name
          region in the Identity service catalog to use for communication with the
          OpenStack services (example: RegionTwo)

      [nova] endpoint_type
          endpoint type in the Identity service catalog to use for communication with
          the OpenStack services (example: public)

      [glance] service_name
          the name of the glance service in the keystone catalog (example: glance)

      [glance] endpoint
          custom glance endpoint if override is necessary
          (example: http://192.168.57.10/image)

      [glance] region_name
          region in the Identity service catalog to use for communication with the
          OpenStack services (example: RegionTwo)

      [glance] endpoint_type
          endpoint type in the Identity service catalog to use for communication with
          the OpenStack services (example: public)
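
      A matching octavia.conf sketch for node2 would mirror the node1 sketch shown
      earlier, with the node2-specific values substituted. Again, the IDs are
      placeholders to be replaced with the IDs obtained from neutron.

      [health_manager]
      bind_ip = 192.168.20.3
      bind_port = 5555
      controller_ip_port_list = 192.168.20.3:5555

      [controller_worker]
      # Replace with the ID of lb-mgmt-net2 and the ID of lb-mgmt-sec-grp.
      amp_boot_network_list = <lb-mgmt-net2-id>
      amp_secgroup_list = <lb-mgmt-sec-grp-id>

      [neutron]
      service_name = neutron
      endpoint = http://192.168.57.9:20001/
      region_name = CentralRegion
      endpoint_type = public

      [nova]
      service_name = nova
      endpoint = http://192.168.57.10/compute/v2.1
      region_name = RegionTwo
      endpoint_type = public

      [glance]
      service_name = glance
      endpoint = http://192.168.57.10/image
      region_name = RegionTwo
      endpoint_type = public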

      Restart all the services of Octavia in node2.

      $ sudo systemctl restart devstack@o-*
      
    • By now, we have finished installing LBaaS.

How to play

  • 1 LBaaS members in one network and in the same region

    Here we take VxLAN as an example.

    • Create net1 in CentralRegion

      $ openstack --os-region-name CentralRegion network create net1
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | 9dcdcb56-358f-40b1-9e3f-6ed6bae6db7d |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | net1                                 |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1102                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in net1

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.10.0/24 --gateway none --network net1 subnet1
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 10.0.10.1-10.0.10.254                |
      | cidr              | 10.0.10.0/24                         |
      | created_at        | 2019-01-01T07:22:45Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | None                                 |
      | host_routes       |                                      |
      | id                | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | subnet1                              |
      | network_id        | 9dcdcb56-358f-40b1-9e3f-6ed6bae6db7d |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T07:22:45Z                 |
      +-------------------+--------------------------------------+
      

      Note

      To enable adding instances as members with VIP, the amphora adds a new routing table to route the traffic sent from the VIP to its gateway. However, in the Tricircle, the gateway obtained from central neutron is not the real gateway in local neutron. As a result, we temporarily do not set any gateway for the subnet. We will remove this limitation in the future.

    • List all available flavors in RegionOne

      $ openstack --os-region-name RegionOne flavor list
      
      +----+-----------+-------+------+-----------+-------+-----------+
      | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+-----------+-------+------+-----------+-------+-----------+
      | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42 | m1.nano   |    64 |    0 |         0 |     1 | True      |
      | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84 | m1.micro  |   128 |    0 |         0 |     1 | True      |
      | c1 | cirros256 |   256 |    0 |         0 |     1 | True      |
      | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +----+-----------+-------+------+-----------+-------+-----------+
      
    • List all available images in RegionOne

      $ openstack --os-region-name RegionOne image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 471ed2cb-8004-4973-9210-b96463b2c668 | amphora-x64-haproxy      | active |
      | 85d165f0-bc7a-43d5-850b-4a8e89e57a66 | cirros-0.3.6-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
      
    • Create two instances, i.e., backend1 and backend2, in RegionOne, which reside in subnet1.

      $ openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az1 backend1
      $ openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az1 backend2
      
      +--------------------------------------+-----------------------------------------------------------------+
      | Property                             | Value                                                           |
      +--------------------------------------+-----------------------------------------------------------------+
      | OS-DCF:diskConfig                    | MANUAL                                                          |
      | OS-EXT-AZ:availability_zone          |                                                                 |
      | OS-EXT-SRV-ATTR:host                 | -                                                               |
      | OS-EXT-SRV-ATTR:hostname             | backend1                                                        |
      | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                               |
      | OS-EXT-SRV-ATTR:instance_name        |                                                                 |
      | OS-EXT-SRV-ATTR:kernel_id            |                                                                 |
      | OS-EXT-SRV-ATTR:launch_index         | 0                                                               |
      | OS-EXT-SRV-ATTR:ramdisk_id           |                                                                 |
      | OS-EXT-SRV-ATTR:reservation_id       | r-0m1suyvm                                                      |
      | OS-EXT-SRV-ATTR:root_device_name     | -                                                               |
      | OS-EXT-SRV-ATTR:user_data            | -                                                               |
      | OS-EXT-STS:power_state               | 0                                                               |
      | OS-EXT-STS:task_state                | scheduling                                                      |
      | OS-EXT-STS:vm_state                  | building                                                        |
      | OS-SRV-USG:launched_at               | -                                                               |
      | OS-SRV-USG:terminated_at             | -                                                               |
      | accessIPv4                           |                                                                 |
      | accessIPv6                           |                                                                 |
      | adminPass                            | 7poPJnDxV3Mz                                                    |
      | config_drive                         |                                                                 |
      | created                              | 2019-01-01T07:30:26Z                                            |
      | description                          | -                                                               |
      | flavor:disk                          | 1                                                               |
      | flavor:ephemeral                     | 0                                                               |
      | flavor:extra_specs                   | {}                                                              |
      | flavor:original_name                 | m1.tiny                                                         |
      | flavor:ram                           | 512                                                             |
      | flavor:swap                          | 0                                                               |
      | flavor:vcpus                         | 1                                                               |
      | hostId                               |                                                                 |
      | host_status                          |                                                                 |
      | id                                   | d330f73f-2d78-4f59-8ea2-6fa1b878d6a5                            |
      | image                                | cirros-0.3.6-x86_64-disk (85d165f0-bc7a-43d5-850b-4a8e89e57a66) |
      | key_name                             | -                                                               |
      | locked                               | False                                                           |
      | metadata                             | {}                                                              |
      | name                                 | backend1                                                        |
      | os-extended-volumes:volumes_attached | []                                                              |
      | progress                             | 0                                                               |
      | security_groups                      | default                                                         |
      | status                               | BUILD                                                           |
      | tags                                 | []                                                              |
      | tenant_id                            | d3b83ed3f2504a8699c9528a2297fea7                                |
      | trusted_image_certificates           | -                                                               |
      | updated                              | 2019-01-01T07:30:27Z                                            |
      | user_id                              | fdf37c6259544a9294ae8463e9be063c                                |
      +--------------------------------------+-----------------------------------------------------------------+
      
      $ openstack --os-region-name CentralRegion server list
      
      +--------------------------------------+----------+--------+------------+-------------+------------------+
      | ID                                   | Name     | Status | Task State | Power State | Networks         |
      +--------------------------------------+----------+--------+------------+-------------+------------------+
      | d330f73f-2d78-4f59-8ea2-6fa1b878d6a5 | backend1 | ACTIVE | -          | Running     | net1=10.0.10.152 |
      | 72a4d0b0-88bc-41c5-9cb1-0965a5f3008f | backend2 | ACTIVE | -          | Running     | net1=10.0.10.176 |
      +--------------------------------------+----------+--------+------------+-------------+------------------+
      
    • Log in to the instance consoles with user ‘cirros’ and password ‘cubswin:)’. Then run the following commands to simulate a web server.

      Note

      If using cirros 0.4.0 and above, log in to the instance consoles with user ‘cirros’ and password ‘gocubsgo’.

      $ sudo ip netns exec dhcp-$net1_id ssh cirros@10.0.10.152
      $ sudo ip netns exec dhcp-$net1_id ssh cirros@10.0.10.176
      
      $ MYIP=$(ifconfig eth0| grep 'inet addr'| awk -F: '{print $2}'| awk '{print $1}')
      $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
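
      Before adding the instances to a pool, you can optionally check the simulated web
      servers directly from the DHCP namespace (a quick sanity check using the backend
      IPs listed above):

      $ sudo ip netns exec dhcp-$net1_id curl http://10.0.10.152
      $ sudo ip netns exec dhcp-$net1_id curl http://10.0.10.176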
      

      The Octavia services installed in node1 and node2 are two standalone deployments; here we take RegionOne as an example.

    • Create a load balancer for subnet1 in RegionOne.

      $ openstack --os-region-name RegionOne loadbalancer create --name lb1 --vip-subnet-id $subnet1_id
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:37:46                  |
      | description         |                                      |
      | flavor              |                                      |
      | id                  | bbb5480a-a6ec-4cea-a77d-4872a94aca5c |
      | listeners           |                                      |
      | name                | lb1                                  |
      | operating_status    | OFFLINE                              |
      | pools               |                                      |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider            | amphora                              |
      | provisioning_status | PENDING_CREATE                       |
      | updated_at          | None                                 |
      | vip_address         | 10.0.10.189                          |
      | vip_network_id      | 9dcdcb56-358f-40b1-9e3f-6ed6bae6db7d |
      | vip_port_id         | 759370eb-5f50-4229-be7e-0ca7aefe04db |
      | vip_qos_policy_id   | None                                 |
      | vip_subnet_id       | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      +---------------------+--------------------------------------+
      

      Create a listener for the load balancer after the status of the load balancer is ‘ACTIVE’. Please note that it may take some time for the load balancer to become ‘ACTIVE’.

      $ openstack --os-region-name RegionOne loadbalancer list
      
      +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
      | id                                   | name | project_id                       | vip_address | provisioning_status | provider |
      +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
      | bbb5480a-a6ec-4cea-a77d-4872a94aca5c | lb1  | d3b83ed3f2504a8699c9528a2297fea7 | 10.0.10.189 | ACTIVE              | amphora  |
      +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
      
      $ openstack --os-region-name RegionOne loadbalancer listener create --protocol HTTP --protocol-port 80 --name listener1 lb1
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | True                                 |
      | connection_limit          | -1                                   |
      | created_at                | 2019-01-01T07:44:21                  |
      | default_pool_id           | None                                 |
      | default_tls_container_ref | None                                 |
      | description               |                                      |
      | id                        | ec9d2e51-25ab-4c50-83cb-15f726d366ec |
      | insert_headers            | None                                 |
      | l7policies                |                                      |
      | loadbalancers             | bbb5480a-a6ec-4cea-a77d-4872a94aca5c |
      | name                      | listener1                            |
      | operating_status          | OFFLINE                              |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol                  | HTTP                                 |
      | protocol_port             | 80                                   |
      | provisioning_status       | PENDING_CREATE                       |
      | sni_container_refs        | []                                   |
      | timeout_client_data       | 50000                                |
      | timeout_member_connect    | 5000                                 |
      | timeout_member_data       | 50000                                |
      | timeout_tcp_inspect       | 0                                    |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
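
      The pool and member operations in the following steps also require the load balancer
      to be ‘ACTIVE’ again. Instead of listing the load balancers repeatedly, you can poll
      the provisioning status with a small loop like this sketch:

      $ while [ "$(openstack --os-region-name RegionOne loadbalancer show lb1 -f value -c provisioning_status)" != "ACTIVE" ]; do sleep 5; done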
      
    • Create a pool for the listener after the status of the load balancer is ‘ACTIVE’.

      $ openstack --os-region-name RegionOne loadbalancer pool create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:46:21                  |
      | description         |                                      |
      | healthmonitor_id    |                                      |
      | id                  | 7560b064-cdbe-4fa2-ae50-f66ad67fb575 |
      | lb_algorithm        | ROUND_ROBIN                          |
      | listeners           | ec9d2e51-25ab-4c50-83cb-15f726d366ec |
      | loadbalancers       | bbb5480a-a6ec-4cea-a77d-4872a94aca5c |
      | members             |                                      |
      | name                | pool1                                |
      | operating_status    | OFFLINE                              |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol            | HTTP                                 |
      | provisioning_status | PENDING_CREATE                       |
      | session_persistence | None                                 |
      | updated_at          | None                                 |
      +---------------------+--------------------------------------+
      
    • Add two instances to the pool as members, after the status of the load balancer is ‘ACTIVE’.

      $  openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet1_id --address $backend1_ip  --protocol-port 80 pool1
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | address             | 10.0.10.152                          |
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:49:04                  |
      | id                  | 4e6ce567-0710-4a29-a98f-ab766e4963ab |
      | name                |                                      |
      | operating_status    | NO_MONITOR                           |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol_port       | 80                                   |
      | provisioning_status | PENDING_CREATE                       |
      | subnet_id           | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      | updated_at          | None                                 |
      | weight              | 1                                    |
      | monitor_port        | None                                 |
      | monitor_address     | None                                 |
      | backup              | False                                |
      +---------------------+--------------------------------------+
      
      $ openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet1_id --address $backend2_ip  --protocol-port 80 pool1
      
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | address             | 10.0.10.176                          |
      | admin_state_up      | True                                 |
      | created_at          | 2019-01-01T07:50:06                  |
      | id                  | 1e8ab609-a7e9-44af-b37f-69b494b40d01 |
      | name                |                                      |
      | operating_status    | NO_MONITOR                           |
      | project_id          | d3b83ed3f2504a8699c9528a2297fea7     |
      | protocol_port       | 80                                   |
      | provisioning_status | PENDING_CREATE                       |
      | subnet_id           | 39ccf811-b188-4ccf-a643-dd7669a413c2 |
      | updated_at          | None                                 |
      | weight              | 1                                    |
      | monitor_port        | None                                 |
      | monitor_address     | None                                 |
      | backup              | False                                |
      +---------------------+--------------------------------------+
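
      A simple way to confirm that the load balancer has finished provisioning before each of these calls is to watch its provisioning_status, for example:

      $ openstack --os-region-name RegionOne loadbalancer list

      and wait until the provisioning_status column shows ACTIVE.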
      
    • Verify load balancing. Request the VIP twice.

      $ sudo ip netns exec dhcp-$net1_id curl -v $VIP
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.152
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.176
      * Closing connection 0
      
  • 2 LBaaS members in one network but in different regions

    • List all available flavors in RegionTwo

      $ openstack --os-region-name RegionTwo flavor list
      
      +----+-----------+-------+------+-----------+-------+-----------+
      | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+-----------+-------+------+-----------+-------+-----------+
      | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42 | m1.nano   |    64 |    0 |         0 |     1 | True      |
      | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84 | m1.micro  |   128 |    0 |         0 |     1 | True      |
      | c1 | cirros256 |   256 |    0 |         0 |     1 | True      |
      | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +----+-----------+-------+------+-----------+-------+-----------+
      
    • List all available images in RegionTwo

      $ openstack --os-region-name RegionTwo image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 471ed2cb-8004-4973-9210-b96463b2c668 | amphora-x64-haproxy      | active |
      | 85d165f0-bc7a-43d5-850b-4a8e89e57a66 | cirros-0.3.6-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
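
      The $image_id and $net1_id used below can be captured from the CLI first; a minimal sketch, assuming the network created earlier is named net1:

      $ image_id=$(openstack --os-region-name RegionTwo image show cirros-0.3.6-x86_64-disk -f value -c id)
      $ net1_id=$(openstack --os-region-name CentralRegion network show net1 -f value -c id)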
      
    • Create an instance in RegionTwo that resides in subnet1

      $ openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az2 backend3
      
      +-------------------------------------+-----------------------------------------------------------------+
      | Field                               | Value                                                           |
      +-------------------------------------+-----------------------------------------------------------------+
      | OS-DCF:diskConfig                   | MANUAL                                                          |
      | OS-EXT-AZ:availability_zone         | az2                                                             |
      | OS-EXT-SRV-ATTR:host                | None                                                            |
      | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                            |
      | OS-EXT-SRV-ATTR:instance_name       |                                                                 |
      | OS-EXT-STS:power_state              | NOSTATE                                                         |
      | OS-EXT-STS:task_state               | scheduling                                                      |
      | OS-EXT-STS:vm_state                 | building                                                        |
      | OS-SRV-USG:launched_at              | None                                                            |
      | OS-SRV-USG:terminated_at            | None                                                            |
      | accessIPv4                          |                                                                 |
      | accessIPv6                          |                                                                 |
      | addresses                           |                                                                 |
      | adminPass                           | rpV9MLzPGSvB                                                    |
      | config_drive                        |                                                                 |
      | created                             | 2019-01-01T07:56:41Z                                            |
      | flavor                              | m1.tiny (1)                                                     |
      | hostId                              |                                                                 |
      | id                                  | b27539fb-4c98-4f0c-b3f8-bc6744659f67                            |
      | image                               | cirros-0.3.6-x86_64-disk (85d165f0-bc7a-43d5-850b-4a8e89e57a66) |
      | key_name                            | None                                                            |
      | name                                | backend3                                                        |
      | progress                            | 0                                                               |
      | project_id                          | d3b83ed3f2504a8699c9528a2297fea7                                |
      | properties                          |                                                                 |
      | security_groups                     | name='default'                                                  |
      | status                              | BUILD                                                           |
      | updated                             | 2019-01-01T07:56:42Z                                            |
      | user_id                             | fdf37c6259544a9294ae8463e9be063c                                |
      | volumes_attached                    |                                                                 |
      +-------------------------------------+-----------------------------------------------------------------+
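
      The $backend3_ip used below can be looked up once the instance is ACTIVE, for example from RegionTwo where backend3 is scheduled:

      $ openstack --os-region-name RegionTwo server show backend3 -c addresses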
      
    • Log in to the instance with user ‘cirros’ and password ‘cubswin:)’, then run the following commands to simulate a web server.

      $ sudo ip netns exec dhcp-$net1_id ssh cirros@$backend3_ip
      
      $ MYIP=$(ifconfig eth0| grep 'inet addr'| awk -F: '{print $2}'| awk '{print $1}')
      $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
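
      Before adding backend3 to the pool, you can optionally check from the host (outside the ssh session) that the simulated web server answers, using the same dhcp namespace as before:

      $ sudo ip netns exec dhcp-$net1_id curl $backend3_ip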
      
    • Once the load balancer is ‘ACTIVE’ again, add backend3 to the pool as a member.

      $ openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet1_id --address $backend3_ip --protocol-port 80 pool1
      
    • Verify load balancing. Request the VIP three times.

      Note

      Note that if the subnet has already been created in the region, as in the cases before this step, either its unique name or its ID can be used as the hint. If the subnet has not been created in that region yet, as is the case for backend3, the subnet ID must be used instead of the name: because the subnet does not exist in RegionOne, the local Neutron has to query the central Neutron for it by ID.
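
      One way to obtain that ID from the central Neutron and store it in the $subnet1_id variable used above (assuming the subnet created earlier is named subnet1):

      $ subnet1_id=$(openstack --os-region-name CentralRegion subnet show subnet1 -f value -c id)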

      $ sudo ip netns exec dhcp-$net1_id curl -v $VIP
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.152
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.176
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.186
      * Closing connection 0
      
  • 3 LBaaS members in different networks and different regions

    • Create net2 in CentralRegion

      $ openstack --os-region-name CentralRegion network create net2
      
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        | None                                 |
      | created_at                | None                                 |
      | description               | None                                 |
      | dns_domain                | None                                 |
      | id                        | f0ea9608-2d6e-4272-a596-2dc3a725eddc |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | None                                 |
      | is_vlan_transparent       | None                                 |
      | location                  | None                                 |
      | mtu                       | None                                 |
      | name                      | net2                                 |
      | port_security_enabled     | False                                |
      | project_id                | d3b83ed3f2504a8699c9528a2297fea7     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 1088                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | None                                 |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | None                                 |
      +---------------------------+--------------------------------------+
      
    • Create a subnet in net2

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.20.0/24 --gateway none --network net2 subnet2
      
      +-------------------+--------------------------------------+
      | Field             | Value                                |
      +-------------------+--------------------------------------+
      | allocation_pools  | 10.0.20.1-10.0.20.254                |
      | cidr              | 10.0.20.0/24                         |
      | created_at        | 2019-01-01T07:59:53Z                 |
      | description       |                                      |
      | dns_nameservers   |                                      |
      | enable_dhcp       | True                                 |
      | gateway_ip        | None                                 |
      | host_routes       |                                      |
      | id                | 4c05a73d-fa1c-46a9-982f-6683b0d1cb2a |
      | ip_version        | 4                                    |
      | ipv6_address_mode | None                                 |
      | ipv6_ra_mode      | None                                 |
      | location          | None                                 |
      | name              | subnet2                              |
      | network_id        | f0ea9608-2d6e-4272-a596-2dc3a725eddc |
      | project_id        | d3b83ed3f2504a8699c9528a2297fea7     |
      | revision_number   | 0                                    |
      | segment_id        | None                                 |
      | service_types     | None                                 |
      | subnetpool_id     | None                                 |
      | tags              |                                      |
      | updated_at        | 2019-01-01T07:59:53Z                 |
      +-------------------+--------------------------------------+
      
    • List all available flavors in RegionTwo

      $ openstack --os-region-name RegionTwo flavor list
      
      +----+-----------+-------+------+-----------+-------+-----------+
      | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+-----------+-------+------+-----------+-------+-----------+
      | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
      | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
      | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
      | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
      | 42 | m1.nano   |    64 |    0 |         0 |     1 | True      |
      | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
      | 84 | m1.micro  |   128 |    0 |         0 |     1 | True      |
      | c1 | cirros256 |   256 |    0 |         0 |     1 | True      |
      | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
      | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
      | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
      | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
      +----+-----------+-------+------+-----------+-------+-----------+
      
    • List all available images in RegionTwo

      $ openstack --os-region-name RegionTwo image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 471ed2cb-8004-4973-9210-b96463b2c668 | amphora-x64-haproxy      | active |
      | 85d165f0-bc7a-43d5-850b-4a8e89e57a66 | cirros-0.3.6-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
      
    • Create an instance in RegionTwo that resides in subnet2

      $ openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net2_id --availability-zone az2 backend4
      
      +-------------------------------------+-----------------------------------------------------------------+
      | Field                               | Value                                                           |
      +-------------------------------------+-----------------------------------------------------------------+
      | OS-DCF:diskConfig                   | MANUAL                                                          |
      | OS-EXT-AZ:availability_zone         | az2                                                             |
      | OS-EXT-SRV-ATTR:host                | None                                                            |
      | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                            |
      | OS-EXT-SRV-ATTR:instance_name       |                                                                 |
      | OS-EXT-STS:power_state              | NOSTATE                                                         |
      | OS-EXT-STS:task_state               | scheduling                                                      |
      | OS-EXT-STS:vm_state                 | building                                                        |
      | OS-SRV-USG:launched_at              | None                                                            |
      | OS-SRV-USG:terminated_at            | None                                                            |
      | accessIPv4                          |                                                                 |
      | accessIPv6                          |                                                                 |
      | addresses                           |                                                                 |
      | adminPass                           | jHY5xdqgxezb                                                    |
      | config_drive                        |                                                                 |
      | created                             | 2019-01-01T08:02:50Z                                            |
      | flavor                              | m1.tiny (1)                                                     |
      | hostId                              |                                                                 |
      | id                                  | 43bcdc80-6492-4a88-90dd-a979c73219a1                            |
      | image                               | cirros-0.3.6-x86_64-disk (85d165f0-bc7a-43d5-850b-4a8e89e57a66) |
      | key_name                            | None                                                            |
      | name                                | backend4                                                        |
      | progress                            | 0                                                               |
      | project_id                          | d3b83ed3f2504a8699c9528a2297fea7                                |
      | properties                          |                                                                 |
      | security_groups                     | name='default'                                                  |
      | status                              | BUILD                                                           |
      | updated                             | 2019-01-01T08:02:51Z                                            |
      | user_id                             | fdf37c6259544a9294ae8463e9be063c                                |
      | volumes_attached                    |                                                                 |
      +-------------------------------------+-----------------------------------------------------------------+
      
    • Log in to the instance with user ‘cirros’ and password ‘cubswin:)’, then run the following commands to simulate a web server.

      $ sudo ip netns exec dhcp-$net2_id ssh cirros@$backend4_ip
      
      $ MYIP=$(ifconfig eth0| grep 'inet addr'| awk -F: '{print $2}'| awk '{print $1}')
      $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
      
    • Once the load balancer is ‘ACTIVE’ again, add backend4 to the pool as a member.

      $ openstack --os-region-name RegionOne loadbalancer member create --subnet $subnet2_id --address $backend4_ip --protocol-port 80 pool1
      
    • Verify load balancing. Request the VIP four times.

      $ sudo ip netns exec dhcp-$net1_id curl -v $VIP
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.152
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.176
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.10.186
      * Closing connection 0
      
      * Rebuilt URL to: 10.0.10.189/
      *   Trying 10.0.10.189...
      * Connected to 10.0.10.189 (10.0.10.189) port 80 (#0)
      > GET / HTTP/1.1
      > Host: 10.0.10.189
      > User-Agent: curl/7.47.0
      > Accept: */*
      >
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      <
      Welcome to 10.0.20.64
      * Closing connection 0
      

Installation guide for Tricircle working with containers

Introduction

In Multi-pod Installation with DevStack, we discussed how to deploy the Tricircle in a multi-region scenario with DevStack. However, the previous installation guides only covered managing virtual machines with the Tricircle and Nova in cross-region OpenStack cloud environments; multi-region container management has not been covered so far. OpenStack provides container management through the Zun component, and container networking through the Kuryr and kuryr-libnetwork components. Following the Tricircle central Neutron and local Neutron design, the Tricircle working with Zun and Kuryr provides a cross-region container management solution. This guide describes how the Tricircle works with container management and how to deploy a multi-region container environment.

Prerequisite

In this guide, specific versions of the Zun and Kuryr source code are required: both projects must be at the Train release or later. If not, we need to manually change the source code of both projects. The modification examples are as follows:

  • 1 Zun Source Code Modification:

    For the Zun project, we need to modify the neutron function in the /zun/zun/common/clients.py file. (The ‘+’ sign marks the added lines)

    def neutron(self):
        if self._neutron:
            return self._neutron
    
        session = self.keystone().session
        session.verify = self._get_client_option('neutron', 'ca_file') or True
        if self._get_client_option('neutron', 'insecure'):
            session.verify = False
        endpoint_type = self._get_client_option('neutron', 'endpoint_type')
    +   region_name = self._get_client_option('neutron', 'region_name')
        self._neutron = neutronclient.Client(session=session,
                                             endpoint_type=endpoint_type,
    +                                        region_name=region_name)
    
        return self._neutron
    
  • 2 Kuryr Source Code Modification:

    For the Kuryr project, we need to modify the get_neutron_client function in the /kuryr/kuryr/lib/utils.py file. (The ‘+’ sign marks the added lines)

    def get_neutron_client(*args, **kwargs):
        conf_group = kuryr_config.neutron_group.name
        auth_plugin = get_auth_plugin(conf_group)
        session = get_keystone_session(conf_group, auth_plugin)
        endpoint_type = getattr(getattr(cfg.CONF, conf_group), 'endpoint_type')
    +   region_name = getattr(getattr(cfg.CONF, conf_group), 'region_name')
    
        return client.Client(session=session,
                             auth=auth_plugin,
                             endpoint_type=endpoint_type,
    +                        region_name=region_name)
    

Setup

In this guide we take a two-node deployment as an example: node1 runs as RegionOne and CentralRegion, and node2 runs as RegionTwo.

  • 1 For node1 in RegionOne and node2 in RegionTwo, clone the code from the Zun repository and the Kuryr repository to /opt/stack/. If the code does not meet the requirements described in the Prerequisite section, modify it with reference to the modification examples in that section.

  • 2 Follow the Multi-pod Installation with DevStack document to prepare your local.conf for node1 in RegionOne and node2 in RegionTwo, and add the following lines before installation. Then start DevStack on node1 and node2.

    enable_plugin zun https://git.openstack.org/openstack/zun
    enable_plugin zun-tempest-plugin https://git.openstack.org/openstack/zun-tempest-plugin
    enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
    enable_plugin kuryr-libnetwork https://git.openstack.org/openstack/kuryr-libnetwork
    
    KURYR_CAPABILITY_SCOPE=local
    KURYR_PROCESS_EXTERNAL_CONNECTIVITY=False
    
  • 3 After DevStack has started successfully, we need to make some configuration changes to the Zun and Kuryr components on node1 and node2.

    • For Zun in node1, modify the /etc/zun/zun.conf

      +------------------+-------------+-----------+
      | Group            | Option      | Value     |
      +------------------+-------------+-----------+
      | [neutron_client] | region_name | RegionOne |
      +------------------+-------------+-----------+
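
      In other words, the [neutron_client] section of /etc/zun/zun.conf on node1 should end up containing something like the following (the kuryr.conf and node2 changes below follow the same pattern, with the group and value from their tables):

      [neutron_client]
      region_name = RegionOne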

    • Restart all the services of Zun in node1.

      $ sudo systemctl restart devstack@zun*
      
    • For Kuryr in node1, modify the /etc/kuryr/kuryr.conf

      +-----------+-------------+-----------+
      | Group     | Option      | Value     |
      +-----------+-------------+-----------+
      | [neutron] | region_name | RegionOne |
      +-----------+-------------+-----------+

    • Restart all the services of Kuryr in node1.

      $ sudo systemctl restart devstack@kur*
      
    • For Zun in node2, modify the /etc/zun/zun.conf

      +------------------+-------------+-----------+
      | Group            | Option      | Value     |
      +------------------+-------------+-----------+
      | [neutron_client] | region_name | RegionTwo |
      +------------------+-------------+-----------+

    • Restart all the services of Zun in node2.

      $ sudo systemctl restart devstack@zun*
      
    • For Kuryr in node2, modify the /etc/kuryr/kuryr.conf

      +-----------+-------------+-----------+
      | Group     | Option      | Value     |
      +-----------+-------------+-----------+
      | [neutron] | region_name | RegionTwo |
      +-----------+-------------+-----------+

    • Restart all the services of Kuryr in node2.

      $ sudo systemctl restart devstack@kur*
      
  • 4 Then, we must create environment variables for the admin user and use the admin project.

    $ source openrc admin admin
    $ unset OS_REGION_NAME
    
  • 5 Finally, use tricircle client to create pods for multi-region.

    $ openstack --os-region-name CentralRegion multiregion networking pod create --region-name CentralRegion
    $ openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionOne --availability-zone az1
    $ openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionTwo --availability-zone az2
    

How to play

  • 1 Create container glance image in RegionOne and RegionTwo.

    • Get the docker image from Docker Hub. Run these commands on node1 and node2.

      $ docker pull cirros
      $ docker save cirros -o /opt/stack/container_cirros
      
    • Use the glance client to create the container images.

      $ glance --os-region-name=RegionOne image-create --file /opt/stack/container_cirros --container-format=docker --disk-format=raw --name container_cirros --progress
      $ glance --os-region-name=RegionTwo image-create --file /opt/stack/container_cirros --container-format=docker --disk-format=raw --name container_cirros --progress
      
      $ openstack --os-region-name RegionOne image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 11186baf-4381-4e52-956c-22878b0642df | cirros-0.4.0-x86_64-disk | active |
      | 87864205-4352-4a2c-b9b1-ca95df52c93c | container_cirros         | active |
      +--------------------------------------+--------------------------+--------+
      
      $ openstack --os-region-name RegionTwo image list
      
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | cd062c19-bb3a-4f60-b5ef-9688eb67b3da | container_cirros         | active |
      | cf4a2dc7-6d6e-4b7e-a772-44247246e1ff | cirros-0.4.0-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+
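
      The container image IDs are needed when creating containers later; they can be captured into the variables referenced in the appcontainer run commands below, for example:

      $ RegionOne_container_cirros_id=$(openstack --os-region-name RegionOne image show container_cirros -f value -c id)
      $ RegionTwo_container_cirros_id=$(openstack --os-region-name RegionTwo image show container_cirros -f value -c id)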
      
  • 2 Create container network in CentralRegion.

    • Create a net in CentralRegion.

      $ openstack --os-region-name CentralRegion network create container-net
      
      +---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | Field                     | Value                                                                                                                                                                |
      +---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | admin_state_up            | UP                                                                                                                                                                   |
      | availability_zone_hints   |                                                                                                                                                                      |
      | availability_zones        | None                                                                                                                                                                 |
      | created_at                | None                                                                                                                                                                 |
      | description               | None                                                                                                                                                                 |
      | dns_domain                | None                                                                                                                                                                 |
      | id                        | 5e73dda5-902b-4322-b5b6-4121437fde26                                                                                                                                 |
      | ipv4_address_scope        | None                                                                                                                                                                 |
      | ipv6_address_scope        | None                                                                                                                                                                 |
      | is_default                | None                                                                                                                                                                 |
      | is_vlan_transparent       | None                                                                                                                                                                 |
      | location                  | cloud='', project.domain_id='default', project.domain_name=, project.id='2f314a39de10467bb62745bd96c5fe4d', project.name='admin', region_name='CentralRegion', zone= |
      | mtu                       | None                                                                                                                                                                 |
      | name                      | container-net                                                                                                                                                        |
      | port_security_enabled     | False                                                                                                                                                                |
      | project_id                | 2f314a39de10467bb62745bd96c5fe4d                                                                                                                                     |
      | provider:network_type     | vxlan                                                                                                                                                                |
      | provider:physical_network | None                                                                                                                                                                 |
      | provider:segmentation_id  | 1070                                                                                                                                                                 |
      | qos_policy_id             | None                                                                                                                                                                 |
      | revision_number           | None                                                                                                                                                                 |
      | router:external           | Internal                                                                                                                                                             |
      | segments                  | None                                                                                                                                                                 |
      | shared                    | False                                                                                                                                                                |
      | status                    | ACTIVE                                                                                                                                                               |
      | subnets                   |                                                                                                                                                                      |
      | tags                      |                                                                                                                                                                      |
      | updated_at                | None                                                                                                                                                                 |
      +---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      
    • Create a subnet in container-net

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.60.0/24 --network container-net container-subnet
      
      +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | Field             | Value                                                                                                                                                                |
      +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | allocation_pools  | 10.0.60.2-10.0.60.254                                                                                                                                                |
      | cidr              | 10.0.60.0/24                                                                                                                                                         |
      | created_at        | 2019-12-10T07:13:21Z                                                                                                                                                 |
      | description       |                                                                                                                                                                      |
      | dns_nameservers   |                                                                                                                                                                      |
      | enable_dhcp       | True                                                                                                                                                                 |
      | gateway_ip        | 10.0.60.1                                                                                                                                                            |
      | host_routes       |                                                                                                                                                                      |
      | id                | b7a7adbd-afd3-4449-9cbc-fbce16c7a2e7                                                                                                                                 |
      | ip_version        | 4                                                                                                                                                                    |
      | ipv6_address_mode | None                                                                                                                                                                 |
      | ipv6_ra_mode      | None                                                                                                                                                                 |
      | location          | cloud='', project.domain_id='default', project.domain_name=, project.id='2f314a39de10467bb62745bd96c5fe4d', project.name='admin', region_name='CentralRegion', zone= |
      | name              | container-subnet                                                                                                                                                     |
      | network_id        | 5e73dda5-902b-4322-b5b6-4121437fde26                                                                                                                                 |
      | prefix_length     | None                                                                                                                                                                 |
      | project_id        | 2f314a39de10467bb62745bd96c5fe4d                                                                                                                                     |
      | revision_number   | 0                                                                                                                                                                    |
      | segment_id        | None                                                                                                                                                                 |
      | service_types     | None                                                                                                                                                                 |
      | subnetpool_id     | None                                                                                                                                                                 |
      | tags              |                                                                                                                                                                      |
      | updated_at        | 2019-12-10T07:13:21Z                                                                                                                                                 |
      +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
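
      The $container_net_id used in the next step can be captured in the same way:

      $ container_net_id=$(openstack --os-region-name CentralRegion network show container-net -f value -c id)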
      
  • 3 Create container in RegionOne and RegionTwo.

    Note

    We can give the container a specific command so that it keeps running, e.g. “sudo nc -l -p 5000”.

    $ openstack --os-region-name RegionOne appcontainer run --name container01 --net network=$container_net_id --image-driver glance $RegionOne_container_cirros_id sudo nc -l -p 5000
    
    +-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Field             | Value                                                                                                                                                                                                           |
    +-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | tty               | False                                                                                                                                                                                                           |
    | addresses         | None                                                                                                                                                                                                            |
    | links             | [{u'href': u'http://192.168.1.81/v1/containers/ca67055c-635d-4603-9b0b-19c16eed7ef9', u'rel': u'self'}, {u'href': u'http://192.168.1.81/containers/ca67055c-635d-4603-9b0b-19c16eed7ef9', u'rel': u'bookmark'}] |
    | image             | 87864205-4352-4a2c-b9b1-ca95df52c93c                                                                                                                                                                            |
    | labels            | {}                                                                                                                                                                                                              |
    | disk              | 0                                                                                                                                                                                                               |
    | security_groups   | None                                                                                                                                                                                                            |
    | image_pull_policy | None                                                                                                                                                                                                            |
    | user_id           | 57df611fd8c7415dad6d2530bf962ecd                                                                                                                                                                                |
    | uuid              | ca67055c-635d-4603-9b0b-19c16eed7ef9                                                                                                                                                                            |
    | hostname          | None                                                                                                                                                                                                            |
    | auto_heal         | False                                                                                                                                                                                                           |
    | environment       | {}                                                                                                                                                                                                              |
    | memory            | 0                                                                                                                                                                                                               |
    | project_id        | 2f314a39de10467bb62745bd96c5fe4d                                                                                                                                                                                |
    | privileged        | False                                                                                                                                                                                                           |
    | status            | Creating                                                                                                                                                                                                        |
    | workdir           | None                                                                                                                                                                                                            |
    | healthcheck       | None                                                                                                                                                                                                            |
    | auto_remove       | False                                                                                                                                                                                                           |
    | status_detail     | None                                                                                                                                                                                                            |
    | cpu_policy        | shared                                                                                                                                                                                                          |
    | host              | None                                                                                                                                                                                                            |
    | image_driver      | glance                                                                                                                                                                                                          |
    | task_state        | None                                                                                                                                                                                                            |
    | status_reason     | None                                                                                                                                                                                                            |
    | name              | container01                                                                                                                                                                                                     |
    | restart_policy    | None                                                                                                                                                                                                            |
    | ports             | None                                                                                                                                                                                                            |
    | command           | [u'sudo', u'nc', u'-l', u'-p', u'5000']                                                                                                                                                                         |
    | runtime           | None                                                                                                                                                                                                            |
    | registry_id       | None                                                                                                                                                                                                            |
    | cpu               | 0.0                                                                                                                                                                                                             |
    | interactive       | False                                                                                                                                                                                                           |
    +-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    
    $ openstack --os-region-name RegionOne appcontainer list
    
    +--------------------------------------+-------------+--------------------------------------+---------+------------+------------+-------+
    | uuid                                 | name        | image                                | status  | task_state | addresses  | ports |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+------------+-------+
    | ca67055c-635d-4603-9b0b-19c16eed7ef9 | container01 | 87864205-4352-4a2c-b9b1-ca95df52c93c | Running | None       | 10.0.60.62 | []    |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+------------+-------+
    
    
    $ openstack --os-region-name RegionTwo appcontainer run --name container02 --net network=$container_net_id --image-driver glance $RegionTwo_container_cirros_id sudo nc -l -p 5000
    
    +-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Field             | Value                                                                                                                                                                                                           |
    +-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | tty               | False                                                                                                                                                                                                           |
    | addresses         | None                                                                                                                                                                                                            |
    | links             | [{u'href': u'http://192.168.1.82/v1/containers/c359e48c-7637-4d9f-8219-95a4577683c3', u'rel': u'self'}, {u'href': u'http://192.168.1.82/containers/c359e48c-7637-4d9f-8219-95a4577683c3', u'rel': u'bookmark'}] |
    | image             | cd062c19-bb3a-4f60-b5ef-9688eb67b3da                                                                                                                                                                            |
    | labels            | {}                                                                                                                                                                                                              |
    | disk              | 0                                                                                                                                                                                                               |
    | security_groups   | None                                                                                                                                                                                                            |
    | image_pull_policy | None                                                                                                                                                                                                            |
    | user_id           | 57df611fd8c7415dad6d2530bf962ecd                                                                                                                                                                                |
    | uuid              | c359e48c-7637-4d9f-8219-95a4577683c3                                                                                                                                                                            |
    | hostname          | None                                                                                                                                                                                                            |
    | auto_heal         | False                                                                                                                                                                                                           |
    | environment       | {}                                                                                                                                                                                                              |
    | memory            | 0                                                                                                                                                                                                               |
    | project_id        | 2f314a39de10467bb62745bd96c5fe4d                                                                                                                                                                                |
    | privileged        | False                                                                                                                                                                                                           |
    | status            | Creating                                                                                                                                                                                                        |
    | workdir           | None                                                                                                                                                                                                            |
    | healthcheck       | None                                                                                                                                                                                                            |
    | auto_remove       | False                                                                                                                                                                                                           |
    | status_detail     | None                                                                                                                                                                                                            |
    | cpu_policy        | shared                                                                                                                                                                                                          |
    | host              | None                                                                                                                                                                                                            |
    | image_driver      | glance                                                                                                                                                                                                          |
    | task_state        | None                                                                                                                                                                                                            |
    | status_reason     | None                                                                                                                                                                                                            |
    | name              | container02                                                                                                                                                                                                     |
    | restart_policy    | None                                                                                                                                                                                                            |
    | ports             | None                                                                                                                                                                                                            |
    | command           | [u'sudo', u'nc', u'-l', u'-p', u'5000']                                                                                                                                                                         |
    | runtime           | None                                                                                                                                                                                                            |
    | registry_id       | None                                                                                                                                                                                                            |
    | cpu               | 0.0                                                                                                                                                                                                             |
    | interactive       | False                                                                                                                                                                                                           |
    +-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    
    $ openstack --os-region-name RegionTwo appcontainer list
    
    +--------------------------------------+-------------+--------------------------------------+---------+------------+-------------+-------+
    | uuid                                 | name        | image                                | status  | task_state | addresses   | ports |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+-------------+-------+
    | c359e48c-7637-4d9f-8219-95a4577683c3 | container02 | cd062c19-bb3a-4f60-b5ef-9688eb67b3da | Running | None       | 10.0.60.134 | []    |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+-------------+-------+
    
  • 4 Execute an interactive shell in the containers in RegionOne and RegionTwo.

    $ openstack --os-region-name RegionOne appcontainer exec --interactive container01 /bin/sh
    $ openstack --os-region-name RegionTwo appcontainer exec --interactive container02 /bin/sh
    
  • 5 At this point we have successfully created a multi-region container scenario, so we can experiment with cross-region containers, e.g. 1) ping the RegionTwo container from the RegionOne container (see the sketch below), or 2) set up cross-region container load balancing.
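
    As a quick connectivity check for case 1), you can ping the RegionTwo
    container from the shell opened in container01 in the previous step. This
    is a minimal sketch: it assumes the RegionTwo container still has the
    address 10.0.60.134 shown in the list output above, so substitute the
    address reported by "openstack --os-region-name RegionTwo appcontainer
    list" in your own environment.

    # run inside the /bin/sh session of container01 (RegionOne); the prompt
    # will vary with the container image
    ping -c 4 10.0.60.134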