Zun role for OpenStack-Ansible

This role will install the following systemd services:
  • zun-api
  • zun-compute
  • zun-wsproxy
  • kuryr-libnetwork

To clone or view the source code for this repository, visit the role repository for os_zun.

Default variables

# Enable/Disable barbican configurations
zun_barbican_enabled: False
# Enable/Disable designate configurations
zun_designate_enabled: False
# Notification topics for designate.
zun_notifications_designate: notifications_designate
# Enable/Disable ceilometer configurations
zun_ceilometer_enabled: False
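# For example, to have this role lay down the ceilometer notification
# configuration, set the following in user_variables.yml:
# zun_ceilometer_enabled: True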

## Verbosity Options
debug: False

# Set the host which will execute the shade modules
# for the service setup. The host must already have
# clouds.yaml properly configured.
zun_service_setup_host: "{{ openstack_service_setup_host | default('localhost') }}"
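# A minimal clouds.yaml on that host might look like the following sketch
# (the auth URL and credentials are illustrative placeholders):
#
# clouds:
#   default:
#     auth:
#       auth_url: http://172.29.236.100:5000/v3
#       username: admin
#       password: secrete
#       project_name: admin
#       user_domain_name: Default
#       project_domain_name: Default
#     region_name: RegionOne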

# Set the package install state for distribution and pip packages
# Options are 'present' and 'latest'
zun_package_state: "latest"
zun_pip_package_state: "latest"

zun_git_repo: https://git.openstack.org/openstack/zun
zun_git_install_branch: 'stable/rocky'

zun_kuryr_git_repo: https://git.openstack.org/openstack/kuryr-libnetwork
zun_kuryr_git_install_branch: 'stable/rocky'

zun_developer_mode: false
zun_developer_constraints:
  - "git+{{ zun_git_repo }}@{{ zun_git_install_branch }}#egg=zun"
  - "git+{{ zun_kuryr_git_repo }}@{{ zun_kuryr_git_install_branch }}#egg=kuryr-libnetwork"

# Name of the virtual env to deploy into
zun_venv_tag: untagged
zun_bin: "/openstack/venvs/zun-{{ zun_venv_tag }}/bin"

# venv_download, even when true, will use the fallback method of building the
# venv from scratch if the venv download fails.
zun_venv_download: "{{ not zun_developer_mode | bool }}"
zun_venv_download_url: http://127.0.0.1/venvs/untagged/ubuntu/zun.tgz

zun_fatal_deprecations: False

## Zun user information
zun_system_user_name: zun
zun_system_group_name: zun
zun_system_shell: /bin/false
zun_system_comment: zun system user
zun_system_home_folder: "/var/lib/{{ zun_system_user_name }}"
zun_log_dir: "/var/log/zun"

zun_lock_path: "/var/lock/zun"

## Kuryr user information
zun_kuryr_system_user_name: kuryr
zun_kuryr_system_group_name: kuryr
zun_kuryr_system_shell: /bin/false
zun_kuryr_system_comment: kuryr system user
zun_kuryr_system_home_folder: "/var/lib/{{ zun_kuryr_system_user_name }}"
zun_kuryr_log_dir: "/var/log/kuryr"

zun_kuryr_lock_path: "/var/lock/kuryr"

# Set a list of users that are permitted to execute the docker binary.
zun_docker_users:
  - "{{ zun_system_user_name }}"
  - "{{ zun_kuryr_system_user_name }}"

# Set the docker API version. The default is false, which results in no
# version option being set in the config for API servers. On compute hosts
# the docker API version is determined from the client version information.
zun_docker_api_version: false
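# To pin the API version explicitly instead, set it to a version string
# supported by your docker engine (the value below is illustrative):
# zun_docker_api_version: "1.26"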

## Manually specified zun UID/GID
# Deployers can specify a UID for the zun user as well as the GID for the
# zun group if needed. This is commonly used in environments where shared
# storage is used, such as NFS or GlusterFS, and zun UID/GID values must be
# in sync between multiple servers.
#
# WARNING: Changing these values on an existing deployment can lead to
#          failures, errors, and instability.
#
# zun_system_user_uid = <UID>
# zun_system_group_gid = <GID>
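# For example, to pin both values (the IDs below are illustrative):
# zun_system_user_uid = 5000
# zun_system_group_gid = 5000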

## Database info
zun_db_setup_host: "{{ ('galera_all' in groups) | ternary(groups['galera_all'][0], 'localhost') }}"
zun_galera_address: "{{ galera_address | default('127.0.0.1') }}"
zun_galera_user: zun
zun_galera_database: zun
zun_db_max_overflow: 10
zun_db_max_pool_size: 120
zun_db_pool_timeout: 30
# Toggle whether zun connects via an encrypted connection
zun_galera_use_ssl: "{{ galera_use_ssl | default(False) }}"
# The path where the database server CA certificate is stored
zun_galera_ssl_ca_cert: "{{ galera_ssl_ca_cert | default('/etc/ssl/certs/galera-ca.pem') }}"

## RabbitMQ info

## Configuration for RPC communications
zun_rpc_thread_pool_size: 64
zun_rpc_conn_pool_size: 30
zun_rpc_response_timeout: 60

## Oslo Messaging info

# RPC
zun_oslomsg_rpc_host_group: "{{ oslomsg_rpc_host_group | default('rabbitmq_all') }}"
zun_oslomsg_rpc_setup_host: "{{ (zun_oslomsg_rpc_host_group in groups) | ternary(groups[zun_oslomsg_rpc_host_group][0], 'localhost') }}"
zun_oslomsg_rpc_transport: "{{ oslomsg_rpc_transport | default('rabbit') }}"
zun_oslomsg_rpc_servers: "{{ oslomsg_rpc_servers | default('127.0.0.1') }}"
zun_oslomsg_rpc_port: "{{ oslomsg_rpc_port | default('5672') }}"
zun_oslomsg_rpc_use_ssl: "{{ oslomsg_rpc_use_ssl | default(False) }}"
zun_oslomsg_rpc_userid: zun
zun_oslomsg_rpc_vhost: /zun

# Notify
zun_oslomsg_notify_host_group: "{{ oslomsg_notify_host_group | default('rabbitmq_all') }}"
zun_oslomsg_notify_setup_host: "{{ (zun_oslomsg_notify_host_group in groups) | ternary(groups[zun_oslomsg_notify_host_group][0], 'localhost') }}"
zun_oslomsg_notify_transport: "{{ oslomsg_notify_transport | default('rabbit') }}"
zun_oslomsg_notify_servers: "{{ oslomsg_notify_servers | default('127.0.0.1') }}"
zun_oslomsg_notify_port: "{{ oslomsg_notify_port | default('5672') }}"
zun_oslomsg_notify_use_ssl: "{{ oslomsg_notify_use_ssl | default(False) }}"
zun_oslomsg_notify_userid: "{{ zun_oslomsg_rpc_userid }}"
zun_oslomsg_notify_password: "{{ zun_oslomsg_rpc_password }}"
zun_oslomsg_notify_vhost: "{{ zun_oslomsg_rpc_vhost }}"
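# To route notifications through a dedicated messaging cluster instead of
# reusing the RPC settings, override the notify variables; the address and
# vhost below are illustrative:
# zun_oslomsg_notify_servers: 172.29.236.110
# zun_oslomsg_notify_vhost: /zun_notify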

# If this is not set, then the playbook will try to guess it.
#zun_virt_type: kvm

## Zun Auth
zun_service_region: RegionOne
zun_service_project_name: "service"
zun_service_project_domain_id: default
zun_service_user_domain_id: default
zun_service_user_name: "zun"
zun_service_role_name: "admin"

## Zun Auth for kuryr
zun_kuryr_service_username: kuryr

## Keystone authentication middleware
zun_keystone_auth_plugin: password

## Zun v1
zun_service_name: zun
zun_service_type: container
zun_service_proto: http
zun_service_publicuri_proto: "{{ openstack_service_publicuri_proto | default(zun_service_proto) }}"
zun_service_adminuri_proto: "{{ openstack_service_adminuri_proto | default(zun_service_proto) }}"
zun_service_internaluri_proto: "{{ openstack_service_internaluri_proto | default(zun_service_proto) }}"
zun_service_port: 9517
zun_service_description: "Zun Compute Service"
zun_service_publicuri: "{{ zun_service_publicuri_proto }}://{{ external_lb_vip_address }}:{{ zun_service_port }}"
zun_service_publicurl: "{{ zun_service_publicuri }}"
zun_service_adminuri: "{{ zun_service_adminuri_proto }}//{{ internal_lb_vip_address }}:{{ zun_service_port }}"
zun_service_adminurl: "{{ zun_service_adminuri }}"
zun_service_internaluri: "{{ zun_service_internaluri_proto }}://{{ internal_lb_vip_address }}:{{ zun_service_port }}"
zun_service_internalurl: "{{ zun_service_internaluri }}"
zun_service_endpoint_type: internalURL

# If you want to regenerate the zun user's SSH keys on each run, set this var to True.
# Otherwise keys will be generated on the first run and not regenerated on each run.
zun_recreate_keys: False

## General Zun configuration
# If ``zun_osapi_compute_workers`` is unset the system will use half the number of available VCPUS to
# compute the number of api workers to use.
# zun_osapi_compute_workers: 16

# If ``zun_conductor_workers`` is unset the system will use half the number of available VCPUS to
# compute the number of conductor workers to use.
# zun_conductor_workers: 16

# If ``zun_metadata_workers`` is unset the system will use half the number of available VCPUS to
# compute the number of metadata workers to use.
# zun_metadata_workers: 16

## Cap the maximum number of threads / workers when a user value is unspecified.
zun_api_threads_max: 16
zun_api_threads: "{{ [[ansible_processor_vcpus|default(2) // 2, 1] | max, zun_api_threads_max] | min }}"
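# As a worked example: on a host with 8 vCPUs the expression above
# evaluates to min(max(8 // 2, 1), 16) = 4 API threads.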

zun_service_in_ldap: false

zun_scheduler_default_filters: >-
  AvailabilityZoneFilter,
  CPUFilter,
  RamFilter,
  ComputeFilter,
  DiskFilter
zun_scheduler_available_filters: zun.scheduler.filters.all_filters
zun_scheduler_driver: filter_scheduler

## Service Name-Group Mapping
zun_services:
  kuryr-libnetwork:
    group: zun_compute
    service_name: kuryr-libnetwork
    condition: "{{ inventory_hostname in groups['zun_compute'] }}"
    init_config_overrides: "{{ zun_kuryr_init_overrides }}"
    start_order: 3
    execstarts: "{{ zun_bin }}/kuryr-server --config-dir /etc/kuryr"
  zun-api:
    group: zun_api
    service_name: zun-api
    init_config_overrides: "{{ zun_api_init_overrides }}"
    start_order: 1
    execstarts: "{{ zun_bin }}/zun-api --config-dir /etc/zun"
  zun-compute:
    group: zun_compute
    service_name: zun-compute
    init_config_overrides: "{{ zun_compute_init_overrides }}"
    start_order: 4
    execstarts: "{{ zun_bin }}/zun-compute --config-dir /etc/zun"
  zun-wsproxy:
    group: zun_api
    service_name: zun-wsproxy
    init_config_overrides: "{{ zun_wsproxy_init_overrides }}"
    start_order: 2
    execstarts: "{{ zun_bin }}/zun-wsproxy --config-dir /etc/zun"

# Common pip packages
zun_pip_packages:
  - kuryr-libnetwork
  - oslo_rootwrap
  - osprofiler
  - python-memcached
  - python-zunclient
  - pymysql
  - zun

## (Qdrouterd) integration
# TODO(ansmith): Change structure when more backends will be supported
zun_oslomsg_amqp1_enabled: "{{ zun_oslomsg_rpc_transport == 'amqp' }}"

zun_optional_oslomsg_amqp1_pip_packages:
  - oslo.messaging[amqp1]

## Default service options used within all systemd unit files.
zun_service_defaults: {}

## Tunable overrides for services
zun_zun_conf_overrides: {}
zun_rootwrap_conf_overrides: {}
zun_kuryr_conf_overrides: {}
zun_docker_config_overrides: {}
zun_kuryr_config_overrides: {}
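# These dictionaries follow the config_template pattern used throughout
# OpenStack-Ansible: keys are INI sections and values are option maps that
# are layered over the rendered file. For example, to enable debug logging
# in zun.conf:
#
# zun_zun_conf_overrides:
#   DEFAULT:
#     debug: True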

## Tunable overrides for service unit files.
zun_api_paste_ini_overrides: {}
zun_api_init_overrides: {}
zun_wsproxy_init_overrides: {}
zun_compute_init_overrides: {}

## Default zun+kuryr options used within the system unit file.
# NOTE(cloudnull): These options are used to ensure that kuryr is always
#                  started after docker and has the proper capabilities.
zun_kuryr_init_overrides:
  Unit:
    After:
      ? network-online.target
      ? docker.service
    PartOf: docker.service
    Wants: network-online.target
  Service:
    CapabilityBoundingSet: CAP_NET_ADMIN
    Group: "{{ zun_kuryr_system_group_name }}"
    User: "{{ zun_kuryr_system_user_name }}"

Dependencies

This role needs pip >= 7.1 installed on the target host.

Example playbook

---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

- name: Installation and setup of Zun
  hosts: zun_all
  gather_facts: true
  any_errors_fatal: true
  remote_user: root
  vars_files:
    - common/test-vars.yml
  roles:
    - role: "os_zun"
      zun_oslomsg_rpc_password: secrete
      zun_galera_password: secrete
      zun_service_password: secrete
      zun_kuryr_service_password: secrete
      zun_developer_mode: true
      zun_service_publicuri: "http://{{ hostvars[groups['keystone_all'][0]]['ansible_host'] }}:9517"
      zun_service_adminuri: "http://{{ hostvars[groups['keystone_all'][0]]['ansible_host'] }}:9517"
      zun_service_internaluri: "http://{{ hostvars[groups['keystone_all'][0]]['ansible_host'] }}:9517"
      tags:
        - "os-zun"

External Restart Hooks

When the role performs a restart of the service, it notifies an Ansible handler named Manage LB, which is a no-op within this role. Other roles loaded before and after this role in the same playbook may implement handler listeners for Manage LB, allowing them to manage the load balancer endpoints responsible for sending traffic to the servers being restarted, for example by marking them in maintenance or active mode or by draining sessions. For an example implementation, see the ansible-haproxy-endpoints role used by the openstack-ansible project.
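A minimal sketch of such a handler listener, placed in an external role loaded alongside this one, might look like the following; the backend name and haproxy socket path are illustrative assumptions:

- name: Set zun API backends to maintenance
  listen: "Manage LB"
  haproxy:
    state: disabled
    host: "{{ inventory_hostname }}"
    backend: zun-api-back
    socket: /var/run/haproxy.stat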

Tags

This role supports two tags: zun-install and zun-config.

The zun-install tag can be used to install and upgrade.

The zun-config tag can be used to manage configuration.
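For example, to apply configuration changes only (the playbook name shown follows the usual openstack-ansible convention and may differ in your deployment):

openstack-ansible os-zun-install.yml --tags zun-config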

CPU platform compatibility

This role supports multiple CPU architecture types. At least one repo_build node must exist for each CPU type that is in use in the deployment.

Currently supported CPU architectures:
  • x86_64 / amd64
  • ppc64le

At this time, ppc64le is only supported for the Compute node type. It cannot be used to manage the OpenStack-Ansible management nodes.

Compute driver compatibility

This role supports multiple zun compute driver types. The following compute drivers are supported:

  • libvirt (default)
  • ironic
  • lxd (via zun-lxd)
  • powervm (via zun-powervm)

The driver type is automatically detected by this role for the following compute driver types:

  • libvirt (kvm / qemu)
  • powervm

Any mix and match of compute node types can be used for those platforms, except for ironic.

If using the lxd driver, the compute type must be specified using the zun_virt_type variable.

The zun_virt_type may be set in /etc/openstack_deploy/user_variables.yml, for example:

zun_virt_type: lxd

You can set zun_virt_type per host by using host_vars in /etc/openstack_deploy/openstack_user_config.yml. For example:

compute_hosts:
  aio1:
    ip: 172.29.236.100
    host_vars:
      zun_virt_type: lxd

If zun_virt_type is set in /etc/openstack_deploy/user_variables.yml, all nodes in the deployment are set to that hypervisor type. Setting zun_virt_type in both /etc/openstack_deploy/user_variables.yml and /etc/openstack_deploy/openstack_user_config.yml will always result in the value specified in /etc/openstack_deploy/user_variables.yml being set on all hosts.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.